The economics prize is not an actual Nobel prize, but something "inspired" by the Nobel prize. It's little more than a tool to push neoliberal policies onto the public, with 34 of the 56 winners tied to the Chicago School of Economics.
How much good work have the people reading this thread had to trash because it didn't align with Q3 OKRs? How much time and energy did they put into garbage solutions because they had to hit a KPI by the last day of June?
This is a great point. We work with CERN on a project, and we're all volunteers, but we work on something we need, and contribute back to it wholeheartedly.
At the end of the day, CERN wants to hand project governance to someone else in the group, because they don't want to be the BDFL of anything they help create. It allows them to pursue new and shiny things and build them.
There's an air of "it's done when it's done" and "we need this, so we should build it without watering it down", so projects move at a steady pace, but the code and product are always as high quality as possible.
Yes, a day job is easier, because a day job has a very real cap on your remuneration. Work for a company, create a product, have that product generate $50 million p/a in sales? Cool, you're still making the same bi-weekly $3000.
This is literally by design, and if you don't see it, what's going on?
Work for a company, create a product, have that product totally flop and all the investors lose their money? Cool, you're still making the same bi-weekly $3000.
Just to continue the analogy, an employee's monetary downside is limited to the amount of their investment, which is $0, modulo ESPPs and such. Investors can be leveraged and end up losing a multiple > 1 of their initial investment.
It's interesting to watch the arc of Microsoft being the big baddie, transitioning to being a good dude that the open source community has embraced, back to being horrible.
Except they've gone from being horrible to developers, to being horrible to normal, non-technical users.
What you call "AI" is generally named AGI. LLM's are alredy a kind of AI, not just generic enough to fully replace all humans.
We don't know if full AGI can be built using just current technology (like transformers) given enough scale, or if one or more fundamental breakthroughs are needed beyond scale alone.
My hypothesis has always been that AGI will arrive roughly when compute power and model size match the human brain. That means models of about 100 trillion parameters (roughly the number of synapses in a human brain), which is not that far away now.
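To make "not that far away" a bit more concrete, here's a rough back-of-envelope sketch (my own assumed figures, not anything from the thread): what it would take just to hold 100 trillion parameters in memory at different precisions, assuming hypothetical 80 GB accelerators.

```python
# Back-of-envelope sketch: memory needed just to store a 100-trillion-parameter
# model at various precisions, and how many 80 GB accelerators that implies
# for the weights alone (ignoring activations, optimizer state, redundancy).
# All figures are illustrative assumptions, not measured numbers.

PARAMS = 100e12          # ~100 trillion parameters
GPU_MEMORY_GB = 80       # assumed memory per accelerator

BYTES_PER_PARAM = {"fp32": 4, "fp16/bf16": 2, "int8": 1, "int4": 0.5}

for precision, bytes_per in BYTES_PER_PARAM.items():
    total_gb = PARAMS * bytes_per / 1e9
    gpus = total_gb / GPU_MEMORY_GB
    print(f"{precision:>9}: {total_gb / 1e3:6.0f} TB of weights, ~{gpus:,.0f} GPUs just to hold them")
```

Even at 4-bit precision that's tens of terabytes of weights and hundreds of accelerators before any training compute is counted, which is why "not that far away" still means a serious step up from today's frontier models.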
Actually, we really don't. When GPT-3.5 was released, it was a massive surprise to many, exactly because they didn't believe that simply scaling up transformers would end up with something like that.
Now, using transformers doesn't mean they have to be assembled like LLMs. There are other ways to stitch them together to solve a lot of other problems.
We may very well have the basic types of Lego pieces needed to build AGI. We won't know until we try to build all the brain's capacities into a model of a few hundred trillion parameters.
And if we actually lack some types of pieces, they may even be available by then.