In the recent Sam Altman interview, he said the plan should be to keep burning fossil fuels to power the data centers running AI, because that's the path to fusion. Just like LLMs can help devs code 100x faster, they can do that for nuclear engineers too.
Fusion seems short-sighted though. Antimatter is 100% efficient. I personally think Sam Altman should be looking into something like an Infinite Improbability Drive, as it would be a better fit here.
The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade, so they should get a pass on the "haha they're saying that because they want to pander to Trump" accusations.
> The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade
I'm as anti-AI as it gets - it has its uses, but it is still fundamentally built on outright sharting on all kinds of ethics, and that's just the training phase - the actual usage is filled with even more snake-oil salesmen and fraudsters, and that's not even to speak of all the human jobs that are going to be irreversibly replaced by AI.
But I think the AGI people are actually correct in their assumption - sometime in the next 10-20 years, the AGI milestone will be hit. Most probably not on an LLM basis, but it will be hit. And societies are absolutely not prepared to deal with the fallout, quite the contrary - particularly the current US administration is throwing us all in front of the multibillionaire wolves.
> sometime in the next 10-20 years, the AGI milestone will be hit
You seem quite confident for a person who doesn't offer any arguments for why it would happen at all, or why within two decades specifically, especially if you claim it won't be LLM-based.
Second, if AGI means that ChatGPT doesn't hallucinate and has a practically infinite context window, that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention. We'll adapt just like we adapted to using LLMs.
> You seem quite confident for a person who doesn't offer any arguments for why it would happen at all, or why within two decades specifically, especially if you claim it won't be LLM-based.
Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination, other than looking at how exponential curves work.
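For a rough sense of what "looking at how exponential curves work" means here, a minimal sketch - the 1x baseline and the 2-year doubling time are illustrative assumptions, not sourced figures:

```python
# Back-of-envelope sketch of the "exponential curves" argument.
# Assumption (illustrative, not sourced): today's global AI compute
# is 1x, and total capacity doubles every 2 years.
doubling_time_years = 2

for years in (5, 10, 20):
    growth = 2 ** (years / doubling_time_years)
    print(f"after {years:2d} years: ~{growth:,.0f}x today's compute")
# after  5 years: ~6x
# after 10 years: ~32x
# after 20 years: ~1,024x
```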
> that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention.
A decent-enough AI, especially an AGI, will displace a lot of white-collar workers - creatives are already getting hit hard, and that is with AI still not being able to paint realistic fingers - and the typical "paper pusher" jobs will also be replaced by AI. In "meatspace", aka tasks that are _for now_ not achievable by robots (say, because the haptic feedback is lacking), there has been pretty impressive research happening over the last few years. So a lot of blue-collar / trades jobs are going to go away as well once the mechanical bodies are linked up to an AI control system.
> We'll adapt just like we adapted to using LLMs.
Yeah, we just gave the finger to those affected. That's not adaptation, that's leaving people to be eaten by the wolves.
We're fast heading towards a select few megacorporations holding all the power when it comes to AI, with everyone else as serfs or outright slaves to them, instead of the old sci-fi dreams where humans would be able to chill out and relax all day.
> Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination, other than looking at how exponential curves work.
Only assuming there is something to be found apart from the imagination itself. We can easily imagine AGI, but that doesn't mean it exists - and even if it does, it doesn't mean we will discover it. By that logic - we want something and we spend a lot of compute resources on it - the success of a project like SETI would be guaranteed based on funding alone.
In other words, there is a huge gap between, on the one hand, something that we are sure can be done but requires a lot of resources, like a round trip to Mars, where we can even speculate that it can be done within 10-20 years (and still be wrong by a couple of decades), and, on the other, something we just hope to discover based on the amount of GPUs available, without the slightest clue of success other than funding and our desire for it to happen.
The thing is, for economic devastation you don't (necessarily) need an actually "general" intelligence that's able to do creative tasks - and the ethical question remains whether "creative humans" aren't just meat-based PRNGs.
A huge amount of public-service and corporate clerkwork is served well enough by an AI capable of understanding paperwork and applying a well-known set of rules to it. Take a building permit application: an AI replacing a public servant has to be able to actually read a construction plan, cross-reference it with building codes and zoning, and check the math (e.g. statics). We're not quite there yet, with an emphasis on the "yet" - in particular, at the moment even AI compositions with agents calling specialized models can't reliably detect when they don't have enough input or knowledge, and just hallucinate.
But once this fundamental issue is solved, it's game over for clerkwork - even assuming the Pareto principle (aka, the first 80% are easy, only the remaining 20% are tough), that will cut 80% of employees and, with them, the managerial layers above. In the US alone, about 20 million people work in public service. Take 50% of that (to account for jobs that need a physical human, such as security guards, police and whatnot), which gives 10 million clerkwork jobs; take 80% of that and you get 8 million unemployed people, in government alone. There's no way any social safety net can absorb that much of an impact - and the private sector employs about 140 million people; do the same calculation for that number and you get 56 million people out of a job.
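Spelled out as a quick sketch - these are the same rough numbers as above, the commenter's assumptions rather than sourced statistics:

```python
# The back-of-envelope job-loss estimate from the comment above.
# All inputs are rough assumptions, not sourced stats.
public_sector_jobs  = 20_000_000   # US public service, rough figure
private_sector_jobs = 140_000_000  # US private sector, rough figure
desk_share  = 0.50  # share that is clerkwork (no physical human needed)
automatable = 0.80  # the "easy 80%" per the Pareto assumption

public_hit  = public_sector_jobs  * desk_share * automatable
private_hit = private_sector_jobs * desk_share * automatable
print(f"government clerkwork displaced: {public_hit:,.0f}")   # 8,000,000
print(f"private sector displaced:       {private_hit:,.0f}")  # 56,000,000
```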
That is what scares me, because other than the "AI doomers", no one on the Democratic side seems to even have that issue on their radar, and the Republicans want to axe all regulations on AI.
> without the slightest clue of success other than funding and our desire for it to happen
The problem is, money is able to brute-force progress. And there is a lot of money floating around in AI these days, enough to actually make progress.
Ah, I see your point. We've seen how it plays out in places where greedy entrepreneurs brought in waves of immigrants to do sub-minimum-wage work, and what effects that had on society, so I agree about the consequences.
However, at least for LLMs, progress has slowed down considerably, so we're now at the point where they are a useful extension of a toolkit and not a replacement. Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt. (With a huge disclaimer: if history has taught me anything, it is that all predictions are about as useful as a coin toss.)
> Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt.
Yeah, but for that, politicians need to prepare as well, and they don't. All that many of today's politicians care about is getting reelected, or at the very least lining their pockets. In Germany, we call this "nach uns die Sintflut" [1], roughly translated as "after us, the flood".
Here in Germany, we at least have set up programs to phase out coal over decades, but that was for a few hundred thousand workers - not even close to the scale that's looming over us with AI.