OpenAI's ChatGPT alone hit 500 million weekly active users in March; apparently they're closer to 800 million now. I guess they're still working out the monetization strategy, but in the worst case, just think of how Google makes its revenue off search.
The first one does, then prompt caching kicks in: it turns out many people ask similar questions. People who frequently ask complicated questions might have to pay extra; we can already see this playing out.
Also, most ChatGPT users have a “personalization” prefix in the system prompt (which contains things like the date/time), which would break caching of the actual user query.
The prompt has to be precisely the same for that to work (and of course now you'd need an embedding hashmap for fuzzy matching, which is its own somewhat advanced problem). I doubt they do that, especially given the things I've heard from API users.
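To make the exact-match point concrete, here is a minimal sketch of prefix-style prompt caching, keyed on a hash of the full prompt. All names are illustrative, not any provider's actual implementation; it just shows why a per-user system prompt containing the date/time defeats the cache.

```python
import hashlib

# Exact-match prompt cache: the key is a hash of the full prompt text,
# so ANY variation (e.g. a personalized system prompt with a timestamp)
# produces a different key and forces a fresh model call.
cache = {}

def cache_key(system_prompt: str, user_query: str) -> str:
    return hashlib.sha256((system_prompt + "\x00" + user_query).encode()).hexdigest()

def answer(system_prompt, user_query, run_model):
    key = cache_key(system_prompt, user_query)
    if key in cache:
        return cache[key], True   # cache hit: no model call needed
    result = run_model(user_query)
    cache[key] = result
    return result, False

# Stand-in for an expensive model call; records every invocation.
calls = []
fake_model = lambda q: calls.append(q) or f"answer to: {q}"

# Identical prompt twice: the second call hits the cache.
a1, hit1 = answer("You are helpful.", "What is HN?", fake_model)
a2, hit2 = answer("You are helpful.", "What is HN?", fake_model)
# A personalized prefix changes the key, so the cache misses.
a3, hit3 = answer("You are helpful. Date: 2025-01-01.", "What is HN?", fake_model)
```

Semantic caching (matching similar-but-not-identical queries) would instead need embedding lookups with nearest-neighbor search, which is the "somewhat advanced problem" mentioned above.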
In the recent Sam Altman interview he said the plan should be to keep burning fossil fuels to power the data centers running AI, because that’s the path to fusion. Just like LLMs can help devs code 100x faster, they can do that for nuclear engineers too.
Fusion seems short-sighted though. Antimatter is 100% efficient. I personally think Sam Altman should be looking into something like an Infinite Improbability Drive, as it would be a better fit here.
The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade, so they should get a pass on the "haha they're saying that because they want to pander to Trump" accusations.
> The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade
I'm about as anti-AI as it gets - it has its uses, but it is still fundamentally built on outright trampling all kinds of ethics, and that's just the training phase. The actual usage is filled with even more snake-oil salesmen and fraudsters, and that's not to speak of all the human jobs that are going to be irreversibly replaced by AI.
But I think the AGI people are actually correct in their assumption - sometime in the next 10-20 years, the AGI milestone will be hit. Most probably not on an LLM basis, but it will be hit. And societies are absolutely not prepared to deal with the fallout, quite the contrary - the current US administration in particular is throwing us all in front of the multibillionaire wolves.
> sometime in the next 10-20 years, the AGI milestone will be hit
You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.
Second, if AGI means that ChatGPT doesn't hallucinate and has a practically infinite context window, that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention. We'll adapt just like we adapted to using LLMs.
> You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.
Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination beyond looking at how exponential curves work.
> that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention.
A decent-enough AI, especially an AGI, will displace a lot of white-collar workers - creatives are already getting hit hard, and that's with AI still not being able to paint realistic fingers - and the typical "paper pusher" jobs will also be replaced by AI. In "meatspace", i.e. robots doing tasks that are _for now_ not achievable by robots (say, because the haptic feedback is lacking), there has been pretty impressive research over the last few years. So a lot of blue-collar/trades jobs are going to go away as well once those mechanical bodies are linked up to an AI control system.
> We'll adapt just like we adapted to using LLMs.
Yeah, we just gave the finger to those affected. That's not adaptation, that's leaving people to be eaten by the wolves.
We're fast heading for a select few megacorporations holding all the power when it comes to AI, and everyone else will be serfs or outright slaves to them instead of the old scifi dreams where humans would be able to chill out and relax all day.
> Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination beyond looking at how exponential curves work.
Only assuming there is something to be found apart from the imagination itself. We can imagine AGI easily, but that doesn't mean it exists - and even if it does, that we will discover it. By that logic - we want something and we spend a lot of compute resources on it - the success of a project like SETI would be guaranteed based on funding alone.
In other words, there is a huge gap between, on the one hand, something we are sure can be done but that requires a lot of resources - like a round trip to Mars, where we can even speculate it will happen within 10-20 years (and still be wrong by a couple of decades) - and, on the other, something we merely hope to discover based on the number of GPUs available, without the slightest clue of success other than funding and our desire for it to happen.
The thing is, for economic devastation you don't (necessarily) need an actually "general" intelligence that's able to do creative tasks - and the ethical question remains whether "creative humans" aren't just a meat-based PRNG.
A huge amount of public-service and corporate clerk work could be handled by an AI capable of understanding paperwork and applying a well-known set of rules to it. Take a building permit application: an AI replacing a public servant has to be able to actually read a construction plan, cross-reference it with building codes and zoning, and check the math (e.g. the statics). We're not quite there yet, with an emphasis on the yet - in particular, at the moment even AI compositions with agents calling specialized models can't reliably detect when they don't have enough input or knowledge, and just hallucinate.
But once this fundamental issue is solved, it's game over for clerk work. Even assuming the Pareto principle holds (i.e. the first 80% is easy, only the remaining 20% is tough), that cuts 80% of employees and, with them, the managerial layers above. In the US alone, about 20 million people work in public service. Take 50% of that (to account for jobs that need a physical human, such as security guards, police and whatnot), and you get 10 million clerk-work jobs; take 80% of that and you have 8 million unemployed people, in government alone. There's no way any social safety net can absorb that much of an impact - and as said, that's government alone; the private sector employs about 140 million people, so run the same calculation and you get 56 million people out of a job.
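The back-of-envelope estimate above can be written out explicitly. This is purely a check of the comment's own stated assumptions (50% physical-presence jobs, 80% of the rest automatable), not a forecast:

```python
# Reproduce the comment's back-of-envelope math: halve the workforce for
# jobs needing a physical human, then cut 80% of the remaining clerk work.
def displaced(workforce_millions, physical_share=0.5, automatable_share=0.8):
    clerical = workforce_millions * (1 - physical_share)  # clerk-work jobs
    return clerical * automatable_share                   # jobs automated away

gov = displaced(20)       # ~20M US public-service workers -> 8M displaced
private = displaced(140)  # ~140M private-sector workers   -> 56M displaced
```

The headline numbers are extremely sensitive to both share parameters, which is exactly why the 80/20 assumption matters so much to the argument.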
That is what scares me, because other than the "AI doomers" no one on the Democratic side seems to have that issue even on their radar, and the Republicans want to axe all regulations on AI.
> without the slightest clue of success other than funding and our desire for it to happen
The problem is, money is able to brute-force progress. And there is a lot of money floating around in AI these days, enough to actually make progress.
Ah, I see your point, and I agree. We've seen how it plays out in places where greedy entrepreneurs brought in waves of immigrants to do sub-minimum-wage work, and what effects that had on society, so I agree about the consequences.
However, at least for LLMs, progress has slowed considerably, so we're now at the point where they are a useful extension of a toolkit and not a replacement. Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt. (With a huge disclaimer: if history has taught me anything, it is that all predictions are about as useful as a coin toss.)
> Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt.
Yeah, but for that, politicians need to prepare as well, and they don't. All that many of today's politicians care about is getting reelected, or at the very least lining their pockets. In Germany, we call this "nach uns die Sintflut" [1], roughly translated as "after us, the flood may come".
Here in Germany, we at least have set up programs to phase out coal over decades, but that was for a few hundred thousand workers - not even close to the scale that's looming over us with AI.
And who will buy the products and services of these "employers" when nobody has a job?
See, you can keep adding middle layers, but eventually you'll find there's no one with any money at the bottom of the pyramid to prop this whole thing up.
When the consumer driven economy has no critical mass of consumers, the whole model kinda goes belly up, no?
To be fair to an English speaker reading your name from paper: some native English speakers are taught to read by recognizing words by their first letter and their shape, and skipping the word to later fill in the blanks when they don't recognize the word. The lady may have simply never been taught how to sound out unfamiliar letter combinations, and may have been trying her best to make sense of the unrecognizable mess of letters she saw in front of her.
I always felt that many native English speakers can't really parse a text properly. They seem to react to certain keywords. When the text says something they didn't expect, they often miss it or get confused.
I thought it might be a side effect of being monolingual and hence having a less explicit understanding of language, but seeing how they are taught to read, things make perfect sense.
It is crazy how much staying power bogus science has in education. It reminds me how the idea of individual learning styles is still popular even though it lacks empirical evidence.
I challenge you to go to China and see how people make fun of you when you are unable to correctly pronounce half their words. Not because of stupidity, but because of a mix of not hearing the subtle difference ("but that's exactly what you said!") and being unable to accurately reproduce a sound that you hear.
As kids, we have the ability to make lots of noises. Kids learning languages keep those skills alive. Over time, we lose that ability for sounds that we don't use regularly, and re-acquiring that capability is really hard.
Eh, they get their revenge. As any Australian of a certain age can tell you, TelSTRA could not get it right, for any value of right, without expending an effort equivalent to moving a mountain.
* Step 2: the design is "tested" with the users; later we find out the users really had no idea what was going on because they weren't paying attention. Then the real product is delivered, and they are shocked that the changes were made behind their back and without their input.
Doesn't look like they are paywalling existing features, just some new AI-backed functionality. Considering how much it costs to run those sorts of features, I can understand it.
Plus, there are lots of alternative apps that are free and easy to download and install.
Delphi is still reasonably popular in some niches. It has a powerful but easy WYSIWYG GUI builder and is close to the hardware without being C++, making it decently popular for tooling for industrial hardware.
Though their website makes me suspect they have given up trying to find new customers and are just building new features for the customers they already have.
I started with version 1.0. Such an incredibly elegant language. I chose it so as not to feed the Microsoft monopoly, but in the end, and after the most incredible f*ck-ups by Borland, VB won. I used D7 for many years. Now I use it only as a hobby language.
What do you mean, Microsoft bought it? It was never bought by MS.
Same boat, with a chronic illness - not fatal, but no cure either. It gets tiring wading through all the snake-oil salesmen selling false hope. And it isn't even them directly; it's my older family members who will hear about it and come to me with "have you tried…"