I see this as well. Part of the appeal of any crafting hobby is that it doesn’t matter and you can just mess around, but the flip side is that nobody is breathing down your neck to get it done and you can take the time to realize your vision.
The conditions are either an IPO or achieving AGI. I’d be curious to know how the contract defines AGI. If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer, insofar as we will have AGI when the markets decide we have AGI and not when some set of philosophical criteria seem to be satisfied.
> If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer
So if they hit $100 billion in annual profit then it's AGI, but if Kellogg's launches "FrostedFlakes-GPT" and steals 30% of the market, it's no longer AGI at $70 billion?
I am much more interested in how headcounts compare to 2019 than to 2025 (let alone 2022). Certainly, this is not a comfort to anyone who is losing their job. But I don’t remember anyone panicking about an unemployment crisis pre-pandemic. A lot of people are getting their lottery ticket taken away, which is less than ideal, but we’ve got a long way to go before breadlines.
They may be victims of their own success here. At a certain point, if you can consistently make perfect images indistinguishable from reality, you’re done improving. All that’s left to do is make it faster or cheaper or better-aligned - but these aren’t going to show up readily in ways the typical user can understand.
One thing I notice is that the voices in video AI are absolute hogwash. Voice AI is great, video AI is great, but AI videos where humans speak give me the feel of really poorly dubbed foreign TV - the timing is not quite right and the facial expressions don’t always match up with the words being spoken.
When OpenAI starts requiring a payment, or showing an ad before it starts translating, will they continue? Or will they use the Google Translate app, which can do this locally? (Or for that matter Gemini or Grok or whatever?)
That's a fair point. But in most markets you don't have a half dozen competitors jumping down your throat trying to give you the same service ad-free. Netflix can introduce ads without a wave of cancellations because you can't watch their content anywhere else.
Netflix has a moat in the form of IP licensing restrictions.
Google and YouTube are preinstalled everywhere. Instagram is like 10 minutes old and has a major competitor in TikTok that they had to have eliminated/captured by the US government.
People wouldn't stay with Netflix if there were a cheap, legal alternative with the same content library.
To go vertical they'd need to illustrate the value-add, a problem the vertical competitors already face. Why use Claude for Accountants at $300/month when regular Claude will do the same thing for much less? The stock answer is that Claude for Accountants keeps your data more secure and doesn't train on it. But a) I think the enterprise customer is much less likely to trust a model creator not to stick its hand in the cookie jar than a middleman who needs that trust to survive, and b) the vertical competitors typically don't use the absolute most up-to-date models in their products anyway, so why not just go open-source and run everything in-house? 6 months is a long time in tech, but it's the blink of an eye in most white-collar professions.
Once the majority of work at a company can be done by AI, Anthropic has an alternative revenue stream to selling AIs to that company: competing with that company directly with a fully integrated AI system. There are of course many barriers to entry and various incumbent advantages, but it's possible to see a world in which the company selling the AI has a huge advantage too.
The point is that in this hypothetical you can get public access to Claude Opus 6, but they internally use Claude Opus 7 (Accounting Finetune) which is both cheaper to operate and higher IQ.
So they (or their wholly owned subsidiary) can sell accounting services cheaper than anyone on the outside.
Regarding the diffusion/distillation time, I assume it gets harder to distill in a world where frontier labs don't give API access to their newest models.
I had this same thought. It seems fairly easy to just give off a strong false signal. If you don't want anyone to know that you live in Finland, make a point of constantly mentioning how much you enjoy living in Peru.
At this point the AI labs would pretty much have to form an illegal price-fixing cartel in order to jack prices up; they've been competing to drive down prices for so long.
They'd have to get the Chinese AI labs to go along with that price fixing too.
There's only so much of it to spend before they run out.
I don't pretend to have detailed domain knowledge here, and I may have seen other people's GenAI output rather than reality*, but the numbers people are throwing around for this stuff sum to trillions of USD, slightly more than other claims I've seen (same caveat: perhaps also GenAI output*) about the total supply of money in the global venture capital markets.
* I miss the days when I could make a decent guess as to which websites were reliable and which were BS
For the thousandth time - they. make. a. profit. Inference margin is over 60%, today.
They are spending that money training ever-larger models, so they are cashflow negative, but under almost any sane GAAP treatment that does not let you write off all R&D (the capital cost of model training) upfront, they are profitable.
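To make the accounting point concrete, here's a minimal sketch with made-up numbers (only the ~60% margin figure comes from the comment above; everything else is a hypothetical illustration, not anyone's actual financials):

    # Hypothetical illustration of expensing vs. capitalizing training costs.
    # Every number is invented for the example except the ~60% inference margin.
    inference_revenue = 10.0   # $B/year of inference revenue (made up)
    inference_margin = 0.60    # gross margin on serving, per the claim above
    training_spend = 8.0       # $B spent this year training the next model (made up)
    useful_life_years = 4      # assumed amortization period for a trained model

    gross_profit = inference_revenue * inference_margin   # 6.0

    # Treatment 1: expense all training R&D the year it happens.
    expensed_result = gross_profit - training_spend       # -2.0 -> "they lose money"

    # Treatment 2: capitalize the training run and amortize it over its useful life.
    amortization = training_spend / useful_life_years     # 2.0 per year
    capitalized_result = gross_profit - amortization      # +4.0 -> "they make a profit"

    print(expensed_result, capitalized_result)

Same cash leaves the building either way; the disagreement in this thread is mostly about which of the two treatments better reflects reality.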
Should this matter to you? Only if you're making financial decisions that assume that somehow one day the "jig will be up" - i.e. please don't short these stocks when they float, or at least do so very judiciously.
It always makes me laugh when people say this, because it's so utterly pointless. That percentage assumes literally no other costs exist besides the direct inference cost.
Even if they quit trying to make better models today, there is a mountain of recurring costs that will never go away: retraining the models on new data, replacing/upgrading old hardware, enormous infrastructure costs to maintain the actual platforms, data collection costs, payroll...
I'm not aware of a single player in the LLM space actually turning a profit, even if they're only providing inference.
Listen carefully to Dario’s public statements; you could just pull his most recent Dwarkesh interview for example - worth a listen in any event.
He is guilty of an engineer’s use of the word profit when he says “we never made a profit.” But he always follows up with the real story — “every model we trained has returned 2-4x in free cashflow, counting R&D and inference”
You could say “the industry is engaged in possibly ruinous competition training ever-larger models and sucking cash to do so, and in fact if anyone stops, they’ll lose forever” and those statements might be true, but to be clear the fact that these companies are posting a loss right now is a FEATURE of how R&D works, one that lets them spend more on a race. It’s not tied to the sort of financial reality accrual accounting is designed to talk about.
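A toy version of the "every model pays for itself, yet the books show a loss" dynamic, assuming the 2-4x claim above holds and using invented figures for training cost and growth:

    # Toy cohort model: each model returns a multiple of its training cost over
    # its lifetime, but the next run costs several times more, so the company
    # can post a loss every year while every individual model is profitable.
    # All figures are invented for illustration.
    training_cost = 1.0    # $B for the first model (made up)
    cost_growth = 3.0      # each successive model costs ~3x the last (made up)
    cash_multiple = 3.0    # lifetime cash returned per $ of training (per the 2-4x claim)
    payback_years = 2      # assume the return arrives evenly over two years (made up)

    cash_in = {}  # year -> cash coming back from earlier models
    for year in range(5):
        spend = training_cost * cost_growth**year
        # Schedule this model's lifetime return over the following years.
        for offset in range(1, payback_years + 1):
            cash_in[year + offset] = cash_in.get(year + offset, 0.0) + spend * cash_multiple / payback_years
        net = cash_in.get(year, 0.0) - spend
        print(f"year {year}: train {spend:5.1f}, cash back {cash_in.get(year, 0.0):5.1f}, net {net:+6.1f}")

Each vintage returns 3x its own cost, yet every year's net cashflow is negative and the hole keeps growing; stop training and the losses stop too. That's the shape of the argument, not a claim about any lab's actual numbers.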
While true, the obvious counterpoint is that open-weight models exist, that high-end desktops can run them, and that said hardware doesn't yet appear to have reached the end of the road for improvements in either purchase or operating cost. Even if it had, the moment people stop having VC money to constantly churn out expensive training runs for new models, it suddenly makes sense to etch the weights of whatever is SOTA at that point onto a silicon wafer and run it as a much more efficient hardware circuit, without the overhead of software doing the same thing on general-purpose hardware.
Even if the bubble burst while I was writing this comment, even if every single current LLM provider goes the way of pets.com, AltaVista, and GeoCities, that can all happen without ending vibe coding.
Keep in mind that they make a large profit on inference. Not enough to make up for losses on training, but that won't be a problem for the Chinese labs, which will just steal their weights.
Given that they built their businesses on widespread copyright infringement and licence violations, I couldn't give less of a shit about people turning around and "stealing" from them.