> even today, ignoring the reality of untapped revenue streams like ChatGPT's 800M advertising eyeballs.
Respectfully, the idea of sticking ads in LLMs is just copium. It's never going to work.
LLMs' unfixable tendency to hallucinate makes this an infinite lawsuit machine. Either the regulators will tear OpenAI to shreds over it, or the advertisers seeing their trademarks hijacked by scammers will do it in their stead. LLMs simply cannot be controlled tightly enough for this idea to make sense, even with RAG.
And if we step away from the idea of putting ads in the LLM response, we're left with "stick a banner ad on chatgpt dot com". The exact same scheme as the Dotcom Bubble. Worked real well that time, I hear. "Stick a banner ad on it" was a shit idea in 2000. It's not going to bail out AI in 2025.
The original content that LLMs paraphrase is itself struggling to survive on ads. The idea that you can steal all those impressions through a service that is orders and orders of magnitude more expensive, and somehow turn a profit on those very same ads, is ludicrous.
While it didn't work in 2000, "just stick ads on it" does work for Google and Meta, driving over $400B in combined annual advertising revenue. Their model, today, is far more relevant than calling back to antiquated banner advertising models from 25 years ago; you'll have to convince me that Google and Meta's model cannot work for OpenAI, which you have not adequately done.
I will point out that this is contentious: both of these companies are subject to regulatory investigations over their monopolistic practices, and they are pretty much the only companies for which this model is profitable.
> Their model, today, is far more relevant than calling back to antiquated banner advertising models from 25 years ago
Hardly. It's fundamentally the same model: content with an advertisement next to it. Whether that is a literal banner ad or a disguised search result, none of the form factors are new.
For all the advances in ad-tech, CPMs are still the same old dogshit they were shortly after the dotcom bubble, looking better only because of inflation.
> you'll have to convince me that Google and Meta's model cannot work for OpenAI, which you have not adequately done.
That's the "orders and orders of magnitude more expensive" part. Neither Google Search nor Facebook are that profitable per single ad, they make it up in volume. LLMs are simply more expensive to operate than a search engine or a glorified web forum. Can OpenAI cut down their opex and amortized-cap costs down to less than the half-penny they'd extract with good CPMs? Probably not.
But there's a deeper layer. The "fund AI with ads" model paints a scenario in which OpenAI would have to overtake Google: they need the ad-tech monopoly to push CPMs up, or you can knock that half-penny down another order of magnitude.
This is unlikely. Making ChatGPT work as a search engine requires all the infrastructure of a search engine, plus LLM inference on top; ipso facto it is always more expensive than a standalone search engine.
Yet at the same time, people only care about ChatGPT as search because Google Search is shit now. Were ChatGPT ever to become a serious threat to Google, Google can simply turn off the search-enshittifier for a bit to wipe out ChatGPT's market share, then push them into bankruptcy by drawing CPMs down below OpenAI's sustainability level.
>That's the "orders and orders of magnitude more expensive" part.
It's not orders of magnitude more expensive. If we take the most recent half-year report, they need a per-quarter ARPU of $8 from their free users to be profitable with billions to spare. That is low. This is not some herculean task. They don't need to 'overtake Google' or whatever. They literally don't need to change anything.
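For scale, here's what that claim implies, using the 800M weekly-active-user figure cited elsewhere in this thread (the inputs are from the thread; only the multiplication is mine):

    free_users = 800e6        # weekly active users, per this thread
    arpu_quarter = 8.00       # claimed break-even ARPU, $/free user/quarter
    implied_annual = free_users * arpu_quarter * 4
    print(f"${implied_annual/1e9:.1f}B per year")  # ~$25.6B of ad revenue

Whether ~$26B a year of new ad revenue is "low" is exactly what the rest of this thread argues about.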
You can't average out the userbase like that because the individual usage of the service varies wildly, and advertising revenue is directly tied to the amount of usage.
Especially because OpenAI highly inflates user figures.
> It's not orders of magnitude more expensive
This too is skewed by averaging with users who barely use the service.
>You can't average out the userbase like that because the individual usage of the service varies wildly
Yes you can. This is how Meta, Google et al. report their numbers. Obviously I'm not expecting each user to bring in exactly $8. The point is that the value they need to extract from their free users to be profitable is very small and very achievable. You and many people here have completely incorrect notions of how expensive inference is. Inference is cheap, and has been for some time now.
>and advertising revenue is directly tied to the amount of usage.
OpenAI, with 800M weekly active users, processes 2.6B messages per day. Google, with ~5 billion users, processes ~14 billion searches per day.
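Running the per-user arithmetic on those numbers (the totals are as stated above; the division is the only thing added):

    msgs_per_user = 2.6e9 / 800e6      # ~3.3 ChatGPT messages/user/day
    searches_per_user = 14e9 / 5e9     # ~2.8 Google searches/user/day
    print(f"{msgs_per_user:.1f} vs {searches_per_user:.1f}")

Per-user intensity comes out in the same ballpark, which is the point being made against the "you can't average" objection.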
>This too is skewed by averaging with users who barely use the service.
No it's not. Inference is just not that expensive. Model costs have literally crashed several orders of magnitude in the last few years. Sure, in 2020 this would be a very serious concern. In 2025, it just isn't.
My point is that for these purposes, users are not fungible. You can't just divide the cost-revenue equation by the number of users N on both sides.
> No it's not.
If you add a pile of fictitious users to the user count, the apparent average cost per user drops, as the fictitious users do not use the service and do not add their own costs. This lowers the apparent amount of per-user revenue you need.
However, as fictitious users also do not generate revenue, this is all smoke and mirrors.
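A toy example of the mechanism being described, with all numbers invented purely for illustration:

    real_users   = 100e6               # accounts that actually use the service
    padded_users = 800e6               # headline count including inactive accounts
    total_cost   = real_users * 1.00   # assume $1/quarter serving cost per real user
    print(total_cost / real_users)     # $1.00  true cost per active user
    print(total_cost / padded_users)   # $0.125 apparent cost per headline user

The apparent break-even ARPU falls 8x, but the revenue side shrinks by exactly the same factor, because inactive accounts see no ads.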
>My point is that for these purposes, users are not fungible. You can't just divide the cost-revenue equation by the number of users N on both sides.
Again, yes you can, if you're simply trying to see the relative level of value you need to extract from your users. It's not a complicated idea. $8 is well below what Google and Meta report. You were wrong. They don't need to reach a high bar. End of story.
>If you add a pile of fictitious users to the user count, the apparent average cost per user drops, as the fictitious users do not use the service and do not add their own costs. This lowers the apparent amount of per-user revenue you need.
As always, nonsensical hypotheticals are just that. Nonsensical.
Not only can the users you're talking about not exist in reality; the numbers being thrown around are literally based on their weekly active users.
There are multiple ways to do the computation. All of them will show LLMs having unit economics that are at least an order of magnitude better than search engines for the search engine use case[0]. Not multiple orders of magnitude worse like you claim. You're off by at least three orders of magnitude.
Ad-supported LLM Chatbots will be one of the most lucrative businesses ever.
> Looking at where America is right now, it seems headed for a downfall.
It's been happening for years now. 'America', the idea, died the moment the 2nd plane hit the towers.
People saw that happen, and were so fearful they immediately opened their hearts to fascism.
2025 is merely the year when all of Bush's fascist policies & Obama/Biden's failure to clean them up metastasized into the overt fascism that hurts everyone in a country & eventually destroys the country itself.
Part of what's concerning here is that the deals are conditional. OpenAI must meet XYZ conditions before cash/stock/etc is transferred, and the conditions are pretty hard to meet.
The money between OpenAI, Nvidia, Oracle, AMD is not circulating. There is no cashflow, only future commitments that may (and quite likely will) collapse. Yet the stock market & media react as if it's a sure thing. Even in the criticisms of these deals, the hype is affirmed.
This is the same problem as Enron's accounting, minus the fraud. (No need for fraudulent accounting when people simply don't read the fine print.)
It seemed quite the opposite: that you knew exactly what you were saying, but were using weasel words to imply ICE was acting above board without stating any falsehoods that might be challenged.
The rhetorical trick of my reply being that it forces you to either address the meat of the subject, or leave the statement uncontested.
But congratulations, conceding the argument rather than outing yourself as a fascist was the better choice.
> I also don't get why they're committing so much to the future; are they that sure of the quality of their products and the demand for them?
It's one big game of musical chairs, and everyone can hear the phonograph slowing down.
OpenAI is making these desperation plays because they've run out of hype. GPT-5 "bombed", the wider public no longer believes AI is going to keep getting exponentially better. They're out of options to generate new hype beyond spewing ever larger numbers into the news cycle.
AMD is making this desperation play because soon, once the AI bubble pops, there'll be a flood of cheap unused GPUs & GPU compute. Nobody's going to be buying their new cards when you can get Nvidia's prior gen for pennies on the dollar.
I find it funny how people say GPT-5 "bombed". I noticed a significant improvement in maths and coding with GPT-5. To quantify where I've found the models useful:
- GPT 3.5: Good for finding reference terms. I could not trust anything it said, but it could help me find some general terms in fields I was unfamiliar with.
- GPT 4: Good for cached, obscure knowledge. I generally could trust the stuff it said to be true, but none of its logic or conclusions.
- GPT 4.5: Good for reference proofs/code. I cannot trust its proofs or code, but I can get a decent outline for writing my own.
- GPT 5: Good for directed thinking. I cannot trust it to come up with the best solution on its own, but if I tell it what I'm working on, it's pretty decent at using all the tricks in its repertoire (across many fields) to get me a correct solution. I can trust its proofs or code to be about as correct as my own. My main issue is that I cannot trust it to point out confusion or to ask me, "is this actually the problem we should be solving here?" My guess is this is mostly a byproduct of shallow human feedback, rather than an actual issue with intelligence (as it will often ask me, after spending a bunch of computation, if I want to try something mildly different).
For me, GPT 5 is way more useful than the previous models, because I don't have a lot of paper-pushing problems I'm trying to solve. My guess is the wider public may disagree because it's hard to tell the difference between something better at the task than you, and something much better.
I used scare quotes for a reason. It didn't "bomb" in the sense of failing [insert metric], it bombed in the sense that OpenAI needed it to generate exponentially more hype and it just didn't. (And on a lesser level, GPT-5 was supposed to cut OpenAI's costs but has failed to do so)
> I can trust its proofs or code to be about as correct as my own.
I have little to say about this, as I find such claims to be broadly irreplicable. GPT-5 scores better on the metrics, but still has the same "classes" of faults.
Gemini 2.5 was the first breakthrough model; people didn't know how to use it, but it's incredibly powerful. GPT-5 is the second true breakthrough model: its ability to deal with math/logic/etc. complexity and its depth of knowledge in engineering/science is amazing. Every time I talk to someone who stans Claude and is down on GPT-5, I know they're building derivative CRUD apps with simple business logic in Python/Typescript.
On the flip side of it (and where most institutional investors are mentally) is that if OpenAI is ever to achieve AGI, it must invest nearly a trillion dollars towards that effort. We all know LLMs have their limitations, but the next phase of AI growth is going to come from OpenAI, Anthropic, Google, maybe even Microsoft, and not some stealth startup. That is, only Big Tech can get us to AGI, due to the sheer massive amounts of investment required, not a traditional Silicon Valley garage startup looking for its Series A. So institutional investors have no choice but to continue to throw money into Big Tech hoping for the Big Payoff, rather than investing in VC funds like 10 years ago.
AMD did this deal because it's literally offering financing to OpenAI. OpenAI doesn't have access to capital markets the way AMD does, so AMD is selling off shares of its own stock to finance OpenAI's purchase of billions of dollars worth of GPUs. And the trick appears to be working, since the stock is up 30% today, meaning the deal has paid for itself and then some.
That “only big tech can solve AGI” bit doesn’t make sense to me - the scale argument was made back when people thought just more scale and more training was gonna keep yielding results.
Now it seems clear that what’s missing is another architectural leap like transformers, likely many different ones. That could come from almost anywhere? Or what makes this something where big tech is the only potential source of innovation?
Yup. LLMs can get arbitrarily good at anything with RL, but RL produces spiky capabilities, and getting LLMs arbitrarily good at things they're not designed for (like reasoning, which is absolutely stupid to do in natural language) is very expensive due to the domain mismatch (as we're seeing in realtime).
Neurosymbolic architectures are the future, but I think LLMs have a place as orchestrators and translators from natural language -> symbolic representation. I'm working on an article that lays out a pretty strong case for a lot of this based on ~30 studies, hopefully I can tighten it up and publish soon.
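For what it's worth, the division of labor I mean looks roughly like this. A minimal sketch: llm_translate is a hypothetical stand-in for an LLM call (hard-coded here so the example runs), and sympy plays the symbolic engine:

    # The LLM only translates natural language into a symbolic form;
    # the symbolic engine does the reasoning and guarantees correctness.
    import sympy as sp

    def llm_translate(question: str) -> str:
        # Hypothetical LLM call, hard-coded to keep the sketch self-contained.
        return "integrate(x**2, x)"

    expr = sp.sympify(llm_translate("What is the integral of x squared?"))
    print(expr)  # x**3/3 -- correctness comes from sympy, not from the model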
The barrier to entry is too high for traditional SV startups or a group of folks with a good research idea like transformers. You now need hundreds of billions, if not trillions, to get access to compute. OpenAI themselves have cornered 40% of the global output of DRAM modules. This isn't like 2012, where you could walk into your local Best Buy, get a laptop, open an AWS account, and start a SaaS over the weekend. Even the AI researchers themselves are commanding 7- and 8-figure salaries that rival NFL players'.
At best, they can sell their IP to BigTech, who will then commercialize it.
Are you saying you disagree that a new architectural leap is needed, and that just more compute for training is enough? Or are you saying a new architectural leap is needed, and that the new architecture(s) will only be possible to train with insane amounts of compute?
If the latter, I don't understand how you could know that about an innovation that hasn't yet been made.
I’m saying it is highly likely that the next leap in AI technology will require massive amounts of compute. On the order of tens of billions per year. I’m also saying that there are a small number of companies that would have access to that level of compute (or financial capital).
In other words, it is MORE likely that an OpenAI/Google/Microsoft/Grok/Anthropic gets us closer to AGI than a startup we haven't heard of yet, simply because BigTech has cornered the market and has a de facto monopoly on compute itself. Even if you had raised $10 billion in VC funding, you literally cannot buy GPUs, because there is not enough manufacturing capacity in the world to fill your order. Investors know this, and capital is flowing to BigTech rather than VC funds. Which creates the cycle of BigTech getting bigger and squeezing out VC money for startups.
If it comes from anywhere else but needs a lot of capital to execute, big tech will just acquire them, right? They'll have all the data centers and compute contracts locked up, I guess.
No amount of investment is going to make AGI just appear. It's looking more and more like current architectures are a dead end, and then it's back to the AI drawing board, just like the past 30 years.
The difference this time is that it's global coordinated collusion, and it's not just the superwealthy, it's states that are willing to go all in on this. If you thought the banks were too big to fail, the result here is going to be a nationalization of AI resources and doubling down.
AMD issues new shares and gets a penny (read: effectively zero) back for them.
ALL ELSE BEING EQUAL this means everyone holding AMD has 10% of their equity/value taken away and handed to OpenAI.
But all else is not equal. OpenAI only gets the shares if they buy AMD GPUs. The intent is that this offsets the dilution by making AMD overall more valuable. (This is why the stock price jumped on the announcement) It's a GPU subsidy paid for by AMD's shareholders rather than AMD itself.
The real risk is that this further entangles AMD in the AI bubble. OpenAI already has enormous datacenter construction obligations. The likelihood of them failing to meet these new obligations, and thus this deal falling through or otherwise not materialising, is pretty high. If the AI bubble goes *POP*, AMD will be hurting a lot more than before this deal.
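Rough arithmetic on the dilution, for anyone who wants it spelled out. The 160M warrant shares and the penny strike are from this thread; the pre-deal share count is backed out from the ~10% figure above, and the share price is an assumption:

    pre_deal_shares = 1.6e9       # implied by 160M being ~10% of the company
    warrant_shares  = 160e6       # per the deal as described in this thread
    strike          = 0.01        # "a penny" per share
    assumed_price   = 200.00      # assumed post-announcement share price

    dilution = warrant_shares / (pre_deal_shares + warrant_shares)
    value_transferred = warrant_shares * (assumed_price - strike)
    print(f"{dilution:.1%}")                  # ~9.1% of the post-deal company
    print(f"${value_transferred/1e9:.0f}B")   # ~$32B of equity for ~$1.6M paid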
Percentage math doesn't quite work that way. (130% * 90%, for gaining 30% and then giving away 10% of that, is 117%, not 120%.)
But yes. That's the intent.
The "problem" is that OpenAI doesn't have any of the shares yet, and it's unclear how much they actually will get. Right now AMD shareholders have the full +30% gain with none of the loss. But will the +30% gain be wiped out on the news OpenAI won't be buying as many AMD GPUs? Only time can tell.
The first 0.1% of shareholders to sell would get the full +30%, then the next 0.1% would get ~28%, then the next... and by the time you got down to the last of the initial shareholders trying to liquidate, the price would likely be pennies on the dollar.
This is not value, but hot air.
The value these things represent is based almost entirely on the myth/hype + 401k index fund growth + inflation expectations at this point.
If you don't get dividends or voting power from your shares, all you have left is liquidation rights in the event of a bankruptcy. So, the shares are really worth their share of ~whatever AMD's assets are worth in a bankruptcy.
But, because we trade them in public markets, they're immediately worth whatever someone else will pay. And that price is much more tethered to myth (and consistent 401k index fund growth + inflation expectations) than to fundamentals at this point.
Right now, 401k funds are buying AMD at this higher share price, with zero due diligence!
So, if growth stops / if job losses explode, then 401k contributions slow down (or reverse!), then markets fall, then margin calls happen, then markets fall more, more job losses...
There's a lot of cash sloshing around in the system from all the 2020s money printing, and there's a perma-buy-the-dip mentality that has come out of this extreme bull market (bubble market), so there is quite some extra resilience; but, the coyote is really going to have a reckoning once it finally looks down...
The shares haven't been issued yet, so there isn't any dilution. Equity holders could sell their holdings now and benefit, because when OAI exercises the option and gets 160m shares for peanuts, they will sell those shares ASAP to bring in cash to pay for their orders of AMD chips.
It is folly to take these statements at their word.
Bezos is just saying shit to generate hype. All these executives are just saying shit. There is no plan. You must treat these people as morons who understand nothing.
Anyone who knows even the slightest details about datacenter design knows that moving heat is the biggest problem. This is the exact thing that being in space makes infinitely harder. "Datacenters in space" is an idea you come up with only if you are a moron who knows nothing about either datacenters or space.
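For anyone who wants the physics spelled out: in vacuum there is no air or water to dump heat into, so radiating it away is the only exit. A rough one-sided Stefan-Boltzmann estimate, with the load, emissivity, and radiator temperature all assumed for illustration:

    # Radiator area needed to reject heat purely by radiation.
    sigma = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
    eps   = 0.9        # assumed radiator emissivity
    T     = 300.0      # assumed radiator temperature, K (~27 C)
    P     = 1e6        # assumed IT load: a modest 1 MW datacenter

    area = P / (eps * sigma * T**4)   # ignores solar heating entirely
    print(f"{area:,.0f} m^2")         # ~2,400 m^2 of radiator per megawatt

On Earth a megawatt is handled by a few chillers; in orbit it's roughly half a football field of radiator per megawatt, before even accounting for the sun.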
If nothing else, this is the singular reason you should treat AI as a bubble. All of the people at the helm of it have not a single fucking clue what they're talking about. They all keep running their mouths with utter nonsense like this.
Billionaires are often regarded as having extremely insightful ideas; in practice, their fortunes were often built on a mix of luck, grit, and competence in a few narrow fields, and their insights outside their domains tend to be average or worse.
Being too rich means you end up surrounded by sycophants.
Given they're referencing Icarus, they seem to agree with you.
Past bubbles leaving behind something of value is indeed no guarantee the current bubble will do so. For as many times as people post "but dotcom produced Amazon" to HN, people posted that exact argument about the blockchain, NFT, and "Metaverse" bubbles.
> Now, go away, or I shall taunt you a second time-a!
That is the implication. The point of the first fine isn't to actually hurt Meta. It's to signal that there will be consequences, that the excuse of "but we thought it was legal" is gone now, and to give them one final chance to get their act together.
It's to pre-emptively clear away any possibility for Meta to argue, to either higher courts or the court of public opinion, that they're being treated unfairly. Which they would do if you immediately hit them with, say, a €5 billion fine.