People from the West always complain when people from the developing world are hired to do work for people in the West.

They got angry about China, the Philippines, India, Kenya.

Oddly, it’s never the people in those countries complaining that they got a better paying job!

Only rich people who think, apparently, that this new middle class ought to be kicked back to the farm fields.


They should be paid American wages

If there are low switching costs, and if there are multiple highly capable models, and if the hardware is openly purchasable (all of these are true), then the price will converge to a reasonable cash flow return on GPUs deployed net of operating expenses of running these data centers.

If they start showing much higher returns on assets, then one of the many infra providers just builds a data center, fills it with GPUs, and rents it out at 5% lower price. This is the market mechanism.
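
To make the mechanism concrete, here's a back-of-envelope sketch in Python. Every number is a made-up assumption, not a real quote:

    gpu_cost = 30_000        # assumed purchase price per GPU, USD
    lifetime_years = 4       # assumed useful life before obsolescence
    utilization = 0.6        # assumed fraction of hours actually rented
    opex_per_hour = 0.40     # assumed power/cooling/staff per GPU-hour, USD
    target_return = 0.10     # assumed margin a new entrant will accept

    rented_hours = lifetime_years * 365 * 24 * utilization
    capital_per_hour = gpu_cost / rented_hours
    competitive_price = (capital_per_hour + opex_per_hour) * (1 + target_return)
    # With these assumptions, the GPU-hour price converges toward ~$2.
    print(f"competitive GPU-hour price ~= ${competitive_price:.2f}")

Any provider pricing well above that number invites exactly the undercutting described above.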

Looking at who owns the compute is barking up the wrong tree, because it has little moat. Maybe GPU manufacturers would be a better place to look, but then the argument is that you're beholden to NVIDIA's pricing to the hyperscalers. There's some truth to that, but you already see that market position eroding because of TPUs and belatedly AMD. All of these giant companies are looking to degrade Jensen's moat, and they're starting to succeed.

Is the argument here that somehow all the hyperscalers are going to merge to one and there will be only one supplier of compute? How do you defend the idea that nobody else could get compute?


The starting point was that competition would prevent AI providers from doubling the price of tokens, because there's lots of models running on lots of providers.

This is in the context of the article, which paints a world where it would be unreasonable not to spend $250k per head per year on tokens.

My argument is that the current situation is temporary, and _if_ LLMs provide that much value, then the market will consolidate into a handful of providers who will be mostly free to dictate their prices.

> If they start showing much higher returns on assets, then one of the many infra providers just builds a data center, fills it with GPUs, and rents it out at 5% lower price. This is the market mechanism.

Except when GPUs, memory, and power are in short supply. When demand exceeds supply, prices go up, and whoever has the deeper pockets - usually the bigger, more established party - wins.


The comment was with reference to inference, not total P&L.

Of course they are losing money in total. They are not, however, losing money per marginal token.

It’s trivial to see this by looking at the market clearing price of advanced open source models and comparing to the inference prices charged by OpenAI.
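
As a toy illustration (both prices are hypothetical placeholders, not real quotes):

    open_host_price = 0.60    # $/1M tokens a third party charges for an open model
    closed_price = 10.00      # $/1M tokens a frontier lab charges

    # If independent hosts profitably serve comparable open models at the
    # lower price, serving cost per token is at most roughly that level,
    # so the lab's marginal token is sold well above cost.
    implied_margin = closed_price - open_host_price
    print(f"implied gross margin >= ${implied_margin:.2f} per 1M tokens")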


Seems trivial to create an infinite number of inconsequentially (but hash defeating) different variants.
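
For instance, a minimal Python sketch (placeholder strings, with SHA-256 standing in for whatever exact hash is used): edits that change nothing meaningful still produce entirely different digests.

    import hashlib

    base = "the same payload"
    # Inconsequential edits: trailing whitespace, a swapped synonym.
    variants = [base, base + " ", base.replace("payload", "content")]

    for v in variants:
        # Each variant gets a completely different digest.
        print(hashlib.sha256(v.encode()).hexdigest()[:16], repr(v))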

Africa, Europe, America, Mars. I wonder if there is something about one of these that makes them unlike the others.

Actually, why not colonize Venus instead? Sure, it will be hard, at first, with all the sulphuric acid and intense heat and whatnot, but we colonized America, so why not Venus?


What does it mean, that open world winning was a mistake? That the market is wrong, and people's preferences were incorrect, and they should prefer small handcrafted environments instead of what they seem to actually buy?

How? They are all losing tens of billions of dollars on this, so far.

Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

There doesn't appear to be any moat.

This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and let them act like a tax. But the AI business looks terrible, and most of the benefits appear to be accruing fairly broadly across the economy, not to a few tech titans.

NVIDIA is the one exception to that, since there is a big moat around their business, but it's not clear how long that will last either.


I'm not so sure that's correct. The Labs seem to offer the best overall products in addition to the best models. And requirements for models are only going to get more complex and stringent going forward. So yes, open source will be able to keep up from a pure performance standpoint, but you can imagine a future state where only licensed models can be used in commercial settings, and licensing requires compliance controls against subversive use (e.g., sexualization of minors, bomb-making instructions, etc.).

When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).

There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.


If that's the case, the winner will likely be cloud providers (AWS, GCP, Azure) who do compliance and enterprise very well.

If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.

Then OpenAI, Anthropic, etc. end up becoming product companies. The winner is whoever has the most addictive AI product, not whoever has the most advanced model.


What's the purpose of licensing requiring those things, though, if someone could just use an open source model to do them anyway? If someone were going to do the things you mentioned, why do it through a commercial enterprise tool? I can see licensing maybe requiring a certain level of hardening to prevent prompt injections, but ultimately it still comes down to how much power you give the model in whatever context it's operating in.

Nvidia is not the only exception. The big private names are losing money, but plenty of public companies are having the time of their lives: power, materials, DRAM, storage, to name a few. The demand is truly high.

What we can argue about is whether AI is truly transforming everyone's lives; the answer is no. There is a massive exaggeration of the benefits. The value is not ZERO. It's not 100. It's somewhere in between.


The opportunity cost of the billions invested in LLMs could lead one to argue that the benefits are negative.

Think of all the scientific experiments we could've had with the hundreds of billions being spent on AI. We need a lot more data on what's happening in space, in the sea, in tiny bits of matter, inside the earth. We need billions of people to learn a lot more, think hard about those axioms, and use the data we could gather exploring the above to discover new ones. I hypothesize that investing there would have more benefit than a bunch of companies buying server farms to predict text.

CERN cost about $6 billion. Total MIT operations cost about $4.7 billion a year. We could be allocating capital a lot more efficiently.
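
For scale, a rough comparison (the aggregate AI spend is an assumed $300B placeholder; the other two figures are the ones cited above):

    ai_spend = 300e9     # assumed aggregate AI infrastructure spend, USD
    cern_cost = 6e9      # CERN construction cost, as cited above
    mit_annual = 4.7e9   # MIT annual operations, as cited above

    print(f"{ai_spend / cern_cost:.0f} CERN-scale projects")
    print(f"{ai_spend / mit_annual:.0f} years of MIT operations")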


I believe that eventually the AI bubble will evolve into a simple scheme to corner the compute market. If no one can afford high-end hardware anymore, then the companies who hoarded all the DRAM and GPUs can simply go rent-seeking, selling the compute back to us at exorbitant prices.

The demand for memory is going to result in more factories and production. As long as demand is high, there's still money to be made in going wide to the consumer market with thinner margins.

What I predict is that we won't advance memory technology on the consumer side as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce it, so it retains value, and we may see platforms come out with newer designs on older fabs.

Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.

https://www.openchipletatlas.org/


That makes no sense. If the bubble bursts, there will be a huge oversupply and prices will fall. Unless Micron, Samsung, Nvidia, AMD, etc. all go bankrupt overnight, prices won't go up when demand vanishes.

That assumes the bubble will burst, which it won't if they successfully corner the high-end compute market.

It doesn't matter whether the AI is any good; you will still pay for it, because it's the only way to access more compute power than consumer hardware offers.


There is a massive undersupply of compute right now for the current level of AI. The bubble bursting doesn't fix that.

There is massive over-buying of compute, far beyond what is actually needed for the current level of AI development and products, paid for by investor money. When the bubble pops, the investor money will dry up and the extra demand will vanish. OpenAI buys memory chips to stop competitors from getting them, and Amazon owns datacenters it can't power.

https://www.bloomberg.com/news/articles/2025-11-10/data-cent...


I agree with your point, and it is on that point that I disagree with GP. These open-weight models, which have ultimately been constructed from so many thousands of years of human effort, are also now freely available to all of humanity. To me that is the real marvel and a true gift.

It's turning out to be a commodity product. Commodity products are a race to the bottom on price. That's how this AI bubble will burst. The investments can't possibly show the ROIs envisioned.

As an LLM user, I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my office subscription? It does the same thing. If not, I use Deepseek or Qwen and get very similar results.

Yes, if you're a developer on Claude Code et al., I'll grant the point. But that's a small group of people. The mass market is just using chat LLMs, and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called. There are differences, but they're too small to be meaningful for the average user.


>losing tens of billions

They are investing tens of billions.


They are wasting tens of billions on something that has no business value currently, and may well never have any, just because of FOMO. That's not what I would call an investment.

Many investments may lose money, but the EV here is positive due to the extreme utility that AI can bring and is already bringing.

They are washing tens of billions of dollars in an industry-wide attempt to keep the music playing.

I'd like to see evidence that open models are closing that gap. That would be promising.

>Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or openssl. You can't maintain them with a few engineers' free time.


If the bubble is over, wouldn't all the built infrastructure become cheaper to train on? So those open models would incinerate less money? Maybe there'd be an increase in specialist models?

Like after the dot-com bust, the leftovers were cheap - for a time - and became valuable (again) later.


No, if the bubble ends, the use of all that built infrastructure stops being subsidized by an industry-wide wampum system where money gets "invested" and "spent" by the same two parties.

I feel like that was happening for the fiber-backhaul in 1999. Just different players.

Training is really cheap compared to the basically free inference being handed out by OpenAI, Anthropic, Google, etc.

Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month and charging a few hundred thousand for it.
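
Spelled out with toy numbers (all assumed, just matching the orders of magnitude in the comment):

    training_cost = 1e6          # one-off open-model training run, USD
    monthly_inference = 200e6    # a frontier lab's monthly inference bill, USD
    monthly_revenue = 0.3e6      # "a few hundred thousand" charged for it

    print(f"training = {training_cost / monthly_inference:.1%} of one month's inference")
    print(f"monthly inference shortfall: ${monthly_inference - monthly_revenue:,.0f}")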


Not sure I totally follow. I'd love to better understand why companies are open sourcing models at all.

The other side of the market:

I think much of the rot in FAANG is more organizational than about LLMs. They got a lot bigger, headcount-wise, in 2020-2023.

Ultimately I doubt LLMs have much of an impact on code quality either way, compared to the increased coordination costs, increased politics, and the influx of new commercial objectives (generating ads and services revenue in new places). None of those things are good for product quality.

That also probably means that LLMs aren't going to make this better, if the problem is organizational and commercial in the first place.


Some are complaining this letter is weak and generic.

Of course it is. You have 3M, Target, General Mills, Cargill, and US Bancorp on here, among others.

If you are looking for some revolutionary call to action, you're looking in the wrong place. And you're misunderstanding what's happening.

It is a really big deal for these very conservative, large, rich companies to be telling the federal government to cut it out, even if it is written in generic legalese.

The letter is not for you. It is for the administration. And it is extremely clear.


I do think they would likely have used more forceful rhetoric if they were dealing with a more normal administration. The current one is atypically spite-driven and prone to retaliate against critics, so they probably figured that saying anything insufficiently conciliatory-sounding would likely be counterproductive.


Even if that is the instinct, this is a mistaken way to deal with narcissistic bullying.

It's writing the piece in the first place, rather than what you put in it, that raises the ire. There's no way to compromise or mollify the wording in a way that makes them give you, like, half the punishment.

What's more, the attempt to mollify signals weakness that just invites them to feel even more vindictive. Being more forthright and decisive is what earns their grudging respect. China understood this; Zohran Mamdani understood this. Meanwhile, Europe, Democratic leadership, universities, and large law firms refuse to understand this.


> The current one is atypically spite-driven and prone to retaliate against critics

That’s why they do that


It is, in fact, not crazy, because none of this is predicated on using a specific vendor.

Many of these techniques can also work with Chinese LLMs like Qwen served by your inference provider of choice. It's about the harness that they work in, gated by a certain quality bar of LLM.

Taking a discussion about harnesses and stochastic token generators and forcing it into a discussion of American imperialism is making a topic political that is not inherently political, and is exactly the sort of aggressive, cussing, tribalistic attitude the article is about.

