streetcat1's comments

How do you know that he works at Meta?

Well, that's what he claims. I don't know if it's true or if this person even exists.

It's not the AI that's causing it, but the AI race. Companies need to pay for the large capex.


But where are they racing to? If AGI happens, capitalism is over. If AGI doesn't happen, they've wasted a massive amount of resources chasing a white elephant, and these companies are over.


They're racing to get the CEOs and investors rich enough to build survival bunkers on their private islands in time for the societal crash they caused.


You can't survive in survival bunkers or on islands, and thinking otherwise is a pipe dream. We don't have a true model of what this might look like, but if there's extreme instability, then wealth doesn't serve as a safety measure; it becomes a target. You need the backing of governed armies to protect status and wealth, but in some proto-civilization model, there will just be warring factions with bunker busters and maybe nukes going at each other. They'll eventually form treaties and merge into city-states, repeating the same trend towards nation-states and democracy. Just skip the dumb bloodshed in the middle and settle on Nordic socialism from the get-go.


Nowhere. They are stuck in the prisoner's dilemma.


They all spend with one purpose: replacing expensive humans. Saying otherwise does not make sense.

Any other app has no moat: anyone can build the same app if it's basically a wrapper around the LLM.

If anything, LLMs just destroy their current moat, i.e., if everything moves behind a chat interface, no one will see ads.


You're touching on the core tension in Meta's strategy. I think you're partially right, but there's more to it.

On "replacing expensive humans" agree that's part of it, but the bigger play is augmenting existing products. Meta's Q3 2025 guidance shows ad revenue still growing 21.6% YoY. They're using AI to make existing ads more effective (better targeting, higher conversion), not replacing the ad model entirely.

On the moat question: this is where the infrastructure spending makes sense. You're right that wrapping an LLM has no moat, but owning the infrastructure to train and serve your own models does. Meta has three advantages: (1) 3B+ daily users generating training data competitors can't access, (2) owning 2GW of infrastructure means near-zero marginal cost for inference vs. paying OpenAI/Anthropic, and (3) AI embedded in Instagram/WhatsApp/Facebook is stickier than a standalone chat app.

On ads behind a chat interface: this is the real risk. But Meta's bet seems to be: short-term, AI improves existing ad products (already working); mid-term, AI creates new surfaces for ads (AI-generated content, business tools); and long-term, if chat wins, Meta wants to own the chat interface (Meta AI), not lose to ChatGPT.

The $75B question is whether they're building a moat or just burning cash on commodity infrastructure. Time will tell, but the data advantage plus vertical integration gives them a shot.

What's your take? Do you think the data moat is real, or can competitors train equally good models on synthetic/public data?


Closed the labor arbitrage.


You are not taking into account Section 174: it takes you 15 years to amortize foreign salaries, versus deducting domestic ones in the first year (post-BBB).
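
A hedged back-of-the-envelope (assuming post-BBB rules: domestic R&D salaries expensed in full, foreign amortized over 15 years; the half-year convention and figures below are illustrative assumptions, not tax advice):

    # First-year deduction on $1M of engineering salary treated as R&D expense.
    salary = 1_000_000
    domestic_year1 = salary          # expensed in full in year one (post-BBB)
    foreign_year1 = salary / 15 / 2  # 15-year amortization, assumed half-year convention
    print(f"domestic: ${domestic_year1:,.0f}  foreign: ${foreign_year1:,.0f}")
    # domestic: $1,000,000  foreign: $33,333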


You forgot the tax advantage and the protection against inflation (if you have a fixed-rate mortgage).


The tax advantage is only available to people who itemize. Only about 10% of taxpayers do so. Inflation protection is very real and important, though.


I think he covered these by his mention of "spreadsheets". A good spreadsheet for rent-vs-buy would include the values in the sliders seen on https://www.nytimes.com/interactive/2024/upshot/buy-rent-cal...
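
For illustration, a minimal sketch of the core comparison such a spreadsheet makes (the figures are made up, and this ignores appreciation, taxes, and closing costs that the full calculator models):

    # First-year unrecoverable cost of owning vs. renting (toy numbers).
    price, down, rate = 500_000, 100_000, 0.065  # home price, down payment, mortgage rate
    prop_tax, upkeep = 0.011, 0.010              # annual rates, as a share of price
    opportunity = 0.04                           # return forgone on the down payment
    rent = 2_500                                 # monthly rent

    own = (price - down) * rate + price * (prop_tax + upkeep) + down * opportunity
    print(f"own: ${own:,.0f}/yr  rent: ${rent * 12:,.0f}/yr")
    # own: $40,500/yr  rent: $30,000/yr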


Exactly. The tax advantages are tangential to the behavioral factors I mentioned.


The main benefit of YC startups is not that they have 10x engineers but rather that they are starting from scratch; hence, the AI works well in a greenfield project. Enterprise customers are a totally different ball game: ninety percent is legacy, critical code, and some decisions are not made to optimize dev time.

Also, for a Y Combinator startup, let's say the AI introduces a bug. Since there are no customers, no one cares. Now imagine the same AI introducing a bug in a flight reservation system that was written 20 years ago.


I agree it's not about the 10x engineers or the greenfield. I think YC's selection process is still focused on finding distinguished individuals, but within two specific constraints.


Don't worry.


The competition for big LLM AI companies is not other big LLM AI companies, but rather small LLM AI companies with good-enough models. This is a classic innovator's dilemma. For example, I can imagine a team of cardiologists creating a fine-tuned LLM.


What on earth would cardiologists use a Large Language Model for, except drafting fluff for journals?

Safely and effectively, that is. Dangerous and inappropriate is obviously a much wider set of possibilities.


Like some kind of linter?

The cardiologist checks the ECG, compares it with the LLM's results, and checks the difference. If it can reduce the error rate by, say, 10%, that's already really good.

My current stance on LLMs is that they're good for stuff that is painful to generate but easy to check (for you). It's easier/faster to read an email than to write it. If you're a domain expert, you can check the output, and so on. The danger is in using it for stuff you cannot easily check, or trusting it implicitly because it is usually working.


> trusting it implicitly because it is usually working

I think this danger is understated. Humans are really prone to developing expectations based on past observations and then not thinking very critically about or paying attention to those things once those expectations are established. This is why "self driving" cars that work most of the time but demand that the driver remain attentive and prepared to take over are such a bad idea.


> The cardiologist checks the ECG, compares it with the LLM's results, and checks the difference.

Perhaps you're confusing the acronym LLM (Large Language Model) with ML (Machine Learning)?

Analyzing electrocardiogram waveform data using a text-predictor LLM doesn't make sense: No matter how much someone invests in tweaking it to give semi-plausible results part of the time, it's fundamentally the wrong tool/algorithm for the job.
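
To make the distinction concrete, here is a minimal sketch of the conventional-ML shape of this problem: a classifier over numeric, ECG-derived features rather than text (the feature names, labels, and data below are made-up placeholders, purely illustrative):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))      # toy features, e.g. HR, PR, QRS, QT intervals
    y = rng.integers(0, 2, size=200)   # toy labels: 0 = normal, 1 = abnormal
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    print(clf.predict(X[:5]))          # flags for a cardiologist to double-check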


Give them non-boring work.

