Hacker News | simianwords's comments

Domain specific models will never be a thing. You don't get generalised intelligence with that.

https://simianwords.bearblog.dev/why-domain-specific-llms-wo...


Anti-AI cope is unreal; the comparisons to smoking won't stop lol. The mental model of such people (like you) will be studied. LLMs aren't going anywhere, keep dreaming.

Studied by whom? Your virtual AI concubine who has you under her thumb? I thought human thinking is obsolete, as can be seen by your comments.

Sure. Let's see whether AI sticks or not. Until then, whine about it on the internet. Maybe a few people will care.

> "We're making great strides in AI" and "We need to cut 20% of people" are simply two statements without any connection aside from the fact that they are next to each other in the sentence.

Huh? How are they not connected? More productivity means fewer people are required. I'm not sure how you're unable to connect these obviously connected statements.


> More productivity means fewer people are required.

Required for what? If your goal is growth, and AI really is improving the productivity of every employee that uses it, then why would you fire anyone?


There's an optimal number of employees required at any productivity point. Why doesn't Google hire 3 times the number of developers? They have the money, right? What's your logic for not hiring more?

Because firing is not the zero-sum inverse of hiring.

Hiring 1 developer instead of 3 is not the same cost as firing 2 developers.


why is it not? if google can make more money by hiring 3x the developers, why don't they do it? just explain that

Hiring and firing people aren't symmetric actions.

They're asymmetric because hiring more people costs more than just the salary. For example, some folks' entire jobs are to recruit and hire people. Once they are hired, you have to onboard them, etc. So the more you hire, the more you have to pay the folks with supporting roles (either directly or by way of them not having infinite time/capacity).

Firing people isn't free, either. It comes at the cost of bad PR and severance, but the latter is voluntary and calculated by the company, and the former is quickly forgotten by anybody that matters to a publicly traded company (investors).

That means not hiring those two people in the first place is usually cheaper than firing them later.

To the original point: Cloudflare isn't hiring fewer people; they are firing people. If they are trying to grow (like every single investor is counting on them to do), then why would they fire people (the cheaper action) now when they would likely need to hire people (the more-expensive action) later in order to meet that increased growth?

The charitable answer would be that the people they are firing were deemed unable to adapt to using AI for all of this supposed increased productivity. But Cloudflare aren't saying that. In fact, they're saying the opposite by stating it's not about individual performance.


yours is a caveat to my larger, more correct point: there's an optimal number of employees needed at any given productivity point.

it's true that hiring and firing are asymmetrical, and CF has shown that they are willing to bear the brunt of the asymmetry and fire people despite the downsides.

that asymmetry doesn't disprove the original point: cloudflare simply doesn't require the _same_ number of people to work for them with AI.

if you disagree with this, then you believe that companies should only ever have a monotonically increasing number of employees, which is quite a ridiculous claim


this is a low-quality comment that doesn't address the simple explanation: more productivity means fewer people are required.

Disagreement != low quality, and that explanation is incredibly naive and simplistic.

I would say the GP's phrase, "more productivity means fewer people are required," is a perfect summary of my opinion (and post). Sure, you can flesh it out, but that is the crux of my argument.

this is such a cope take i don't even know what to say. how can people believe obviously false things like this? like, what's the mental model here?

He didn’t ruin the site. I think you don’t use it much so you don’t know. It’s pretty good now.

It's unusable nowadays if you use the "For You" feed, and even if you stick to people you follow, most of the interesting people have left.

Would taking choline pills help the same way? I think so.

Gemma 4 is really a beast. The 31B version is totally usable, e.g. for when I'm bored without internet.

Does anyone else think "agents" are the wrong abstraction? Agents look like UI wrappers over LLMs; they are inherently not composable. Tailor-made agents for UIs don't seem to scale. I predict they won't take off.

What I predict instead is that we will have a common UI layer plugin and a "protocol" that can speak to UI elements -- this might be more composable.


I wonder what the difference is between this and GPT 5.5 thinking with zero thinking effort? Interesting product decision to have different models.

Good question. I find that GPT-5.5 thinking is very good at not thinking for simple questions, so much so that I've never had the need to use the instant model even for quick Q&A.

I'm assuming the instant model, then, is an entirely different smaller model mainly serving the free tier of ChatGPT.


It is an entirely different model, but it is also the model that most users (even paid) interact with until the router switches to the thinking model.

Good point. I feel like this does a disservice to ChatGPT -- IIRC even the free tier of Claude points you to Sonnet 4.6 by default, which is magnitudes better than 5.3-instant which has been the default in ChatGPT.

Hence most users will immediately think Claude is smarter, even if their best models are on par.


then again, I think the free Sonnet 4.6 only allows ~5 requests a day, while GPT allows more than 50

Correct. I have the $20/month plan, and I just checked: the default is 5.3-instant. I can manually switch it to Thinking, which is 5.5. I also have it set to auto-switch.
