Anti-AI cope is unreal; the comparisons to smoking won't stop lol. The mental model of such people (like you) will be studied. LLMs aren't going anywhere, keep dreaming.
> "We're making great strides in AI" and "We need to cut 20% of people" are simply two statements without any connection aside from the fact that they are next to each other in the sentence.
Huh? How is it not connected? More productivity means fewer people are required. I'm not sure how you are not able to connect these obviously connected statements.
There’s an optimal number of employees required at any productivity point.
Why doesn't Google hire 3 times the number of developers? They have the money, right? What's your logic for not hiring more?
Hiring and firing people aren't symmetric actions.
They're asymmetric because hiring more people costs more than just the salary. For example, some folks' entire jobs are to recruit and hire people. Once they are hired, you have to onboard them, etc. So the more you hire, the more you have to pay the folks with supporting roles (either directly or by way of them not having infinite time/capacity).
Firing people isn't free, either. It comes at the cost of bad PR and severance, but the latter is voluntary and calculated by the company, and the former is quickly forgotten by anybody that matters to a publicly traded company (investors).
That means not hiring someone in the first place is usually cheaper than firing them later.
To the original point: Cloudflare isn't hiring fewer people; they are firing people. If they are trying to grow (like every single investor is counting on them to do), then why would they fire people (the cheaper action) now when they would likely need to hire people (the more-expensive action) later in order to meet that increased growth?
The charitable answer would be that the people they are firing were deemed unable to adapt to using AI for all of this supposed increased productivity. But Cloudflare aren't saying that. In fact, they're saying the opposite by stating it's not about individual performance.
yours is a caveat against my larger, more correct point: there's an optimal number of employees needed at any given productivity point.
it's true that hiring and firing are asymmetrical, and CF has shown that they are willing to bear the brunt of the asymmetry and fire people despite the downsides.
but that asymmetry doesn't disprove the original point: cloudflare simply doesn't require the _same_ number of people to work for them with AI.
if you disagree with this then you believe that companies should only have a monotonically increasing number of employees, which is quite a ridiculous claim
I would say the GP's phrase "more productivity means fewer people are required" is a perfect summary of my opinion (and post). Sure, you can flesh it out, but that is the crux of my argument.
Does anyone else think "agents" are the wrong abstraction? Agents look like UI wrappers over LLMs -- they are inherently not composable. Tailor-made agents for UIs don't seem to scale. I predict they won't take off.
What I predict instead is that we will have a common UI layer plugin and a "protocol" that can speak to UI elements -- this might be more composable.
Good question. I find that GPT-5.5 thinking is very good at not thinking for simple questions, so much so that I've never had the need to use the instant model even for quick Q&A.
I'm assuming the instant model, then, is an entirely different smaller model mainly serving the free tier of ChatGPT.
Good point. I feel like this does a disservice to ChatGPT -- IIRC even the free tier of Claude points you to Sonnet 4.6 by default, which is magnitudes better than 5.3-instant, which has been the default in ChatGPT.
Hence most users will immediately think Claude is smarter, even if their best models are on par.
Correct. I have the $20/month plan and I just checked: the default is 5.3-instant. I can manually switch it to 5.5 Thinking. I also have it set to auto-switch.
https://simianwords.bearblog.dev/why-domain-specific-llms-wo...