Anecdotally, a Max subscriber gets something like $100 worth of usage per day. The more people use Claude Code, the more Anthropic loses, so it sounds like a classic "selling a dollar for 85 cents" business to me.
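Taken at face value, that implies a loss rate far steeper than 85 cents on the dollar. A back-of-envelope sketch (the $200/month Max tier and 30 usage days per month are my assumptions, not established numbers):

    # Back-of-envelope, taking the $100/day anecdote at face value.
    # Assumed: $200/mo Max tier, ~30 days of usage per month.
    api_value_per_day = 100                      # $ of API-equivalent usage
    subscription = 200                           # $ per month (assumed tier)
    monthly_value = api_value_per_day * 30       # ~$3000 of usage
    cents_recovered = subscription / monthly_value * 100
    print(f"~{cents_recovered:.0f} cents recovered per dollar of usage")
    # -> ~7 cents, i.e. selling a dollar for ~7 cents, not 85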
As soon as users are confronted with their true API cost, the appearance of this being a good business falls apart. At the end of the day, there is no moat around large language models - OpenAI, Anthropic, Google, DeepSeek, Alibaba, Moonshot... any company can make a SOTA model if they wish, so in the long run it's guaranteed to be a race to the bottom where nobody can turn a profit.
> Anecdotally, a Max subscriber gets something like $100 worth of usage per day.
Where are you getting that number from?
Anthropic has added quite strict usage limits - visible via the /usage command inside Claude Code. I would be surprised if those limits still left them with heavy losses.
This is just personal experience + reddit anecdotes. I've been using CC from day one (when API pricing was the only way to pay for it), then moved to the $20 Pro plan, where I get a solid $5+ worth of usage in each 5h session, times 5-10 sessions per week (an overall 5-10x subsidy per month). And I extrapolated that $200 subscribers must be getting roughly 10x Pro's usage. I do feel the actual limit fluctuates each week as Anthropic engages in this new subsidy war with OpenAI's Codex, though.
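For concreteness, the arithmetic behind that 5-10x figure (session value and frequency are my own rough numbers from above):

    # Rough subsidy math for the $20/mo Pro plan, using the numbers above.
    value_per_session = 5              # $ of API-equivalent usage per 5h session
    weeks_per_month = 4.3
    for sessions_per_week in (5, 10):  # low / high estimate
        monthly_value = value_per_session * sessions_per_week * weeks_per_month
        print(f"{sessions_per_week}/wk -> ~${monthly_value:.0f}/mo, "
              f"~{monthly_value / 20:.0f}x the $20 plan")
    # -> ~$108/mo (~5x) to ~$215/mo (~11x)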
My theory is this:
- we know from benchmarks that open-weight models like DeepSeek R1 and Kimi K2 are not far behind SOTA GPT/Claude in capability
- open-weight API pricing (e.g. on OpenRouter) is roughly 1/10~1/5 that of GPT/Claude
- users can more or less choose to hook their agent CLIs/IDEs up to either closed or open models
If these points are true, then the only reason people are primarily on CC & Codex plans is that those plans are subsidized by at least 5~10x. When confronted with true costs, users will quickly switch to the lowest-inference-cost vendor, and we get perfect competition + zero margin for all vendors.
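A minimal sketch of that switching logic, assuming the 1/10~1/5 price gap holds (all prices below are hypothetical placeholders, not real vendor quotes):

    # Hypothetical per-token prices; only the ratios matter.
    closed_api = 10.0    # $ per 1M tokens, closed model at API rates
    open_api = 1.5       # $ per 1M tokens, open-weight host (1/10~1/5 range)
    plan_subsidy = 7     # assumed: plans deliver ~5-10x their price in usage
    effective_plan = closed_api / plan_subsidy
    print(f"closed API ${closed_api:.2f} vs open ${open_api:.2f}: "
          f"open is ~{closed_api / open_api:.0f}x cheaper")
    print(f"subsidized plan ~${effective_plan:.2f} vs open ${open_api:.2f}: "
          f"roughly at parity")
    # With the subsidy, closed plans match open-weight prices; at true API
    # cost, users route to the cheapest vendor and margins compress to zero.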
The benchmarks lie. Go try coding full-time with R1 vs. GPT-5 (in Codex). The latter is firmly preferred even by those who have no issue budgeting tokens for their productivity.