
No, the cache is a few GB in size for most usual context lengths. It depends on the model architecture, but if you take Gemma 4 31B at a 256K context length, it takes 11.6GB of cache.

Note: I picked the values from a blog and they may be inaccurate, but in pretty much all models the KV cache is very large; it's probably even larger in Claude.
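
Back-of-the-envelope, the size is just a product of the model dimensions and the context length. Rough sketch below; the layer/head/dim numbers are made-up placeholders (not the actual Gemma config), just to show where a double-digit-GB figure comes from:

  # KV cache bytes = 2 (K and V) x layers x KV heads x head dim x context x bytes/elem
  def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
      return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

  # Hypothetical GQA config: 48 layers, 2 KV heads, head dim 128, fp16 cache.
  print(kv_cache_bytes(48, 2, 128, 256_000) / 1e9)  # ~12.6 GB at 256K tokens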


To extend your point: it's not really the storage costs of the size of the cache that's the issue (server-side SSD storage of a few GB isn't expensive), it's the fact that all that data must be moved quickly onto a GPU in a system in which the main constraint is precisely GPU memory bandwidth. That is ultimately the main cost of the cache. If the only cost was keeping a few 10s of GB sitting around on their servers, Anthropic wouldn't need to charge nearly as much as they do for it.
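
To put rough numbers on the "moved quickly" part (the bandwidth figures are ballpark assumptions, not anything specific to Anthropic's hardware):

  # Time just to stream a ~12 GB cache back toward the GPU at various link speeds.
  cache_gb = 12
  for name, gb_per_s in [("NVMe SSD", 7), ("PCIe 5.0 x16", 64), ("NVLink", 900)]:
      print(f"{name:>14}: {cache_gb / gb_per_s * 1000:7.1f} ms")
  # NVMe SSD: ~1700 ms, PCIe 5.0 x16: ~190 ms, NVLink: ~13 ms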

The cost you're talking about doesn't change based on how long the session is idle. No matter what happens, they're storing that state and bringing it back at some point; the only difference is how long it's stored off the GPU between requests.

Are you sure about that? They charge $6.25 / MTok for 5m TTL cache writes and $10 / MTok for 1hr TTL writes for Opus. Unless you believe Anthropic is dramatically inflating the price of the 1hr TTL, that implies that there is some meaningful cost for longer caches and the numbers are such that it's not just the cost of SSD storage or something. Obviously the details are secret but if I was to guess, I'd say the 5m cache is stored closer to the GPU or even on a GPU, whereas the 1hr cache is further away and costs more to move onto the GPU. Or some other plausible story - you can invent your own!
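
Just doing the arithmetic on the two quoted prices:

  # Quoted Opus cache-write prices per million tokens (from the numbers above).
  price_5m, price_1h = 6.25, 10.0
  print(f"1hr writes cost {price_1h / price_5m:.2f}x the 5m writes")  # 1.60x
  # ...for a 12x longer TTL, so the price clearly isn't scaling with storage duration alone.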

Storing on GPU would be the absolute dumbest thing they could do. Locking up the GPU memory for a full hour while waiting for someone else to make a request would result in essentially no GPU memory being available pretty rapidly. This type of caching is available from the cloud providers as well, and it isn't tied to a single session or GPU.

> Storing on GPU would be the absolute dumbest thing they could do

No. It’s not dumb. There will be multiple cache tiers in use, with the fastest and most expensive being on-GPU VRAM with cache-aware routing to specific GPUs, and then progressive eviction to CPU RAM and perhaps SSD after that. That is how vLLM works, as you can see if you look it up, and you can find plenty of information on the multi-tier approach from inference providers, e.g. the new Inference Engineering book by Philip Kiely.

You are likely correct that the 1hr cached data probably mostly doesn’t live on GPU (although it will depend on capacity, they will keep it there as long as they can and then evict with an LRU policy). But I already said that in my last post.
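
For a toy version of the idea (my own sketch, not vLLM's actual data structures): blocks live in the fastest tier with room, get demoted LRU-first as a tier fills up, and get promoted back when reused.

  from collections import OrderedDict

  # Toy tiered KV cache: demote least-recently-used blocks one tier down when a
  # tier overflows; promote a block back to the fastest tier when it's reused.
  class TieredKVCache:
      def __init__(self, capacities):           # e.g. {"gpu": 2, "cpu": 4, "ssd": 100}
          self.tiers = [(name, cap, OrderedDict()) for name, cap in capacities.items()]

      def put(self, key, value, tier=0):
          name, cap, store = self.tiers[tier]
          store[key] = value
          store.move_to_end(key)
          if len(store) > cap and tier + 1 < len(self.tiers):
              old_key, old_val = store.popitem(last=False)   # LRU victim
              self.put(old_key, old_val, tier + 1)           # demote one tier down

      def get(self, key):
          for name, _, store in self.tiers:
              if key in store:
                  value = store.pop(key)
                  self.put(key, value)                       # promote back to tier 0
                  return name, value
          return None, None

  cache = TieredKVCache({"gpu": 2, "cpu": 4, "ssd": 100})
  for session in ["a", "b", "c", "d"]:
      cache.put(session, f"kv-blocks-{session}")
  print(cache.get("a"))   # ('cpu', 'kv-blocks-a'): found off-GPU, promoted back up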


Yesterday I was playing around with Gemma4 26B A4B with a 3 bit quant and sizing it for my 16GB 9070XT:

  Total VRAM: 16GB
  Model: ~12GB
  128k context size: ~3.9GB
At least I'm pretty sure I landed on 128k... might have been 64k. Regardless, you can see the massive weight (ha) of the meager context size (at least compared to frontier models).
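
Quick sanity check with those rounded numbers; the margin is razor thin:

  # VRAM budget using the approximate figures above.
  total_vram_gb = 16.0
  model_gb, context_gb = 12.0, 3.9
  used = model_gb + context_gb
  print(f"{used:.1f} GB used, {total_vram_gb - used:.1f} GB left for activations/overhead")
  # 15.9 GB used, 0.1 GB left for activations/overhead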

On HN, I often see comments like this, complaining about Cloudflare blocking access to pages. It makes me wonder if it’s due to a particular setup that triggers bot detection – like Tor or no-JS – that HN readers often use, or if Cloudflare has too many false positives.

I think it's aggressive user profiling, so anyone with a hint of privacy is not welcomed. I can't imagine this getting any better with Chrome MCP and other tools.

Non-Chrome browsers constantly require a robot check

I don't have that _particular_ problem, but I often gripe about how no website seems to be able to remember that I've used this device before ...

... and only briefly pause to wonder if it's because of all the anti-cookie, anti-tracking stuff in Safari.


Time to gather a dataset and train your own model!


Yeah, the habit of discarding typography and polish as a "proof of humanity" is worrying, to say the least


I’m more hopeful that MIDI completion/in-filling models will be easier for musicians to control and use. But right now, the most popular tools are things like Suno, where you barely have any control and it spits out an entire, possibly mediocre song. It’s in the same vein as ChatGPT image generation vs. Stable Diffusion, where you can do much more controllable inpaints with the latter.


Nicotine itself is carcinogenic in the mouth:

> Nicotine in tobacco can form carcinogenic tobacco-specific nitrosamines through a nitrosation reaction. This occurs mostly in the curing and processing of tobacco. However, nicotine in the mouth and stomach can react to form N-nitrosonornicotine, a known type 1 carcinogen, suggesting that consumption of non-tobacco forms of nicotine may still play a role in carcinogenesis


The dose in urine is 1-3% of that of cigarette smokers, so it is a significant, order-of-magnitude decrease in risk, based on the paper another GP has posted below. In the mouth, the levels also seem to be an order of magnitude lower than in cigarette smokers (though similar in a majority of cases). Those are relatively acceptable risks for a vice, I would think.


They're not very skilled


I kind of understand where they're coming from: science popularization in pop news has been riddled with misinterpretation and a lack of depth, which can mislead the general public.


Can't that be communicated without calling anyone a know-nothing hack?


I’m not gonna delete it, as that would just make comments like yours confusing for people, but that was poor phrasing on my part.

It gave the impression that this specific journalist knows nothing, which is unfair.

I was trying to be funny (always risky online) and intended to be speaking humorously about science journalism in general. In hindsight, my phrasing doesn’t do that, and actually doesn’t communicate what I was saying very well.

I stand by my criticism of science journalism in general and my request that the article just be posted. But my wording was very rough, ultimately didn’t make the point I intended, and, yes, might frustrate some people. If someone is extremely upset or hurt by my comment then, I think, at some stage that isn’t my fault and the Internet might not be right for that person.


Oof, this comment was really nice up until the end. Accepting responsibility, expressing regret, etc.

> If someone is extremely upset or hurt by my comment then, I think, at some stage that isn’t my fault and the Internet might not be right for that person.

But then you're like "If you're upset, whatever, that's on you" - even though nobody's really suggested someone is "extremely" upset or hurt by your comment.

Also, you can be funny on the Internet - it has nothing to do with that. The real question is whether you can be funny without degrading people.


I’m just saying that I draw the line somewhere with how upset someone is. Like, if someone read my comment and thought it was unfair then I agree with them. If someone read it and was deeply hurt by it - that’s really in their court not mine.


God fucking damn it send me a PDF with a QR code like a reasonable human being


You may have untreated myopia, or need to use a bigger font size (HN is guilty of this!)


No I just sometimes use my phone while lying down lol

