Google's AI Overview regularly hallucinates or gets the answer wrong. Obviously, if you're going to run inference on every search from billions of users, it has to be a very cheap model.
You can already do what you're looking for by reading the browser cache as new data is cached. That would let you see the site as it was originally loaded, instead of simply fetching an updated view from the URL. The on-disk cache formats for Firefox and Chrome are documented online.
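For what it's worth, here's a rough sketch of the idea, assuming Chrome on Linux with the default profile path (adjust `CACHE_DIR` for your setup). It doesn't parse the cache format properly; it just scans each entry file for URL-looking byte strings, so you can watch what gets cached as you browse:

```python
#!/usr/bin/env python3
"""List URLs of resources sitting in Chrome's on-disk cache.

Rough sketch only: rather than parsing the documented cache format, it scans
each entry file for URL-looking byte strings, ordered by modification time.
"""
import re
from pathlib import Path

# Assumed location for Chrome on Linux; Firefox keeps its cache2 directory elsewhere.
CACHE_DIR = Path.home() / ".cache/google-chrome/Default/Cache/Cache_Data"
URL_RE = re.compile(rb"https?://[\x21-\x7e]+")

def cached_urls(cache_dir: Path):
    # Walk entry files oldest-first so newly cached resources show up last.
    for entry in sorted(cache_dir.iterdir(), key=lambda p: p.stat().st_mtime):
        if not entry.is_file():
            continue
        match = URL_RE.search(entry.read_bytes())
        if match:
            yield entry.stat().st_mtime, match.group().decode("ascii", "replace")

if __name__ == "__main__":
    for mtime, url in cached_urls(CACHE_DIR):
        print(f"{mtime:.0f}  {url}")
```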
The unfortunate reality is that, depending on your personal preferences, "most modern games" do require this kind of ring 0 anti-cheat. Practically any game with a competitive matchmaking mode requires a rootkit.
As an aside, I recently found Riot Games' Vanguard installed on my Linux ESP partition... after having installed the game on my Windows partition. It rooted every OS it could find mounted. Incredible.
I've found the latency of /compact makes it unusable. Perhaps this is just the result of my waiting until I have 0% context remaining.
Fun fact: a large chunk of the context window is reserved for compaction. When you're shown "0% context remaining," there's actually something like 30% left, held back for the compaction step.
And yet, roughly half the time, compaction still fails for me because it runs out of context or hits (non-rate-limit) API limits.
Weirdly, I've found that when that happens I can close Claude, run `claude --continue`, and suddenly it has room to compact. Makes no sense.
But I have no idea what state it will be in after compaction, so it's better to ask it to write a complete and thorough report, including which source files to read. It's a lot more work, but better than going off the rails.
I wonder how well a sentence or two in CLAUDE.md would work, telling it to search the local project for examples of similar use cases or existing uses of internal libraries.
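Something along these lines is what I have in mind (a hypothetical snippet, not something I've measured):

```markdown
## Before writing new code
- Search this repo for existing code that solves a similar problem and follow its patterns.
- Prefer the project's internal libraries over adding new dependencies or re-implementing helpers.
```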
It seems like their whole platform depends on it, though… As I read it, they're providing their users with cloud devcontainers to connect to from their local VS Code, then deploying to production by snapshotting the container with `docker commit`. Those containers have SSH exposed to the internet, which is where all the auth logs that wound up baked into the images came from.
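If that reading is right, the failure mode is easy to reproduce. Here's a hypothetical illustration (the image names and the log line are made up) of how `docker commit` bakes whatever is on the container's filesystem, auth logs included, into the published image:

```sh
# Start a throwaway "devcontainer" and simulate sshd writing an auth log entry.
docker run -d --name devbox ubuntu:24.04 sleep infinity
docker exec devbox sh -c 'echo "Failed password for root from 203.0.113.7" >> /var/log/auth.log'

# Snapshot it the way the platform apparently does...
docker commit devbox example/devbox-snapshot:prod

# ...and the log rides along into every container started from that image.
docker run --rm example/devbox-snapshot:prod cat /var/log/auth.log
```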