Hacker News — spullara's comments

This is one of the best things about using Claude over GPT: Claude understands the bigger assignment and does all the work, sometimes more than necessary, but for me it beats the alternative.

Claude making a statement that sounds impressive, when this is actually the first codebase it has ever analyzed:

"This is genuinely one of the most memory-safe codebases I've analyzed."


It is definitely not the first codebase an extensively RL-trained Claude has ever analyzed. How do you think it got so good?

Meaning it has no episodic memory of any of the analyses it has done.

You didn't say anything about "episodic," and that's irrelevant to the point even if its long-term memory from training didn't count.

Maybe we do not know what Claude has been doing and he keeps it secret...? :D

Overall construction in the US had a measured death rate of 1 in 1,000 people in 2023. I think we can accept a far higher rate for space travel.

Interesting! FoundationDB was created after the team set out to build a massively multiplayer game and couldn't find a database that could support it...


Shoot, I show that to people every once in a while.


Here is the official one from NASA:

https://www.nasa.gov/missions/artemis-ii/arow/

but I admit that it isn't what I would really want.


Nothing about this proves anything, except that someone or something had access to the key.


Do you think it is likely that the majority of the people who spent decades building this trust graph, and gaining the trust needed to be release engineers on the packages that power the whole internet, are just going to hand off control of that key to a bot?

Anyone doing so would be setting their professional reputation completely on fire, and burning an in-person-built web of trust is a once-in-a-lifetime thing.

Basically, we trust the keys belong to humans and are controlled by humans because to do otherwise would be a violation of the universally understood trust contract and would thus be reputational bankruptcy that would take years to overcome, if ever.

Even so, we assume at least one maintainer is dishonest at all times, which is why every action needs signatures from two or more maintainers.
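The two-or-more-signatures rule described above can be sketched as a simple threshold check. This is an illustrative sketch only: the names, the key set, and the threshold are hypothetical, not drawn from any real package repository's tooling.

```python
# Hypothetical "two-person rule": an action is accepted only when at least
# THRESHOLD distinct, recognized maintainers have signed it.
THRESHOLD = 2
TRUSTED_MAINTAINERS = {"alice", "bob", "carol"}  # stand-in maintainer key IDs

def action_approved(signers: set) -> bool:
    """Approve only if enough distinct trusted maintainers signed."""
    valid = signers & TRUSTED_MAINTAINERS  # ignore unrecognized keys
    return len(valid) >= THRESHOLD

print(action_approved({"alice", "bob"}))        # two trusted signers: approved
print(action_approved({"alice"}))               # one signer is never enough
print(action_approved({"alice", "mallory"}))    # unknown keys don't count
```

The point of the threshold is exactly the dishonest-maintainer assumption: a single compromised or rogue key can never push an action through on its own.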


Percentage-wise they have more billionaires than frauds on the list, at least so far.


So you want them to keep working on those products?


Check out rtk, which does this for a bunch of commands.


Do the larger LLM platforms just do this for you? Or perhaps they do this behind the scenes, and charge you for the same amount of tokens?


The tokens still land in the context window either way. Prompt caching gives you a discount on repeated input, but only for stable prefixes like system prompts. Git output changes every call, so it's always uncached, always full price. Nit reduces what goes into the window in the first place.
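The cost logic above can be sketched with a toy calculation. The prices and the cache discount here are made-up placeholders (real rates vary by model and provider); the point is only the structure: a stable prefix can be billed at a cached rate, while content that changes every call, like git output, is always billed in full.

```python
# Hypothetical relative prices -- NOT real provider rates.
PRICE_PER_TOKEN = 1.0    # relative cost of uncached input
CACHED_DISCOUNT = 0.1    # assume cached input bills at 10% of full price

def call_cost(prefix_tokens: int, dynamic_tokens: int, prefix_cached: bool) -> float:
    """Cost of one call: a stable prefix (e.g. system prompt) plus
    dynamic input (e.g. git output) that can never hit the cache."""
    prefix_rate = CACHED_DISCOUNT if prefix_cached else 1.0
    return (prefix_tokens * prefix_rate + dynamic_tokens) * PRICE_PER_TOKEN

first = call_cost(2000, 5000, prefix_cached=False)  # 7000.0: nothing cached yet
later = call_cost(2000, 5000, prefix_cached=True)   # 5200.0: prefix discounted, git output full price
trimmed = call_cost(2000, 500, prefix_cached=True)  # 700.0: shrinking the dynamic part pays off directly
```

Trimming the dynamic input, which is what a tool like nit does, attacks the part of the bill that caching can never touch.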


I was thinking more of the case where you write a prompt in an IDE that has first-party integration with an LLM platform (e.g. VS Code with GitHub Copilot). It would make sense on their end to remove redundant input before ingesting the tokens into their models, to increase throughput (more customers) and decrease latency (lower costs). They would be foolish not to do this kind of optimisation, so surely they must be doing it. Whether they pass those token savings on to the user, I couldn't say.


No, because tool calls are generally all client-side. Unless you mean a remote environment where Claude Code runs separately, but usually those aren't billed by the token.


This is awesome! Thanks for sharing rtk, going to check it out.

