Hacker News | stitched2gethr's comments


Try proxymock. It's not open source but it is free to use.

It's not ready to make important decisions. But that's not the same as making important contributions.

We're way past that.

Insane. I'm terrified and I can't wait.


We did the same and I wrote (admittedly had AI write) about it.

https://speedscale.com/blog/building-speedy-autonomous-ai-de...


Thanks for sharing. Can you share an estimate of how many tokens it uses over time? Would love to know how much it costs in terms of money.


It all depends on the model and how much you use it, of course. We're running Opus 4.6, and on a light day it spends a dollar or two. That's just a few simple operations like "create a ticket for ..." plus its regular heartbeat checks. The heaviest day I've seen is $110, and on that day we were basically talking to it and having it implement features all day long.


I had to scroll too far to find this take. 100%.

This is like saying the CLAUDE.md or AGENTS.md is irrelevant because the LLM generated it.


It's all about how full the context is, right? For a task that can be completed in 20% of the context it doesn't matter, but you don't want to fill your context with exploration before you do the hard part.

I have actually found something close to the opposite. I work on a large codebase and, for complex tasks, I often use the LLM to generate artifacts before performing the task. I use a prompt like "go explore this area of the code and write about it". It documents concepts and includes pointers to specific code. A fresh session can then use that without reading the stuff that doesn't matter. It uses more tokens overall, but it captures important details that can get totally missed when you just let it go.
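A minimal sketch of that exploration prompt, written as a Claude Code custom slash command (a markdown file under `.claude/commands/`, where `$ARGUMENTS` is the built-in placeholder for what you type after the command). The filename and wording here are illustrative, not the exact prompt from the comment:

```markdown
<!-- .claude/commands/explore.md (hypothetical) -->
Explore $ARGUMENTS in this codebase. Write a document under docs/notes/
that explains the key concepts and includes pointers to the specific
files and functions involved. Do not make any code changes.
```

A fresh session can then be told to read the generated note first, so it starts from the distilled context instead of re-exploring.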


> It's all about how full the context is, right?

No. Even when I restart the context from scratch, which I do for each change, I see the same effect.


I think that's partly the point. This is the tool that everyone wanted but couldn't quite describe. Not saying he's a genius, but he was the first to will it into existence.


Try running `/insights` with Claude Code.


There is no such command, according to the docs [0]. /s

I continue to find it painfully ironic that the Claude Code team is unable to leverage their deep expertise and unlimited token budget to keep the docs even close to up-to-date automatically. Either that or they have decided accurate docs aren't important.

[0] https://code.claude.com/docs/en/interactive-mode#built-in-co...

