Hacker News | fielding's comments

I designed the system, wrote the spec, validated the output, and ran it through a test framework I'm building that generates constraints in isolation, then checks the implementation against those constraints in a feedback loop until they all pass. But yes, Claude wrote the code.

I'm comfortable calling that building something. If you're not, that's fine, but the distinction between 'prompted an AI' and 'designed and validated a system using AI tooling' is important.


My opinion is that there is a massive gulf between 'wrote the spec' and 'validated the output'.

I think if the answer to "could I do this again without claude" is no then it is difficult to claim ownership.

If you're just adding endpoints to some web project and doing feature work, then whatever. But if you are "rewriting tree-sitter in Rust", which a lot of these posts seem to be, I think it deserves some skepticism.


Nit was actually one of the first projects built via a framework I'm building (specter) that generates code and test constraints in parallel isolation (to prevent gaming the tests/constraints), then uses the constraints as a feedback loop against the generated code.

The agent wrote the code; I designed the system, wrote the spec, and validated the output. Perhaps not the way we've built things in the past, but it didn't feel all that different to me, other than having more time to work on other things while it was running the feedback loop on the implementation.
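The loop described above can be sketched roughly like this. This is a minimal illustration of the generate-then-check pattern, not specter's actual code; the function names (`generate_code`, `check`) are hypothetical stand-ins.

```python
def feedback_loop(spec, constraints, generate_code, check, max_rounds=5):
    """Regenerate code until every constraint passes or rounds run out.

    `generate_code(spec, feedback)` and `check(code, constraint)` are
    placeholders for the agent call and the constraint checker.
    """
    feedback = []
    for _ in range(max_rounds):
        code = generate_code(spec, feedback)
        failures = [c for c in constraints if not check(code, c)]
        if not failures:
            return code  # all constraints met
        feedback = failures  # failed constraints feed the next round
    raise RuntimeError("constraints not satisfied within max_rounds")
```

The key design point is that the constraints come from a separate generation pass, so the code generator can't "game" the tests it is being checked against.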


Yeah, Claude is a co-author on the commits. On purpose. You can turn that off in one line; I left it on because I'm not trying to hide it. I do have a day job that takes up the majority of my time, so yes, I absolutely use Claude to build side projects.

I think they were pointing out that replacing em-dashes sort of looks like you want to hide it.

Is it though? The person who commissions a painting doesn't design the composition, validate every brushstroke, and run the output through an automated test suite. The analogy breaks down pretty fast.

> The person who commissions a painting doesn't design the composition

They often do! Of course the artist has creative liberty to make it work, similar to how LLMs will deviate from the spec.

Was your automated test suite also AI generated?

You probably could have avoided all criticism by simply writing the article yourself instead of publishing raw LLM output. If someone isn't willing to write about a project they made, it's usually an indicator that they put just as little effort into the code.

And why did you make a commit to remove em dashes? That seems odd.


Not saying it is the bottleneck. It's bloat. 7.4% of all shell tokens across 3,156 sessions is a lot of unnecessary context. It won't make or break a session, but it adds up across thousands of calls.

The tokens still land in the context window either way. Prompt caching gives you a discount on repeated input, but only for stable prefixes like system prompts. Git output changes every call, so it's always uncached, always full price. Nit reduces what goes into the window in the first place.
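Some back-of-envelope arithmetic makes the point. All the numbers here are illustrative assumptions (the price, discount, and per-call token count are made up; only the 3,156 session count comes from the comment above), and I assume a floor of one git call per session:

```python
# Why uncached git output adds up across many calls. Assumed figures:
price_per_mtok = 3.00      # $ per million uncached input tokens (assumed)
cached_discount = 0.10     # cached input billed at ~10% of full price (assumed)
git_tokens_per_call = 400  # decorative git output per tool call (assumed)
calls = 3156               # sessions from above; floor of one git call each

uncached_cost = calls * git_tokens_per_call / 1e6 * price_per_mtok
# A stable prefix would get the discount; ever-changing git output never does:
if_it_were_cacheable = uncached_cost * cached_discount
print(round(uncached_cost, 2), round(if_it_were_cacheable, 2))
```

Real sessions make many git calls, so the actual gap between "always full price" and "would have been cached" is larger than this floor.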

I was thinking more of the case where you write a prompt into an IDE with first-party LLM integration (e.g. VS Code with GitHub Copilot). It would make sense on their end to strip redundant input before feeding tokens to their models, to increase throughput (more customers) and decrease latency (lower costs). They would be foolish not to do this kind of optimisation, so surely they must be doing it. Whether they would pass those token savings on to the user, I couldn't say.

this is awesome! thanks for sharing rtk.. going to check it out.

correct

It goes beyond what I was able to do with git settings alone. Specifically, it strips the headers, padding, and decorative text, and it does so across all output (or at least most of it).

nit's defaults go beyond what --short does. The token savings come from stripping headers, padding, instructional text, etc. Headers and decorative text end up tokenizing poorly, so it helps quite a bit there.
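For flavor, the kind of filtering this describes can be sketched as a simple line filter. This is not nit's actual implementation, just an illustration of dropping the instructional hints and blank padding that `git status` emits:

```python
def strip_status(output: str) -> str:
    """Drop '(use "git ...")' hint lines and blank padding from
    `git status` output, keeping the actual state lines."""
    kept = [
        line for line in output.splitlines()
        if line.strip() and not line.lstrip().startswith('(use "git')
    ]
    return "\n".join(kept)
```

Those hint lines repeat on every call and carry no information for an agent, so filtering them is pure savings.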
