Hacker News

That's really interesting.

May I ask what kinds of projects, what stack, and what kind of markdown magic you use?

And any specific workflow? And are there times when you have to step in manually?

Currently three main projects. Two are Rails back ends with React front ends, so all Ruby, TypeScript, Tailwind, etc. The third is more recent: an audio plugin built with the JUCE framework, all C++. That one has been blowing my mind the most, because I am an expert web developer, but the last time I wrote a line of C++ was 20 years ago, and I have zero DSP or math skills. What blows my mind is that it works great: it's thread-safe and performant.

In terms of workflow, I have a bunch of custom commands for tasks that I do frequently (e.g. "perform code review"), but I'm very much in the loop all the time. The whole "agent can code for hours at a time" thing is not something I personally believe. How involved I get depends on the task, though. Sometimes I'm happy to just let it do the work and review afterwards. Other times, I will watch it code and interrupt it if I am unhappy with the direction. So yes, I am constantly stepping in manually. This is what I meant about "mind meld". The agent is not doing the work, I am not doing the work, WE are doing the work.
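For context, a Claude Code custom command is just a markdown file under `.claude/commands/`; a hypothetical "perform code review" command (file name and contents illustrative, not the poster's actual setup) might look like:

```markdown
<!-- .claude/commands/code-review.md — invoked as /code-review -->
Perform a code review of the changes on the current branch.

1. Run `git diff main...HEAD` and read every changed file.
2. Flag bugs, thread-safety issues, and missing tests.
3. Do not make edits; output a numbered list of findings instead.
```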


I maintain a few Rails apps, and Claude Code has written 95% of the code for the last 4 months. I deploy regularly.

I make my own PRs, then have Copilot review them. Sometimes it finds issues, and I copy and paste that chunk of critique into Claude Code, which fixes it.

Treat the LLMs like junior devs that can look up answers supernaturally fast. You still need to be mindful of their work. Doubtful, even. Test, test, test.
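To make the "test, test, test" point concrete, here's a minimal sketch using Minitest; `parameterize_title` is a hypothetical stand-in for any small method an agent might have written:

```ruby
require "minitest/autorun"

# Hypothetical agent-written helper: turn a title into a URL slug.
def parameterize_title(title)
  title.to_s
       .downcase
       .strip
       .gsub(/[^a-z0-9]+/, "-")  # collapse runs of non-alphanumerics
       .gsub(/\A-|-\z/, "")      # trim leading/trailing dashes
end

# Tests pin down behavior, including edge cases the agent may have missed.
class ParameterizeTitleTest < Minitest::Test
  def test_basic_title
    assert_equal "hello-world", parameterize_title("Hello, World!")
  end

  def test_edge_cases
    assert_equal "", parameterize_title(nil)
    assert_equal "a-b", parameterize_title("  A  B  ")
  end
end
```

The point isn't this particular helper; it's that a cheap, explicit test suite is what lets you stay "doubtful" of generated code without re-reading every line.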


Can we see any of the software created by these amazing LLMs?

Why do you need to use Tailwind if the code is generated? Can't there be something more efficient?

There's extensive Tailwind training data in the models. Sure, something more efficient exists, but it's just safer to let the model leverage what it was trained on.

Surely there is an order of magnitude more training data on plain CSS than on Tailwind, right?

In my experience the LLMs work better with frameworks that have more rigid guidance. Something like Tailwind has a body of examples that work together, language to reason about the behavior needed, higher levels of abstraction (potentially), etc. This seems to be helpful.

The LLMs can certainly use raw CSS, and it works well. The challenge is when you need consistent framing across many pages with mounting special cases, and the LLMs may extrapolate small inconsistencies further. If you stick within a rigid framework, there should be fewer inconsistencies across a larger project (in theory, at least).


Research -> Plan -> Implement

Start by having the agent ask you questions until it has enough information to create a plan.

Use the agent to create the plan.

Follow the plan.
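As a concrete (and entirely hypothetical) illustration, the research step might open with a prompt like:

```
I want to add CSV export to the reports page. Before writing any code
or a plan, ask me questions one at a time until you have enough
information to write an implementation plan. Do not start implementing
until I approve the plan.
```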

When I started, I had to look at the code pretty frequently. Rather than fix it myself, I spent time thinking about what I could change in my prompts or workflow.
