
> Or just don't use AI to write code.

Anecdata, but I'm still finding CC to be absolutely outstanding at writing code.

It's regularly writing, in hours, systems-level code that would take me months to write by hand - with minimal babysitting and basically no "specs", just coherent, sane direction: make sure it tests things in several different ways, for several different cases, including performance; have it compare directly to similar implementations; and constantly triple-check that it actually did what you asked after it says "done".

For $200/mo, I can still run 2-3 clients almost 24/7 pumping out features. I rarely clear my session. I haven't noticed quality declines.

Though, I will say, one random day - I'm not sure if it was dumb luck or if I was in a test group - CC was literally doing 10x the amount of work, at 10x the speed, that it typically does. I guess strange things are bound to happen if you use it enough?

Related anecdata: IME, there has been a MASSIVE decline in the quality of claude.ai (the chatbot interface). It is so different recently. It feels like a wannabe, crappier version of ChatGPT, instead of what it used to be: something that tried to be factual and useful rather than conversational, addictive, and sycophantic.




My anecdata is that it heavily depends on how much of the relevant code and instructions it can fit in the context window.

A small app, or a task that touches one clear smaller subsection of a larger codebase, or a refactor that applies the same pattern independently to many different spots in a large codebase - the coding agents do extremely well, better than the median engineer I think.

Basically "do something really hard on this one section of code, whose contract of how it intereacts with other code is clear, documented, and respected" is an ideal case for these tools.

As soon as the codebase is large and there are gotchas, edge cases where one area of the code affects the other, or old requirements - things get treacherous. It will forget something was implemented somewhere else and write a duplicate version, it will hallucinate what the API shapes are, it will assume how a data field is used downstream based on its name and write something incorrect.

IMO you can still work around this and move net-faster, especially with good test coverage, but you certainly have to pay attention. Larger codebases also work better when you started them with CC from the beginning, because its older code is more likely to actually work the way CC expects/hallucinates.
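Concretely, a lot of that "paying attention" can be front-loaded into the agent's instructions file, e.g. a CLAUDE.md along these lines (a sketch - the paths and commands here are invented for illustration):

    # CLAUDE.md (sketch)

    - Before writing a new helper, search internal/ for an existing one;
      duplicate implementations have bitten us before.
    - API shapes are defined in api/schema.go. Read them - never guess a
      field's meaning from its name; check its downstream usage.
    - Old requirements live in docs/decisions/. Check there before
      changing behavior.
    - Run the full test suite after every change, and do not report
      "done" until it passes.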


> My anecdata is that it heavily depends on how much of the relevant code and instructions it can fit in the context window.

Agreed, but I'm working on something >100k lines of code total (a new language and a runtime).

It helps when you can implement new things as if they're green-field-ish AND THEN integrate and plumb them in later.


In a well-designed system, you can point an agent at a module of that system and it's perfectly capable of dealing with it. Humans also have a limited context window, and divide and conquer is always how we've dealt with it. The same approach works for agents.
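To make that concrete, here's a toy Go sketch (all names invented) of the kind of module boundary I mean:

    // Package payments is a toy illustration: one module with a clear,
    // documented contract that the rest of the system respects.
    package payments

    // Charger is the package's entire public surface. Whoever works on
    // this module - human or agent - needs only this contract, not the
    // whole system's context.
    type Charger interface {
        // Charge debits amountCents from the customer and returns the
        // provider's transaction ID. It must be idempotent per
        // (customerID, idempotencyKey) pair.
        Charge(customerID, idempotencyKey string, amountCents int64) (string, error)
    }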

> In a well-designed system, you can point an agent at a module of that system and it's perfectly capable of dealing with it.

Yes, but the problem is that LLMs don't default to well-designed systems... So, you need to aggressively stay on top of them.


How can a person reconcile this comment with the one at the root of this thread? One person says Claude struggles to even meet the strict requirements of a spec sheet, another says Claude is doing a great job and doesn’t even need specific specs?

I have my own anecdata but my comment is more about the dissonance here.


One aspect you have to consider is the differences between the human beings doing the evaluation. I had a coworker/report who would hand me obvious garbage-tier code with glaring issues even in its output, and it would take multiple iterations to address very specific review comments (once, in frustration, I showed a snippet of their output to my nontechnical mom, and even my mom wtf'ed and pointed out the problem unprompted). I'm sure all the AI-generated code I painstakingly spec, review, and fix would look totally amazing to them and seem to need very little human input. I'm not saying that must be the case here - that was extreme - but it's a very likely factor.

This is plausible. Assuming it's true, we would expect to see vibe coding adopted at a faster rate among inexperienced developers. I think that's true.

A counterpoint is Google saying the vast majority of their code is written by AI. The developers at Google are not inexperienced. They build complex critical systems.

But it still feels odd to me, this contradiction. Yes, there's some skill to using AI, but that doesn't feel like enough to explain the gap in perception. Your point would explain it wonderfully well, but it's contradicted by pronouncements from major companies.

One thing I would add is that code quality is absolutely tanking. PG mentioned that YC companies adopted AI-generated code at Google levels years ago. Yesterday I was using the software of one such company, and it has "Claude Code" levels of bugginess. I see it in a bunch of startups. One of the tells is that they seem to experience regressions, which is bizarre. I guess that indicates bugs in their AI-generated tests.


You don't think Sundar would do that, just go on the Internet and tell lies?

This is magical because you are both on exactly the right path and not right. My theory is that there's a sort of skill to teasing code from AI (or maybe not, and it's alchemy all over again); this is all new enough, and we lack a common vocabulary for it, so it's hard for one person who is having a good experience and one who is not to meaningfully sort out what they are doing differently.

Alternatively, it could be that there's a large swath of people out there so stupid they are proud of code in which your mom, despite being nontechnical, can somehow spot problems and suggest improvements.


> This is magical because you are both on exactly the right path and not right. My theory is that there's a sort of skill to teasing code from AI (or maybe not, and it's alchemy all over again); this is all new enough, and we lack a common vocabulary for it, so it's hard for one person who is having a good experience and one who is not to meaningfully sort out what they are doing differently.

I don't think this is just a hypothesis.

Outside of asking for one-shot tasks that have been done a million times before, LLMs do not "default" to good work.

If you ask them over and over again to find holes in their solution, to fix them, to evaluate for tech debt, to test all cases, to re-assess afterwards whether it's architecturally coherent, to compare to the closest available known-good implementations, etc., they can eventually get what you want done unbelievably cheaply, to an acceptable level of quality.

I mentioned initially - their work is unbelievably cheap; you should be EAGER to reject it. Most people wouldn't even bend down to pick a penny up off the sidewalk. These things can literally pump out CLs for a penny. You shouldn't even waste time looking at "I'm done" until they've gone through 10+ rounds of reviews, refactors, bug fixes, additional test cases, comparisons to known implementations, etc.
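As a sketch, that gauntlet can even be scripted. This assumes Claude Code's headless flags (`-p` runs a single prompt non-interactively, `--continue` resumes the most recent session); the prompts and pass count here are arbitrary:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Forced review passes before a human ever looks at "I'm done".
        passes := []string{
            "Find holes in your last change and fix them.",
            "Evaluate it for tech debt and refactor accordingly.",
            "Add tests for uncovered cases, including performance, and run them.",
            "Compare the approach to the closest known-good implementation.",
            "Re-assess: is the result architecturally coherent?",
        }
        for i, p := range passes {
            // --continue keeps each pass reviewing the same piece of work.
            out, err := exec.Command("claude", "--continue", "-p", p).CombinedOutput()
            fmt.Printf("--- review pass %d ---\n%s\n", i+1, out)
            if err != nil {
                fmt.Println("pass failed:", err)
                return
            }
        }
    }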

Why are you going to spend ~$50-$100+ of your time reviewing $0.01 of LLM time?! It makes no sense!

If you just listen to them say "I'm done" and move on to their next task, it won't take too many days before you're swimming in a sea of incoherent garbage.


I just read Steve Yegge's book Vibe Coding, and he says learning to use AI effectively is a skill of its own, and takes about a year of solid work to get good at it. It will sometimes do a good job and other times make a mess, and he has a lot of tips on how to get good results, but also says a lot of it is just experience and getting a good feel for when it's about to go haywire.

One person is rigorously checking to see if Claude is actually following the spec and one person isn’t?

One is getting paid by a marketing-department program and the other isn't. Remember how much has been spent making LLMs; the vendors have now decided that coding is the money maker. I expect any negative comment on LLM coding to be replied to by at least 2 different puppets or bots.

Then you should expect any positive comment to get a negative reply from a competitor's puppet or bot, too.

Not necessarily; rising tide and all that. When a new scam like this emerges, it behooves all of the grifters to cooperate and not muddy the waters with distrust.

I'm normally very skeptical of conspiracy theories. But I saw an AI booster bot responding to a negative AI post I made here.

Someone pointed out to me in the comments that the username had posted long replies to 3 completely different threads in the same minute. That and looking back at its post history confirmed it was a bot.


... or one person has a very strong mental model of what he expects to do, but the LLM has other ideas. FWIW I'm very happy with CC and Opus, but I treat it not as a subordinate but as a peer; I leave it enough room to express what it thinks is best and guide it later as needed. This may not work for all cases.

If you don't have a very strong mental model of what you are working on, Claude can very easily guide you into building the wrong thing.

For example I’m working on a huge data migration right now. The data has to be migrated correctly. If there are any issues I want to fail fast and loud.

Claude hates that philosophy. No matter how many different ways I add my reasons, and explicit instructions to stop, to the context, it will constantly push me towards removing crashes and replacing them with "graceful error handling".
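For concreteness, this is the style it keeps trying to talk me out of - a minimal Go sketch with invented names, not the actual migration code:

    package main

    import "log"

    // Record is a stand-in for whatever comes off the topic.
    type Record struct {
        Offset int64
        Value  []byte
    }

    // migrate transforms and writes one record, returning an error on
    // any mismatch between source and destination.
    func migrate(rec Record) error {
        // ... real transformation and verification would go here.
        return nil
    }

    func handle(rec Record) {
        if err := migrate(rec); err != nil {
            // Fail fast and loud: a migration that silently skips or
            // mangles records is worse than one that crashes.
            log.Fatalf("migration aborted at offset %d: %v", rec.Offset, err)
        }
    }

    func main() {
        handle(Record{Offset: 42, Value: []byte("example")})
    }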

If I didn’t have a strong idea about what I wanted, I would have let it talk me into building the wrong thing.

Claude has no taste and its opinions are mostly those of the most prolific bloggers. Treating Claude like a peer is a terrible idea unless you are very inexperienced. And even then I don’t know if that’s a good idea.


> Claude has no taste and its opinions are mostly those of the most prolific bloggers.

I often think that LLMs are like a reddit that can talk. The more I use them, the more I find this impression to be true - they have encyclopedic knowledge at a superficial level, the approximate judgement and maturity of a teenager, and the short-term memory of a parakeet. If I ask for something, I get the statistical average opinion of a bunch of goons, unconstrained by context or common sense or taste.

That’s amazing and incredible, and probably more knowledgeable than the median person, but would you outsource your thinking to reddit? If not, then why would you do it with an LLM?


> they have encyclopedic knowledge at a superficial level, the approximate judgement and maturity of a teenager, and the short-term memory of a parakeet. If I ask for something, I get the statistical average opinion of a bunch of goons, unconstrained by context or common sense or taste.

Love this paragraph; it's exactly how I feel about the LLMs. Unless you really know what you are doing, they will produce very sub-optimal code, architecturally speaking. I feel like strong acumen for proper software architecture is one of the main things that defines the most competent engineers, along with naming things properly. LLMs are a long, long way from having architectural taste.


Try asking it to review your code as if it were Linus Torvalds. No, really.

I've tried that. I've experimented with a whole council of 13 personas, including many famous developers. It's definitely different, but it hasn't performed significantly better in my tests.

Holding it wrong.

That's interesting to hear, as for me Claude has been quite good about writing code that fails fast and loud, and it has specifically called that out more than once. It has also called out code that does not fail early in reviews.

If you add a single space to a prompt, you’ll get a completely different output, so it’s no surprise that feeding entirely different programs into the prompt produces radically different output.

My guess is that there must be something about the language (Go) or the domain (a data migration tool that uses Kafka) that triggers this.


You're right, data migration is a specific case where you have a very strong set of constraints.

I, on the other hand, am doing a new UI for an existing system, which is exactly where you want more freedom and experimentation. It's great for that!


Have you created a plan where the requirement is not to bother you with x and y, and to use some predetermined approach? What you describe sometimes happens to me, but it happens less when it's part of the spec.

Yes. That’s one of the things included in this.

> No matter how many different ways I add my reasons, and explicit instructions to stop, to the context


> it will constantly push me towards removing crashes and replacing them with “graceful error handling”.

Is it generating JS code for that?


No, this is a Kafka consumer written in Go.

I think it matters what the project and tech stack are, and how much you try to get done before starting a fresh chat.

I've had interesting chats where it explained that its choice of Tailwind, for example, was because it had a ton of training knowledge on it.

I've also had it try, many times, to build more in one chat than it should.

For some reason OpenAI Codex is better at building too much at once without falling over - but that is total anecdata from my particular projects, and ymmv.

I've had these things try to build big when a little nudge gets them to change direction and not build so much. Explaining which libraries to use, asking it to change the tech stack, and spelling out the steps to build at once all seem to make things much better for my use cases.

Also, running extra checks and cleanup later is a thing: sure, a human might have seen an obvious issue at build time, but we have a bigger memory context comparatively, imho.


I think it depends on both the complexity and the quality bars set by the engineer.

From my observations, AI-generated code is generally of average quality.

Even with average quality, it can save you a lot of time on some narrowly specialized tasks that would otherwise take a lot of research and understanding. For example, you can code some deep DSP thingie (say, audio) without understanding much of what it does or how.

For simpler things like backend or frontend code that doesn't require any special knowledge beyond the basics, this is where the quality bars come into play. Some people will be more than happy with AI-generated code; others won't be, depending on their experience and their requirements (speed of shipping vs. quality, which almost always resolves to speed), etc.


It could just be that each of the two reviewers is merely focusing on different sides of the same coin. I use Claude all the time. It saves me a lot of effort that I would otherwise have spent looking up specific components. The magically autocompleted pieces of boilerplate are a tangible relief. It also catches issues that I missed. But when it is wrong, it can be subtly or embarrassingly or spectacularly wrong, depending on the situation.

Note that one person mentions they use Claude Sonnet, which is less capable than the higher tiers (Opus, etc.).

It boils down to scope. I use CC in both very specific one-language systems and broad backend-frontend-db-cache systems. You can guess where the difficulty lies. (Hint: it's the stuff with at least 3 distinct languages.)

> basically no "specs", just coherent, sane direction

This is one variable I almost always see in this discussion: the stricter the rules you give the LLM, the more likely it is to deeply disappoint you.

The earlier in the process you use it (i.e., scaffolding), the more mileage you will get out of it.

It's about accepting fallibility and working with it, rather than trying to polish it away with care.


To me this still feels like it would be a net negative. I can scaffold almost any project with a language/stack-specific CLI command, or even just by checking out a repo.

And sure, AI could "scaffold" further into controllers and views and maybe even some models, and they'd probably work OK. It's when they don't, or when I need something tweaked, that the worry becomes: do I really understand what's going on under the hood? Is the time to understand that worth it? Am I going to run across a small thread that I end up pulling until my 80%-done sweater is 95% loose yarn?

To me the trade-off hasn't proven worth it yet. Maybe for a personal pet project, and even then I don't like the idea of letting something else nondeterministically touch my system. "But use a VM!" they say, but that's more overhead than I care for. Just researching the safest way to bootstrap this feels like more effort than value to me.

Lastly, I think that a big part of why I like programming is that I like the act of writing code, understanding how it works, and building something I _know_.


A lot of the benefit of scaffolding is building basic context, which you can also build by feeding it the files produced by whatever CLI tool and talking through them, forcing it to think (for lack of a better word) about your design. You can also force-feed it design and API documentation. If you think you have given it too much, you are almost certainly wrong.

If it's doing nonsensical things with a library, feed it the documentation; if it's still busted, make it read the source.


But, how do you know the code is good?

If you do spot checks, that is woefully inadequate. I have lost count of the number of times when, poring over code a SOTA LLM has produced, I notice a lot of subtle but major issues (and many glaring ones as well) - issues a cursory look is unlikely to pick up on. And if you are spending more time going over the code, how is that the massive speed improvement you make it out to be?

And what do you even mean by 10x the amount of work? I keep saying that anybody who starts to spout these sorts of anecdotes absolutely does NOT understand real-world, production-level, serious software engineering.

Is the model doing 10x the amount of simplification, refactoring, and code pruning an effective senior level software engineer and architect would do? Is it doing 10x the detailed and agonizing architectural (re)work that a strong developer with honed architectural instincts would do?

And if you tell me it's all about accepting the LLM being in the driver's seat and embracing vibe coding: it absolutely does NOT work for anything exceeding a moderate level of complexity. I have tried that several times. To this day, no model has been able to write a simple markdown viewer with certain specific features I have wanted for a long time. I really doubt the stories people tell about creating whole compilers with vibe coding.

If all you see and appreciate is that it is pumping out 10x the features, 10x more code, you are missing the whole point. In my experience you are actually producing a ton of sh*t, sorry.


> But, how do you know the code is good?

Honestly, this is more a question about the scope of the application and the potential threat vectors.

If the GP is creating software that will never leave their machine(s) and is for personal usage only, I'd argue the code quality likely doesn't matter. If it's some enterprise production software that hundreds to millions of users depend on, software that manages sensitive data, etc., then I would argue code quality should asymptotically approach perfection.

However, I have many moons of programming under my belt, and I would honestly say that I am not sure what good code even is. Good to whom? Good for what? Good how?

I truly believe that most competent developers (however one defines competent) would be utterly appalled at the quality of the human-written code on some of the services they frequently use.

I apply the Herbie Hancock philosophy when defining good code. When once asked what is Jazz music, Herbie responded with, "I can't describe it in words, but I know it when I hear it."


> I apply the Herbie Hancock philosophy when defining good code. When once asked what is Jazz music, Herbie responded with, "I can't describe it in words, but I know it when I hear it."

That’s the problem. If we had an objective measure of good code, we could just use that instead of code reviews, style guides, and all the other things we do to maintain code quality.

> I truly believe that most competent developers (however one defines competent) would be utterly appalled at the quality of the human-written code on some of the services they frequently use.

Not if you have more than a few years of experience.

But what your point is missing is the reason that software keeps working in the first place, or stays in a good-enough state that development doesn't grind to a halt.

There are people working on those code bases who are constantly at war with the crappy code. At every place I’ve worked over my career, there have been people quietly and not so quietly chipping away at the horrors. My concern is that with AI those people will be overwhelmed.

They can use AI too, but in my experience, the tactical tornadoes get more of a speed boost than the people who care about maintainability.


I had a long reply to your comment, then decided it was not truly worth reading. However, I do have one question remaining:

> the tactical tornadoes get more of a speed boost than the people who care about maintainability.

Why are these not the same people? In my job, I am handed a shovel. Whatever grave I dig, I must lie in. Is that not common? Seriously, I am not being facetious. I've had the same job for almost a decade.


That’s because you’ve been there a decade. It’s very common for people to skip jobs every 2 years so that they never end up seeing the long term consequences of their actions.

The other common pattern I’ve seen goes something like this.

Product asks Tactical Tornado if they can build something. TT says sure, it will take 6 weeks. TT doesn't push back or ask questions; he builds exactly what product asks for, in an enormous feature branch.

At the end of 6 weeks he tries to merge it and he gets pushback from one or more of the maintainability people.

Then he tells management that he’s being blocked. The feature is already done and it works. Also the concerns other engineers have can’t be addressed because “those are product requirements”. He’ll revisit it later to improve on it. He never does because he’s onto the next feature.

Here’s the thing. A good engineer would have worked with product to tweak the feature up front so that it’s maintainable, performant etc…

This guy uses product requirements (many that aren’t actually requirements) and deadlines to shove his slop through.

At some companies management will catch on and he’ll get pushed out. At other companies he’ll be praised as a high performer for years.


Way better than random outsourced dev output. I seriously don't know what everyone around here is doing. All I see are complaints, while I produce the output of ten devs. Clean code, solid design.

Spend a few hours writing context files. Spend the rest of the week sipping bourbon.


So what have you released?

10x means you could have built something that would have taken 4 or 5 years in the time you've had since Opus 4.5 came out.

Where's your operating system, game engine, new programming language, or complex SaaS app?


> I can still run 2-3 clients almost 24/7 pumping out features.

Honest question: how does one do that? My workflow is to create one git worktree per feature and start one session per worktree. And then I spend two hours in a worktree talking to Opus and reviewing what it is doing.


Curious to know what you are using the $200/mo CC plan for. New applications? Business? What kind of application needs 2-3 clients running 24/7? How are you coming up with features? Just trying to understand the workflow.

> It's regularly writing, in hours, systems-level code that would take me months to write by hand - with minimal babysitting

Has your output kept pace with the code? Because months-in-hours means, even heavily discounting that ratio, years-in-days.

Has your roadmap accelerated multiple years in the last few months in terms of verifiable results?


Months, you say? How incredible. It beggars belief, in fact.

Not sure about ChatGPT, but Claude was (is it still?) an absolute ripper at cracking some software if one has even a little bit of experience/low-level knowledge. At least, that's what my friend told me... I would personally never ever violate any software ToS.




