
Like most of us, it appears LLMs really only want to work on greenfield projects.


Good joke, but the reality is they falter even more on truly greenfield projects.

See: https://news.ycombinator.com/item?id=42134602


That is because, by definition, their models are based upon the past. And woe unto thee if that training data was not pristine. Error propagation is a feature; it's a part of the design, unless one is suuuuper careful. As some have said, "Fools rush in."



I agree with this. But the reason is that AI does better the more constrained it is, and existing codebases come with constraints.

That said, if you are using gen AI without an advanced RAG system feeding it lots of constraints and patterns/templates, I wish you luck. A minimal sketch of what that could look like is below.
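For illustration only, here is a toy sketch of "retrieval feeding constraints": stored codebase conventions are scored against the task and prepended to the prompt as rules. The pattern store, the keyword-overlap scoring, and call_llm are all hypothetical stand-ins, not any specific product's API.

    # Toy pattern store: (tag, convention text) pairs a real system
    # would pull from a vector index over the codebase's docs.
    PATTERNS = [
        ("error-handling", "Wrap external calls in Result-style helpers; never swallow exceptions."),
        ("naming", "Service classes end in 'Service'; repositories end in 'Repo'."),
        ("testing", "Every new module ships with a pytest file mirroring its path."),
    ]

    def retrieve_constraints(task: str, k: int = 2) -> list[str]:
        # Crude relevance: rank patterns by keyword overlap with the task.
        words = set(task.lower().split())
        scored = sorted(
            PATTERNS,
            key=lambda p: len(words & set(p[1].lower().split())),
            reverse=True,
        )
        return [text for _, text in scored[:k]]

    def build_prompt(task: str) -> str:
        # Prepend retrieved conventions so the model is constrained
        # before it sees the task itself.
        rules = "\n".join(f"- {c}" for c in retrieve_constraints(task))
        return (
            "Follow these codebase conventions strictly:\n"
            f"{rules}\n\n"
            f"Task: {task}"
        )

    # call_llm(build_prompt("add a user lookup service"))  # call_llm is hypothetical

The point of the sketch is the shape, not the scoring: constraints retrieved from the existing codebase go in front of the task, which is exactly the advantage brownfield projects have by default.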


The site also suggests LLMs care a great deal one way or another.

"Unlock a codebase that your engineers and AI love."

https://www.gauge.sh/

I think they often do act opinionated and show some decision-making ability, which is why AI alignment really matters.




