Hacker News | rootlocus's comments

I feel like the distinction is equivalent to

    LLMs can make mistakes. Humans can't.
Humans can and do make mistakes all the time. LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

I think the underlying problem people have is that they don't trust themselves to review code written by others as much as they trust themselves to implement the code from scratch. Realistically, a very small subset of developers do actual "engineering" to the level of NASA / aerospace. Most of us just have inflated egos.

I see no problem with modelling the problem, defining the components, interfaces, APIs, data structures and algorithms, and letting the LLM fill in the implementation and the testing. Well designed interfaces are easy to test anyway, and you can tell at a glance if it covered the important cases. It can make mistakes, but so would I. I may overlook something when reviewing, but the same thing often happens when people work together. Personally I'd rather do architecture and review at a significantly improved speed than gloat that I handcrafted each loop and branch, as if that somehow makes the result safer or faster (exceptions apply, ymmv).


"I feel like these distinctions are equivalent to

    LLMs can make mistakes. Humans can't."
No, that's not it. The difference between humans and AI is that AI suffers no embarrassment or shame when it makes mistakes, and the humans enthusiastically using AI don't seem to either. Most humans experience a quick and visceral deterrent when they publish sloppy code and mistakes are discovered. AI, not at all. It does not immediately learn from its mistakes the way most humans do.

In the rare case where there is a human who is consistently, persistently, confidently wrong like AI, a project can identify that person and easily stop wasting time working with them. With masses of people being told by the vocal AI shills how amazing AI is, projects can easily be flooded with confidently wrong AI-generated PRs.


> LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

in my experience these tests don't test anything useful

you may have 100% test coverage, but it's almost entirely useless because it's not testing the actual desired behaviour of the system

rather just the exact implementation


The 100% test coverage metric is far worse than "entirely useless": it is typically incredibly harmful.

Brittle meaningless tests tend to lock bad decisions in, and prevent meaningful refactoring.

Bad tests are simply code debt, and dramatically increase the future cost of rework and adaptation.
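To sketch the difference with a made-up example (the `apply_discount` function and both tests are invented for illustration, not from this thread): the first test restates the implementation's arithmetic, so it locks the internals in; the second pins only the behaviour a caller relies on.

```python
# Illustrative only: a brittle, implementation-coupled test vs. a
# behavioural one. All names here are hypothetical.

def apply_discount(price, rate):
    """Return price reduced by the given fractional rate."""
    return round(price * (1 - rate), 2)

# Brittle: mirrors the exact formula, so refactoring the internals
# (e.g. computing the discount amount first, then subtracting) can
# break it even when observable behaviour is unchanged.
def test_mirrors_implementation():
    assert apply_discount(100.0, 0.15) == round(100.0 * (1 - 0.15), 2)

# Behavioural: states the contract in concrete, caller-visible terms.
def test_behaviour():
    assert apply_discount(100.0, 0.15) == 85.0
    assert apply_discount(100.0, 0.0) == 100.0

test_mirrors_implementation()
test_behaviour()
```

A suite full of tests shaped like the first one passes at 100% coverage and still says nothing about what the system is supposed to do.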


If unit tests are boring chores for you, or 100% coverage is somehow a goal in itself, then your understanding of quality software development is quite lacking overall. Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control. Good tests are what keep a competent developer sane. You cannot build quality software without starting from tests. So if tests bore you, the problem is your approach to engineering. Mature developers don't get bored chasing 100% coverage – they focus on meaningful tests that actually describe how the program is supposed to work.


> Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control.

I set boundaries during design where I choose responsibilities, interfaces and names. Red Green Refactor is very useful for beginners who would otherwise define boundaries that are difficult to test and maintain.

I design components that are small and focused so their APIs are simple and unit tests are incredibly easy to define and implement, usually parametrized. Unit tests don't keep me "sane", they keep me sleeping well at night because designing doesn't drive me mad. They don't define how the "program" is supposed to work, they define how the unit is supposed to work. The smaller the unit the simpler the test. I hope you agree: simple is better than complex. And no, I don't subscribe to "you only need integration tests".
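As a rough sketch of what a parametrized test for such a small, focused unit can look like (the `clamp` function and its cases are invented for illustration; with pytest the table would typically become a `@pytest.mark.parametrize` decorator, but plain asserts keep the sketch dependency-free):

```python
# Hypothetical small, focused unit: a simple API is trivial to
# cover with a table of cases.

def clamp(value, lo, hi):
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(value, hi))

# Parameter table: (value, lo, hi, expected) — the basic cases plus
# the edges: below, above, and exactly on a bound.
CASES = [
    (5, 0, 10, 5),     # in range, returned unchanged
    (-1, 0, 10, 0),    # clamped up to the lower bound
    (11, 0, 10, 10),   # clamped down to the upper bound
    (10, 0, 10, 10),   # exactly on a bound
]

def test_clamp():
    for value, lo, hi, expected in CASES:
        assert clamp(value, lo, hi) == expected, (value, lo, hi)

test_clamp()
```

The smaller the unit, the shorter that table stays, and reviewing an LLM-generated version of it is a glance, not an audit.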

Otherwise, nice battery of ad hominems you managed to slip in: my understanding of quality software is lacking, my problem is my approach to engineering and I'm an immature developer. All that from "LLMs can automate most of the boring stuff, including unit tests with 100% coverage." because you can't fathom how someone can design quality software without TDD, and you can't steelman my argument (even though it's recommended in the guidelines [1]). I do review and correct the LLM output. I almost always ask it for specific test cases to be implemented. I also enjoy seeing most basic test cases and most edge cases covered. And no, I don't particularly enjoy writing factories, setups, and asserts. I'm pretty happy to review them.

[1] https://news.ycombinator.com/newsguidelines.html Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


Oooo, I remember Garry Newman! I found his UI library GWEN (GUI Without Extravagant Nonsense) [1] when I was still in uni and working on my game engine. It's been abandoned for 9 years now; nice to see he's still working on cool tech.

[1] https://github.com/garrynewman/GWEN


He also made the UI framework used by S&box. It's based on HTML/CSS + Razor but is not rendered with a browser.

https://sbox.game/dev/doc/systems/ui/razor-panels/


I assume the content isn't as important as the fact that the object itself is the original. Original paper, original ink, original release date. The object itself comes from the original factory, survived through time, etc. I would expect some tests will verify it uses the correct paper, has the signs of age, etc.

Even if you could duplicate it down to the molecule I would assume it wouldn't hold the same value since it doesn't have the same history. Assuming you'd want to sell it in good faith as a replica.


It also had the absolute worst monetization scheme in history, with the general sentiment being that they abused every dark pattern and made the experience horrible.


Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer would. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.


A serious one, yes.

But why would a serious person claim that they wrote this without AI when it's obvious they used it?!

Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.


I fail to see the point you're trying to make or the argumentative process by which you're trying to make it.

Consider this: the current administration has received gifts from private corporations in return for more lenient tariffs. Or consider the number of bills passed through Congress that came directly from large corporations, with their logo still on the paper. And this is just the blatant tip of the iceberg the current administration is brazen enough to show publicly.

> absolutely baffling to you and which probably makes a hundred obvious counter-arguments pop up in your head.

I can probably find a hundred obvious examples of conflict of interests, quid-pro-quos, or otherwise pro-corporation anti-consumer for any administration in history. But in the end, the proof is in the pudding. The rich are getting richer, the poor are getting poorer, while the government is issuing gold cards with the president's face on them for multi-millionaires to bring their business.

I'd be glad to hear a few of those hundred obvious counter-arguments.


I don't particularly subscribe to any ideology that puts companies above people, but it isn't hard to see things from GP's point of view. Before shooting the messenger, it is the mark of an educated mind to be able to entertain a thought without accepting it.

>Consider this: the current administration has received gifts from private corporations in return for more lenient tariffs

Who better understands where capital restrictions should be applied: this current administration (aka. Trump) or the businesses that grew large enough to buy a seat at the table or can afford to steer policy via "gifts"?

>Or consider the amount of law projects passed through congress directly from large corporations with their logo still on the paper.

Is a person sitting in Congress fully cognizant of what is happening in all facets of the economy, with an understanding of what needs to be implemented today to pave the way for the next 10 years and beyond? Why would we not seek input from the industries requiring regulation?

>The rich are getting richer, the poor are getting poorer

Yet this is a golden age by every measure. Zoom out on your timescale and there has never been a more prosperous and peaceful time to be alive. Quality of life has tremendously improved and the possibility of striking out on your own and making it big has never been more attainable. Yes, there will always be people sitting at the top with massive power and wealth, but the average person isn't doing too bad.


> Who better understands where capital restrictions should be applied

Should be applied for what purpose? What's the purpose of the government and what's the purpose of a corporation? When the latter is strongly influencing the former, why is it difficult to entertain the idea that their purpose and interests align?

> Why would we not seek input from the industries requiring regulation?

There's a difference between seeking and weighing input, and simply passing along legislative proposals without even looking at them. This is a strawman.

> Yet this is a golden age by every measure.

This isn't guaranteed to improve or even remain forever. And it really depends where you look. Plenty of war, misery and suffering to go around. And even in safe countries the lack of education, healthcare and financial stability is causing enough stress that people start favoring authoritarian options. That's not a great sign for the future. Just because most of us are doing better than our ancestors doesn't mean we're going in the right direction or that we're doing the best we can. No progress is achieved by being content with the status quo, and the present is pretty miserable for a lot of people. Should we wait until a terrible war wipes out half the planet before we consider maybe changing things?

But I think that's beside the point. The argument being discussed is whether corporate entities effectively "are" the government.


Indeed a world in which you can't tell truth from lies and proof from ai generated videos suits this administration perfectly. Masked men with no identification kidnap people off the streets and you can't tell if it's real or fake. No wonder the government loves openai. Just look at how happy they are to use it on truth social.


Actions speak to the character of a person. And some actions have long lasting consequences.


Or at least coherent.


> If the were released every other day, people who wanted to do them for 12 straight days could not.

If they instead waited 12 days, they could start with the 6 puzzles already released, and then have enough puzzles to solve once a day for the next 12 days.

