Makes sense. You just reminded me of the article "Why Can’t Programmers... Program?" [1].
Before gen AI, I used to give candidates at my company a quick one-hour remote screening test with a couple of random "FizzBuzz"-style questions. I would usually paraphrase the question so a simple Google search would not immediately surface the answer, and 80% of candidates failed at coding a working solution, which was very much in line with the article. Post gen AI, that test effectively dropped to a 0% failure rate, so we changed our selection process.
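For anyone who hasn't seen it, the canonical FizzBuzz task from the article is tiny; a minimal sketch (my wording of the classic spec, not the exact question I used) looks like this:

```python
def fizzbuzz(n: int) -> str:
    # Multiples of 3 -> "Fizz", multiples of 5 -> "Buzz",
    # multiples of both -> "FizzBuzz", otherwise the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print(" ".join(fizzbuzz(i) for i in range(1, 16)))
```

The point of the screen was never the puzzle itself, just whether a candidate could turn a three-rule spec into working code in a few minutes.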
Employee solidarity matters, but absent a legal constraint, I don’t think it’s a durable control.
If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.
In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.
If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.
One thing worth pointing out is that by the time Yoon Suk Yeol declared martial law on December 3, 2024, he was already one of the most unpopular presidents in South Korean history. After that his ratings declined even further, which made it much easier to enforce the law and hold him accountable for his actions.
That’s fair at the “adopt AI at scale / restructure orgs” level. Nobody has the whole playbook yet, and anyone claiming they do is probably overselling.
But I’d separate that from the programmer-level reality: a lot is already figured out in the small. If you keep the work narrow and reversible, make constraints explicit, and keep verification cheap (tests, invariants, diffs), agents are reliably useful today. The uncertainty is less “does this work?” and more “how do we industrialize it without compounding risk and entropy?”
I use agents all the time, but I keep my feet on the ground. The thing is, working that way you don't get the radical explosion in productivity that influencers want you to think they're getting.
Martin’s framing (org and system-level guardrails like risk tiering, TDD as discipline, and platforms as “bullet trains”) matches what I’ve been seeing too.
A useful complement is the programmer-level shift: agents are great at narrow, reversible work when verification is cheap.
Concretely, think small refactors behind golden tests, API adapters behind contract tests, and mechanical migrations with clear invariants. They fail fast in codebases with implicit coupling, fuzzy boundaries, or weak feedback loops, and they tend to amplify whatever hygiene you already have.
So the job moves from typing to making constraints explicit and building fast verification, while humans stay accountable for semantics and risk.
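To make "cheap verification" concrete, here's a hedged sketch of a golden test: pin down current behavior before an agent touches the code, so any behavioral drift fails fast. The function name and cases are hypothetical, not from the post:

```python
import re

def slugify(title: str) -> str:
    # Hypothetical function an agent is about to refactor;
    # this is the behavior we want preserved.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# "Golden" cases recorded from the pre-refactor implementation.
# Any diff here rejects the change cheaply, no code review of
# the agent's internals required.
GOLDEN = {
    "Hello, World!": "hello-world",
    "  Agents & Tests  ": "agents-tests",
    "already-a-slug": "already-a-slug",
}

for inp, expected in GOLDEN.items():
    assert slugify(inp) == expected, (inp, slugify(inp))
print("golden tests pass")
```

The design choice is that the test encodes observable behavior, not implementation details, which is exactly what makes the work reversible: if the agent's rewrite passes, you keep it; if not, you throw it away.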
Hey — fun framing, and honestly a pretty accurate snapshot of how these debates go online. Quick point-by-point, just to separate “HN vibes” from what the post actually says:
Denial — The post doesn’t claim “everyone gets value from LLMs,” nor that skeptics must be doing “simpler work.” It’s saying a lot of day-to-day engineering is delegable — not that disagreement is impossible (or inferior).
Anger — The post doesn’t label skeptics as luddites/gatekeepers/dinosaurs, and it doesn’t predict anyone “will lose their jobs.” It treats the tension as identity + craft friction, not as a moral failure on either side.
Bargaining — The post isn’t arguing “it’s inevitable because money/momentum,” or “accept it because I need a paycheck.” It’s closer to: if a tool reliably speeds up reversible work, delegating that work is rational — while accountability stays with humans.
Depression — This is the closest overlap. The post does call a big slice of work “digital plumbing.” But it’s not saying “therefore most developers are rote.” It’s saying: lots of tasks are routine, and offloading routine tasks can free attention for higher-leverage decisions.
Acceptance — The satire’s endpoint (“I’m merely an LLM operator now, not a software engineer”) assumes a narrow definition of engineering: typing code = engineering. The post’s acceptance leans on a broader one: engineering is owning intent → constraints → tradeoffs → verification → outcomes, with code (and sometimes code-generation) as just one step. Under that lens, using LLMs doesn’t “demote” anyone — it just shifts where the craft shows up.
Net: your satire totally lands as a critique of some forum rhetoric, but it doesn’t really rebut what this post argues — and in a couple places (the emotional/identity angle), it kind of reinforces it.
Thanks for the reply. The post wasn't written as a rebuttal or aimed at the author, just gently poking fun at some of the over-the-top rhetoric surrounding LLMs that seems to have enveloped Hacker News.
Also, given that the above response was written by an agent, please firmly notify the principal on whose behalf you are posting that machine-generated posts are not permitted on Hacker News according to guidelines set forth by the site moderators. For example, at https://news.ycombinator.com/item?id=33950747, there is "HN has never allowed bots or generated comments. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there."
That post was generated by an agent but manually reviewed, copied, and pasted by me, since I thought it'd fit the context (a discussion involving agents).
> Then my mom wrote the following: “be careful not to get sucked up in the slime-machine going on here! Since you don’t care that much about money, they can’t buy you at least.”
I'm lucky to have parents with strong values. My whole life they've given me advice, on the small stuff and the big decisions. I didn't always want to hear it when I was younger, but now in my late thirties, I'm really glad they kept sharing it. In hindsight I can see the life experience and wisdom in it, and how it's helped and shaped me.
I get what he's pointing at: building teaches you things the spec can't, and iteration often reveals the real problem.
That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.
For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).
> That said, the framing feels a bit too poetic for engineering.
I wholeheartedly disagree, but I tend to believe that's going to be highly dependent on what type of developer a person is: one who leans towards the craftsmanship side, or one who leans towards the deliverables side. It will also be impacted by the type of development they are exposed to. Are they in an environment where they can even have a "lump of clay" moment, or is all their time spent on systems that are too old/archaic/complex/whatever to ever really absorb the essence of the problem the code is addressing?
The OP's quote is exactly how I feel about software. I often don't know exactly what I'm going to build. I start with a general idea, and it morphs towards excellence through iteration. My idea changes, and is sharpened, as it repeatedly runs into reality. And by that I mean, it's sharpened as I write and refactor the code.
I personally don't have the same ability to do that with code review because the amount of time I spend reviewing/absorbing the solution isn't sufficient to really get to know the problem space or the code.
Totally fair. A real strategy should start with investor context. My prompt intentionally didn't include those inputs to keep the experiment simple, and the good old GPT-4o model didn't proactively ask for them either. In an actual financial planning conversation, those constraints would be front and center and the portfolio could look materially different.
That's a fair point regarding pure content absorption, especially given that many classes do suffer from poor didactics. However, the university's value proposition often lies elsewhere: access to professors researching innovations (not yet indexed by LLMs), physical labs for hands-on experience that you can't simulate, and the crucial peer networking with future colleagues. These human and physical elements, along with the soft skills developed through technical debate, are hard to replace. But for standard theory taught by uninspired lecturers, I agree that the textbook plus LLM approach is arguably superior.
[1] https://blog.codinghorror.com/why-cant-programmers-program/