Hacker News | dasil003's comments

Define interesting. In my experience most business logic is not innovative or difficult, but there are ways to do it well or ways to do it terribly. At the senior levels I feel 90% of the job is deciding the shape of what to build and what NOT to build. I find AI very useful in exploring and trying more things but it doesn’t really change the judgment part of the job.

I agree that the way that today's generation of seasoned programmers learned their craft is going away, and that we don't know how the next generation will learn. I disagree very much with the conclusion:

> They didn’t trade speed for learning. They traded learning for nothing. There was no trade-off. There was just loss.

I believe this conclusion is due to a methodological problem, a form of begging the question if you will. One thing I am certain of is that humans who set their mind to something learn something, and good programmers are among the most tenacious in setting their mind to something. With agentic coding, they definitely learn different things, and so I would expect syntax knowledge to be weaker, but debugging and review skill will increase overall. Why? Because there will be more code, and more breakage, and I still haven't seen any tooling that allows a non-technical person to be effective at this.

Programming knowledge has always had a half-life. The way I see it, this is a big sea change that will fundamentally change the job of software engineers, and some non-trivial percentage will either change careers or find a sheltered, slow-moving place to finish out their working years. But for those who were not attached to hand-crafted code, AI provides power tools that empower technically minded people more than anyone else. I have full faith that the younger generation still has the same distribution of technical potential, and they will still find ways to develop their craft just as previous generations of hackers have always done.


It's wild to me to hear this being spun as vanity, like it's some influencer clickbait or LinkedIn slop. You could argue anyone posting anything online is driven by vanity, but in this case we're talking about someone who took agency in his own medical outcome and essentially experimented on himself. Sure it was selfish in the sense that he didn't want to die and he bent all his effort and resources to it, so what? I don't see exercising one's will-to-live in this way as a huge moral gray area.

Other commenters are asking why we don't fund more research. Sure, we should do that too, but it's important to recognize that the type of approach he took here only works because it was one individual willing to combine a significant amount of personal effort with his own moral authority to try risky things on himself. Even with orders of magnitude more funding, you can't ethically do this kind of thing without the consent of the patients, and there's not enough data on these types of approaches to adequately describe the risks to patients unless they are specifically motivated to lean into the details like this guy did.

Because it's only an option if you're rich and most of us aren't rich.

Others won’t benefit ?

Agreed. One thing I’ve found striking is how far LLMs can get with pure language, and how humans often operate with a similar kind of abstract conceptual reasoning that is purely language-based, pretty far removed from facts, and tenuously connected to objective reality. It takes a certain kind of mind to be curious and unpack the concepts that most of us take for granted most of the time. At best, people don’t usually have the time or patience to engage in that level of thinking; at worst, it can actively lead to cognitive dissonance and anger. So of course a consumer chatbot is not going to be tuned to bring novel insight: it must default to some level of affirmation or it will fail as a product. Someone who is aware of this can work around it to some degree, but fundamentally the incentives will always push a consumer chatbot toward being junk food for the brain.

This is a weird hill to die on. As much as I resonate with many of the concerns, I don't see refusing to use AI as something that will actually help any of those things. Forking a stable version of vim is something I guess, but I don't really see the sky falling with mainline vim or neovim.

Personally the leverage I have as a bit of a cranky graybeard myself is that I understand how software works and I can distinguish between good and bad uses of AI and think critically about how to influence things towards better software. Just declaring AI as unequivocally bad and evil will do nothing more than make me irrelevant. At some point being right is useless without some measure of also being effective.


I love your take, but I had a hard time getting through the article because “science of entrepreneurship” already feels like a contradiction in terms to me. Each startup is a product of its unique time and context. There are huge swings in fortune based on seemingly subtle factors that are not necessarily under the control of the founder but need to be recognized, and that can force the whole vision, approach, or even the problem statement itself to shift wildly overnight. This happens over and over again, and so creating a successful startup is more akin to bull riding than any formulaic process.

Because entrepreneurship and markets overall are at the center of so many disparate human contexts, I just don’t think the scientific method is particularly applicable. I also think that the minute you try to generalize between startups, the fidelity of understanding the factors of success falls off exponentially. The most common failure mode—almost by definition—is failing to recognize that some seemingly good idea or pattern that worked in many other businesses just does not work in this particular context, for whatever reason. This is why entrepreneurs who are too focused on theory and not enough on the details of their particular space tend to fail.

To me, The Lean Startup is useful food for thought, and can be useful in surprising ways (even in bigger companies), but the generalized ideas and statements are of very little value without a keen sense of applicability in context. Any “science” of entrepreneurship would basically combine the systemic chaos of macroeconomics, minus the precision of standard financial metrics, plus all the human factors of psychology. Fascinating to think about, but I doubt the best pundits and theorists would themselves make good entrepreneurs.


These rules aged well overall. The only change I would make these days is to invert the order.

Number 5 is timeless and relevant at all scales; as code iterations have gotten faster and faster, data is all the more relevant. Numbers 4 and 3 have shifted a bit since data sizes and performance have ballooned: algorithm overhead isn't quite as big a concern, but the simplicity argument is as relevant as ever. Numbers 2 and 1, while still true (Amdahl's law is a mathematical truth, after all), are also clearly a product of their time, of the hard constraints programmers had to deal with then and the shallowness of the stack. Still good wisdom, though I think on the whole the majority of programmers are less concerned about performance than they should be, especially compared to 50 years ago.


Having taken a startup from 2 to 100, I find a lot of the comments here resonate. But now that I'm in a company of 10,000 and trying to get things done across wide areas, I would say efficiency and communication overhead are all relative: make sure you're solving the problems in front of you and not "playing house" on problems you anticipate having down the line.

You will need more structure than you had before. For instance, at 15 the idea of managers is silly; everyone still needs to be contributing individually, you just need to minimally subdivide work so that everyone isn't doing everything. Be wary of process for process's sake. You will start to need some, but you really want to stay focused on concrete progress: is everyone doing the most important thing possible at any given moment? How fast are you shipping changes, closing sales, etc.?

Also make sure you have the right people, and don't get starstruck by big tech vets. They have many skills that will be useful—if you are phenomenally successful—but if they don't have startup experience they will likely overengineer things by default, and a significant percentage of them cannot wipe their own ass without the best-in-class tooling and infra support teams that allowed them to focus purely on one domain at BigCo. Basically you need pragmatic hustlers; veterans are good so long as they aren't cathedral or empire builders.

Finally, watch out for weird team dynamics and nip any toxic interactions in the bud. One bad apple really can spoil the barrel, and that's probably the most important thing to watch out for in the 15-100 range.


I suspect it’s not about agentic coding being a special skill, and more about why a competent programmer wouldn’t have tried it by this point, and whether that is a sign of ideological objections that could cause friction with the team. Not saying I agree with that thinking, but I definitely see why a hiring manager could think that way.


I can't get into that hiring manager's head. It shouldn't matter, if the candidate can deliver business value. That's what you are hiring them for. You're not hiring them to burn LLM tokens, you're hiring them to create business value. Why would you care if he does it by hand-coding, using an LLM, or chanting magic spells at the computer?


I was only granted permission to use it a few weeks ago and haven’t had time to set it up yet.


> why a competent programmer wouldn’t have tried it by this point

What does one have to do with the other? Since when is following every fad a prerequisite for competence?


Don't shoot the messenger; I'm just telling you how some hiring managers might think, not endorsing the opinion, and it's definitely not something I consider in my own hiring.

I will say it's a little weird to frame it as "every fad" though. Do you really not see any net new or lasting utility for software engineering in AI tools? If not then more power to you, but software engineering being a fast-moving field where there are (fair or unfair) expectations to keep up is nothing new.


I certainly keep an eye on these developments, but I think the jury is still out on how useful/beneficial they actually are in practice. Generating more code in less time is not a useful measure of productivity for me.


Agree the jury is still out, but "more code in less time" is a shallow straw man. The better questions are what it's good at, what it's not good at, and how best to leverage those capabilities. I've seen enough use cases from enough engineers now that I firmly believe anyone saying "nope, never useful" is sticking their head in the sand.

If you aren't taking advantage of it, you are not a competent software engineer in 2026.


On the contrary. That's the only kind of competent software engineer in 2026. Competent engineers don't hand things off to the tool that generates terrible code really quickly.


Many companies cannot take advantage of it. Not everyone is making toy CRUD web applications to help consumers purchase things they don't want. Some people are making safety critical applications, and many more are making highly sensitive applications.

At my job, we just got agents. Because we had to self-host them in our new data center. Our product isn't the kind that can be used with Claude or Gemini, like, legally.


So you just said that you couldn’t use coding agents because you are doing “very important things”, while you clutch your pearls about what other people are doing but your company is in fact using coding agents…

I'm not clutching my pearls over anything, and our applications aren't important. They're mostly stupid and bad, but because of laws we can't just hand out that data to anybody with a pulse.

I think using agents is great. I'm just saying that lots of people haven't been using them not because they're dumb or a (sigh) luddite or whatever, but because they can't.

The economy is really big, and while most companies play fast and loose with data, many don't. Of the ones that do, most of them probably should not, but because it's not technically illegal, of course they do. That principle is a big part of the reason why the US sucks.

For us though, it is actually illegal, so we don't. And we're not the fucking CIA or something, again, we make bad stupid products. So, if that's the case for us, that's the case for a lot of companies.

We legitimately had to buy a new datacenter for this shit.


Claude has been a big boost to my sense of competency. I get to point out so many poor solutions in slop PRs now.


Looking for five seconds at the GitHub profile, I see a bunch of music-related stuff, and also a bunch of contributions to private repos that we have no idea about. I get the productivity-guru anti-pattern, but I honestly don't know what you're looking at that merits this kind of reflexive personal attack.

