It's disappointing that this is clearly being downvoted due to disagreement - it's a valid perspective. We have very little evidence of the overall impact of aggressively generating code "in the wild" and plenty of bad examples. No one knows what this ends up looking like as it continues to meet reality but plenty are taking a large productivity improvement as a given.
>Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:
This is a huge overstatement that isn't supported by your own links.
- Donald Knuth: the link is him acknowledging someone else solved one of his open problems with Claude. Quote: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."
- Linus Torvalds: used it to write a tool in Python because "I know more about analog filters—and that’s not saying much—than I do about python" and he doesn't care to learn. He's using it as a copy-paste replacement, not to write the kernel.
- John Carmack: he's literally just opining on what he thinks will happen in the future.
You're going to get a lot of "skill issue" comments, but your experience basically matches mine. I've only found LLMs useful for quick demos where I explicitly didn't care about the quality of the implementation. For my core responsibilities, the output has never met my quality bar, and by the time I've gotten it there, it hasn't saved me time. What I'm learning is that different people and domains have very different standards for that.
Honestly, I don't think so. An essay like this is more than just content - it's an experience for the reader. I value the time I got to spend with it and feel I came away with something that a summary or condensed version just wouldn't have given me.
>I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it.
I hear this, and FWIW, if there aren't very specific things being asked of you, using AI as a Stack Overflow replacement, as the OP admits to doing, is as "AI fluent" as anything else in my book.
>The rent-a-brain aspect is more acutely alarming. And I will be blunt here: It sure does seem like the prolonged use of LLMs can reliably turn certain people’s minds into mush...
>Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence.
Someone taking over a project and working directly in it can build up their own deep understanding of it over time, even if they didn't write it all. Documentation from the last expert can help, or just reading and changing things as you build up a mental model. But asking an LLM to change it for you will not arrive at the same place.
>Heck, I often don't remember anything about code I wrote six months ago. It might as well have been written by someone else.
This just isn't true at all in my experience. Do I remember every detail of code I haven't looked at for six months? No, but I can go back and recall pretty quickly how it's structured and find my way around. I'm much more able to do that with code I wrote and thought deeply about. It's like riding a bicycle - if you invested in building up that knowledge once, you can bring it back more easily.
LLMs can sometimes help you understand someone else's code, but they can also hallucinate, and I think people gloss over how frequently this happens. If no one actually understands or can verify what the model is saying, all I can say is good luck.
This completely resonates with me. There don't seem to be many of us, at least in my online bubble, but you're not alone.
I believe, and hope, that we'll eventually come around to valuing people who have put in the work - not just to understand and review output, but to make choices themselves and keep their knowledge and judgement sharp - once we fully realize the cost of not doing so.
I value people who put in the work. I also value being able to make a little one-off, single-use gadget without having to spend a week doing remedial Python every few months. I can understand the code once it's written, but writing it is a separate skill.
Of course, having learned a few languages, understanding data types, knowing to prompt for idiomatic code and check against best practices, etc. is vital to being able to do that. The basic skills still need to be developed, even if not everyone gets the same value out of being able to write code themselves.
I think throwaway use cases have very different requirements than products we expect to maintain, and they need to be treated differently. Go nuts with AI to generate a chart or a one-off tool or whatever, if you don't care about deepening your skill at doing those things yourself.