I just watched a YC video about vibe coding. He says he used Claude Sonnet 3.7 to configure his DNS servers & Heroku hosting. What could possibly go wrong?!
I managed to break production before vibe coding was cool.
20-year-old me had rm -rf root access to way too many systems.
I don't think it's much different today.
If anything, I think the youngsters will learn faster from their mistakes because they already have a good mentor for the easy stuff and will get ground down on the hard stuff sooner.
Maybe. One thing I've found is that LLMs give me a lot of short-term leverage. I can (get an LLM to) program in a language I know almost nothing about. Then I hit a wall where I don't know enough to get the LLM to fix something I don't know how to ask for, and then I'm stuck. When vibe coding, I'm not really learning how to solve problems myself, which means anything the LLM can't do, I also can't do. When I'm coding on my own, I'm picking up lessons along the way that help me build a foundation for incrementally harder projects.
> If anything, I think the youngsters will learn faster from their mistakes because they already have a good mentor for the easy stuff and will get ground down on the hard stuff sooner.
Maybe? Back when I had to troubleshoot coaxial network terminations uphill both ways in the snow, we had to learn how things actually work (e.g., what's in a TCP header, how to read an RFC), and that made debugging things a little more straightforward.
I'm pretty insulated from young developers these days, but my limited experience is that they might know the application, presentation and maybe session layers (if using the old OSI model) but everything below that is black magic.
Sure? But I think the point stands. And why would anyone think that a terminator would go bad until you’ve tried everything else and finally you swap them out and all of a sudden the network chatter stops and things go back to normal? It’s an experience thing and I worry that most of us are too abstracted from the underlying systems these days to properly understand them. It’s pretty common in my other world (info sec policy).
I'm not so sure. I ran an experiment last year. I was writing a prototype in a language I'm not well versed in, using unfamiliar tools and only somewhat familiar libraries. I made heavy use of ChatGPT and other tools to get the job done. (Well, actually, I wrote several pieces of software this way, but only one major for-work prototype.)
Some observations:
Initial velocity was high. It felt like I was getting things done. This was exciting and satisfying.
The code I wrote was structurally unsound because it ended up having no overall plan. This became more and more evident as the prototype neared completion. Yes, it was a prototype, but I usually use prototypes to think about structure. (Choosing the right way to structure a problem is a key part of solving the problem.)
My retention of new knowledge was terrible. Ditto for developing any deep understanding of the tools and libraries I was using. I had skipped the entire process by which I usually learn and just had the LLMs tell me solutions. This provided a poor basis for turning the prototype into production code because I had to catch up on all the thinking that should have been part of producing the prototype. And I had to sit down and read the documentation.
LLMs are only as good as your prompts. You actually need to know quite a bit in order to formulate good prompts. Being a fairly experienced programmer (and also having some knowledge of LLMs and a former career in web search), I had significant advantages novices do not have.
Now imagine a novice who lacks the experience to see the shortcomings of their code (hey, it works doesn't it!?) and the ability to introspect and think about why they failed.
Half of the time the LLM would lead me astray, sending me down paths that resulted in poor solutions when better ones would have been obvious if I'd just read the documentation. It's easy to become fixated on exhausting paths the LLM sent you down in the first place, poking it until it gives you something that works.
I have nearly 40 years of programming experience. It scared me how stupid relying too much on an LLM made me. Yes, it got me started quickly, and it feels faster. However, the inflection point that usually comes some time into the learning process, where things start to click, never materialized. Mostly I was just poking the LLM in various ways to make it spit out solutions to tiny parts of the problem at a time.
I still use LLMs. Every day. But I use them more as a fancy search engine. Sometimes I use them to generate ideas - to try to detect blind spots (solutions I don't know about, alternative approaches, etc.). Having been burnt by LLMs hallucinating, I consistently ask them to list their sources and then go and look at those.
LLMs are *NOT* mentors. A mentor isn't someone who does the thinking and the work for you. A mentor is also not an alternative to reference material or the means by which you find information. You're expected to do that yourself. A mentor is not someone who eliminates the need to grind through things. You have to grind through things to learn. There is no alternative if you are going to learn.
A mentor is someone who uses their experience and insight to help your personal growth.
Because some human errors are bound to happen, what is often missing are procedures to minimize them, catch them, or prevent them from having catastrophic effects.
It is not even about being technical. Have a person enter data into a spreadsheet and you can end up with all kinds of errors if the procedure for doing so is bad.
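To make the point concrete, here is a minimal sketch of the kind of guardrail being described: validate each row of entered data instead of trusting that the person typed it in correctly. The field names (`id`, `amount`) and rules are made up for illustration, not taken from any real system.

```python
def validate_row(row):
    """Return a list of problems found in one spreadsheet-like row (a dict)."""
    problems = []
    # Required fields must be present and non-empty.
    for field in ("id", "amount"):
        if not str(row.get(field, "")).strip():
            problems.append(f"missing {field}")
    # Amounts should parse as numbers and be non-negative.
    try:
        if float(row.get("amount", "0")) < 0:
            problems.append("negative amount")
    except ValueError:
        problems.append("amount is not a number")
    return problems

rows = [
    {"id": "A1", "amount": "19.99"},  # fine
    {"id": "",   "amount": "5"},      # missing id
    {"id": "A3", "amount": "-2"},     # negative
    {"id": "A4", "amount": "oops"},   # not a number
]
for r in rows:
    print(r, validate_row(r))
```

The check is trivial, which is the point: the errors it catches are exactly the ones a "just be more careful" policy lets through.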
> It's a very common cop out by engineers/technical folks when something goes wrong somewhere else, to blame it on management/deadlines/customers/etc.
Look, I've been an executive and a management consultant for a long time now (I started as a sysadmin and developer), and it's quite often the case that everything was late (decisions, budgets, approvals, every other dependency), but for some reason it's fine that the planned 4 months of development is now compressed into 2.5 months.
I have been involved to some degree or another in probably close to 300 moderately complex to highly complex tech projects over the last 30 years. That's an extremely conservative number.
And the example I describe above is probably true for 85% of them.
Engineers aren't shy about eviscerating each other's work when mistakes are made; sometimes they're too eager, frankly.
Whole courses are built around forensically dissecting every error in major systems. Entire advanced fields are written in blood.
You probably don't hear about it often because the analysis is too dense and technical to go viral.
At the same time, there's a serious cultural problem: technical expertise is often missing from positions of authority, and even the experts we do have are often too narrow to bridge the levels of complexity modern systems demand.
How do we know that it was just a "human error" as the article/headline implies?
Answer: we do not know either, but this is the standard response so that companies (or governments, or whoever is concerned) are absolved of any responsibility. In general, it is not possible to know for a specific case until a proper investigation is carried out (which rarely happens). But most of the time, experience says that it is company policies and procedures that either create circumstances that make such errors more probable, or simply allow these errors to happen too easily due to a lack of guardrails. And usually it is due to the push to "ship products/features fast" with little concern for anything else.
It could be a different case here, but given that this is about Oracle, and bearing in mind that Oracle is really bad at taking accountability for anything going wrong, I doubt it (very recently they denied a hack of their systems and the leak of data from companies that use them until the very last moment).
Must be engineers who write requirements and unrealistic deadlines that lead to such issues.