AI is being pushed hard at work right now, even for non-dev stuff. The number of things people think are "awesome, never seen this before" is staggering.
Just because you haven't seen file format X converted to file format Y before, and now you asked the LLM to do it and it worked, doesn't mean you needed an LLM for it, nor that it's remarkable. The LLM knew how to do it because it learned from a bazillion online sources describing deterministic converters that cost nothing (and are open source). But now you're paying, every single time, for a non-deterministic version of it, and you find it cool. It's magic ...
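To make that concrete, a deterministic converter is often just a few lines of stdlib code. Here's a minimal sketch of a CSV-to-JSON conversion (the column names are made up for illustration):

```python
import csv
import io
import json

# A deterministic CSV-to-JSON converter in stdlib Python: same input,
# same output, every single time, and it costs nothing per run.
def csv_to_json(csv_text: str) -> str:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

print(csv_to_json("name,qty\nwidget,3\ngadget,5\n"))
```

No per-call fees, no chance of the output changing between runs.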
You'd be surprised by how many people are comfortable attributing something they do not understand to magic.
more than anything, AI lets people who couldn't, and wouldn't bother to, learn to write simple code sidestep the ones who can, and build solutions to scratch their own itch, faster too.
now human behavior kicks in, and they don't want to hand control back to the people who can code to solve problems.
put this together and you have a good model for understanding the AI sales pitch... it's magic
Oh, yes! As someone who has dabbled in card tricks, this so much. People don't understand how it's done, and can't imagine or conceive of any way it possibly could be done, so they attribute it to literal magic or demons or whatever. Like, no, I just distracted you for a split second and used sleight of hand.
Technology is no different: someone has never even considered that this thing could be possible, and now they see it with their own eyes? Incredible! They don't realise that it's mundane and has been possible (in much cheaper ways) for a long time. It was like a few years ago when some journalist posted an animation showing how Horizon Zero Dawn does frustum culling, and all the non-tech people were all "wow! This game unloads the game world when it's not in view! Incredible!", like... yeah? That's how games have worked since the advent of 3D?
If that is your manager, do so, sure. But first make sure your manager really is such a manager.
If I were your manager, and you sent me your seventeen-page AI-generated thing because you figured I'm just gonna summarize it anyway and I expect something long: you misread me.
I make a point, to anyone who will listen, of not sending me walls of text. I'm not gonna read them. I'm gonna ignore them and close your bug reports until I can understand them, because you spent the time to make them short and legible. If you use AI for that, I don't care. But I'd better get something short that makes actual sense when I read it and holds up when I verify it. If I wanted to just ask the AI, I'd do it myself. You have to add value on top of the AI if you want to be valuable yourself.
I agree. I send two-sentence replies to most things my boss's boss sends me. He's near retirement; dude doesn't want me to send him a book. He knows the thinking behind the work our team is doing is solid.
The only time I send something longer is if it’s a postmortem for some prod issue, which I write by hand.
I use AI every day, often multiple agents at once, but I know when it's appropriate and when I need to be the one thinking really hard about something.
TL;DR: If it's not cached, does it really matter if it's offline for some time?
Long version:
If you're so popular all around that you really, really want a very short TTL, people will query all the time from all the places that "count", won't they? So it's gonna be cached.
If you're not that popular, or not all around, what does it matter even if you had a very short TTL? You're not losing much.
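To put rough numbers on that intuition, here's a back-of-envelope sketch. This is a deliberately simplified model (evenly spaced queries hitting one resolver), not a real resolver simulation:

```python
# Fraction of DNS lookups a resolver answers from cache, given the record's
# TTL and the query rate that resolver sees. Roughly one query per TTL
# window has to go to the authoritative server; the rest are cache hits.
def cache_hit_ratio(ttl_seconds: float, queries_per_second: float) -> float:
    queries_per_window = ttl_seconds * queries_per_second
    if queries_per_window <= 1:
        return 0.0  # the cache entry expires before the next query arrives
    return 1 - 1 / queries_per_window

# Popular name, short TTL: 60 s TTL at 10 qps still means ~99.8% cache hits.
print(cache_hit_ratio(60, 10))
# Unpopular name: one query every 10 minutes; even a 60 s TTL never helps.
print(cache_hit_ratio(60, 1 / 600))
```

Either way, a very short TTL barely changes what actually hits your authoritative servers: the popular name stays cached, and the unpopular one was never cached to begin with.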
capitalism can work with, say, a 99% estate tax at death. No trust funds. A tax on wealth above a certain point. Rule of law with sharp teeth. Proper investment in education. Proper anti-monopoly enforcement so all large corporations get broken up to avoid their power consolidating...
communism is dictatorship in disguise.
then you have old-style feudalism with an aristocracy.
I know you're trying to be funny, but... technically it's 100% clear: you should talk HTTP, because that's the URL scheme here. The port makes no difference; you just happened to refer to a port by its well-known name. For all we know, I run my HTTP server on some NFS-related port so all the script kiddies try all the wrong exploits on it or something ;)
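To illustrate (the URL is hypothetical; 2049 just happens to be the IANA-registered NFS port): a URL parser reports scheme and port independently, and the scheme is what decides the protocol to speak:

```python
from urllib.parse import urlsplit

# The scheme in the URL, not the port number, determines the protocol.
# Port 2049 is the well-known NFS port, but this is still plain HTTP.
url = urlsplit("http://example.com:2049/index.html")
print(url.scheme)  # http
print(url.port)    # 2049
```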
“people like me are still needed” is just a desperate form of self-persuasion.
No, no it's not. I've seen what a "PM armed with an LLM" will do. Trust me, if you're a decent enough full-stack software engineer who can take an idea and run with it to implement it, you'll have a leg up over the PM who has the idea but no idea how to "do computers".
Most of what these PMs can produce nowadays turns boardroom heads, sure. But it's just that: visuals, plus just enough prototype functionality to fool the people you're demoing to. I've seen enough of these in the recent past.
Will there be some PMs that can become "software developers" while armed with an LLM? Sure!
But that's not the majority. On the other hand, yes, there are going to be "software developers" who will be out of a job because of LLMs, because the devs who were full-stack and could take an idea from 0 to 1 with very little overhead even in the past can now do so much faster and go further without handing off to the intermediates and juniors. They mentor their LLM intern rather than their intermediates and juniors. The perpetual intermediate devs with 20 years of experience are the ones that are gonna have a larger and larger problem, I'd say.
The Staff engineer who was able to run circles around others all along? They'll train their LLM intern up to an intermediate rather than having to "10x" a bunch of perpetual intermediates with 20 years of experience.
I agree with you overall, yet there’s one flow that works for me.
Instead of speccing out a feature, I let PMs vibe code it.
I then have the exact reference I need to build.
Maybe the LLM one-shotted it the right way, maybe it needs fixes, maybe some fundamentals are misunderstood; in any case it's easier for me to know what I need to build, for the PM to be aware of some limitations (the LLM does the job of pushing back and explaining), and overall for us to have to-the-point conversations.
It's somewhat orthogonal to what you say, since you focused on dev seniority, so that part still stands.
But I think “PMs armed with an LLM” can, when properly used, add a lot of value to the dev process.
> I agree with you overall, yet there’s one flow that works for me. Instead of speccing out a feature, I let PMs vibe code it. I then have the exact reference I need to build.
Like BDD, but with something more accessible than Cucumber. I'm totally here for that.
It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase. From a corporate standpoint, having that would be excellent business-logic-as-code, if the code is coming from a PM or a stakeholder on the business side of the house. From an engineering standpoint, it would be an excellent addendum to the codebase's documentation.
FWIW, BDD and frameworks like Cucumber don't work at all in my experience. The people who'd need to fill these out don't do it properly (they can't), and then we devs are stuck with brittle, undebuggable stuff that's worse than if we had just used regular code to encode what we understood from them.
It's the same reason (most) PMs armed with an LLM still won't get anything usable done. They can't do it properly. They still need devs. But the gaps are shrinking. A few PMs could get stuff done with Cucumber, could wireframe UX with earlier tools, and can now do it so much more easily and better with an LLM.
It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase
I doubt you'd want this. It's a chat session for a reason. It's gonna be a huge wall of text, especially if you meant to actually include all the internal prompting the LLM did while it was working. You'd also get all my "no dude, stop bullshitting me! I told you to ignore X and use Y and to always double-check Z and provide proof".
It would only "work" if every single feature you wrote was 100% written by the LLM from a single, largish, well-defined prompt: the LLM works for a few hours and out comes the feature. And even then you have no reproducibility (even if you turned around and gave the same prompt to the exact same model; never mind retraining, newer models, changed system prompts, etc.).
There are ways to work around the single-wall-of-text issue.
Mostly, Git LFS.
When it comes to "no dude, stop, etc. etc."... that is valuable information. You can extract it and put down rules for agents so that you stop repeating it each time.
The same can be done at PR time, so that you can review not just the code but also how you got there.
It's trivial to go from a session to nicely polished HTML with a side-by-side conversation.
If you want to try, username at gmail, I have a private repo with it running.
I value criticism; sorry for the plug ;)
Oh, on the different-models side, I don't see the advantage of reproducibility; or rather, I don't think I understand what you mean. Can you help me see it?
I don't understand how "wall of text" is related to Git large file support. The wall of text is a problem for me, the human. Sure, there are ways, like "be brief", caveman-speak, etc. In a large repo with lots of different people over time, I can't see how it won't just be a wall of text again. It's just too much. TL;DR. And because nobody reads it, the LLM will have buried bullshit in that text, which a future session might read and "believe".
As for "no dude", no, that can't be put down into rules. Not all of it, anyway. We have stuff encoded in the repo-wide md file, I have my personal one, etc., and the various agents still don't do what we tell them to in all cases, or a new model comes out and it no longer works. For example, for finding the root cause of a bug, it's very important to have actual proof and references. It's getting there with my instructions in the .md, but it doesn't always work and I do have to "dude" it from time to time.
Is that back-and-forth valuable to have in files that are going to be part of the repo? I very highly doubt it. Having new rules that came out of the back-and-forth in a checked-in AGENTS.md, sure, that is valuable. Or nowadays in individual "skills", like a "root cause finder" skill that can have very specific instructions about being thorough in proving its "found the smoking gun" BS ;)
I've seen enough PR descriptions created by the agent. A fluffy wall of text that looks good but is factually wrong. I've seen it way too many times. Too many people just look at whether it looks good and then pass it off as truth. I'm tired of it, and turning that into "nice HTML" doesn't make it better. It just makes it look even nicer, not more true.
Re: reproducibility. My parent poster (and I guess you as well) wanted to have the prompt/conversation as "documentation". I don't see why that would be helpful. The only reason I could see would be for "reproducibility", which you won't get with an LLM. I don't see why else, but do tell me.
What I can agree could be valuable are the "why"s. I.e. the stuff that already should have been part of the ticket/requirements document. If you want to store that inside the repo as text files, instead of the original tickets or documents, that's fine of course. But I don't see how a "recording of how the code came to be" is valuable. It's like having a recording of all my IDE keystrokes and intermediate code state in pre-LLM days. Not valuable. What's valuable are the requirements and the outcome (i.e. code). Not "the thing in between".
Now don't get me wrong. Recordings of how people code/use their IDE can be a valuable teaching tool. Both as good and bad examples. And the same can be true for an agent coding session.
I misunderstood "wall of text" (I was thinking about bloating the repo with it); my solution to understanding is just to create ad-hoc tools to parse the JSON.
I coded a web UI with simple toggles: show me what the user said, what the LLM said (nice to see what I was thinking about, nice to see how the LLM came up with solution X; you get tool calls; maybe it found something I didn't think about, or vice versa).
You can search/grep (e.g.: did I consider idempotency when I built feature X? Open the session, search/grep "idempotency").
You can, up to a point, resume the conversation (yes, I know, cache busting makes some usages of this impractical, but in general resuming and asking "when we did this, did we think about that" tends to work... let's say research is OK; time travel, meh).
Overall, one of the advantages of LLMs is being able to direct them at data for insights: via standard CLI tools, via specific prompts, or by building some mini tools (yeah, vibe coding is fine sometimes).
Whatever my question, if I have the data, I can have an answer.
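The grep-a-session idea above can be sketched with a tiny parser. This assumes a hypothetical JSONL session format where each line is a turn with "role" and "content" fields; real harnesses each have their own schema, so treat this as illustrative:

```python
import json

# Sketch: grep a recorded agent session for a topic, e.g. "did we consider
# idempotency when building feature X?". Returns (line, role, content) hits.
def search_session(jsonl_text: str, needle: str):
    hits = []
    for i, line in enumerate(jsonl_text.splitlines(), 1):
        turn = json.loads(line)
        if needle.lower() in turn.get("content", "").lower():
            hits.append((i, turn["role"], turn["content"]))
    return hits

# A two-turn toy session in the assumed format.
session = "\n".join([
    json.dumps({"role": "user", "content": "Make the webhook handler idempotent."}),
    json.dumps({"role": "assistant", "content": "Added a dedup key per event id."}),
])
print(search_session(session, "idempotent"))
```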
LFS helps with the second aspect (buried bullshit). Unless you smudge, all you have in the repo is a pointer, and that is just 3 lines.
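For reference, the pointer Git stores in place of the blob looks like this (the oid and size below are illustrative):

```
version https://git-lfs.github.com/spec/v1
oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393
size 12345
```

The actual session content lives in the LFS object store, not in the commit itself.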
You need to learn some ergonomics, but OK, some of us learnt how to use Jira XD
Taking your position a bit further: yes, committing chat sessions implies that you also need to review them so that bullshit doesn't filter through. Mileage varies based on your personal preferences, which project you are working on, and many more heuristics.
Some will find it boring, some will think it's good project maintenance, all should be able to find a way to handle this based on their preference.
It is also nice to point out that cleaning bullshit doesn't need to happen at merge. With LFS blobs stored separately, you can have side flows helping you out, without clogging your CI pipelines.
"no dude"-> rules
you can put down SOME rules
Usually this happens to me at PR time. I am tired of saying "you should always check X", so I bolt it down "someplace".
I am going through the usual motions I suppose most of us try to adopt: put it down in AGENTS.md, in folder X or Y, in path-scoped rules, in memory files (I am exporting/importing those too), in subagents that review code before PRs.
in the end it's an unsolved problem at large, but
1. Hopefully it will get better; my feeling is that this is just a Cambrian explosion, and the fittest will survive... (also, owning the harness should help, I suppose... I use Claude Code :D)
2. In a team, having personal styles surface is valuable. "Dude, don't do that" is quite often... design. When rules go in the repo, at least we can find an agreement in person, and at PR time it's not about linking a document you read at onboarding, but finding out why the agent did not respect the rule. To me, that is more grounded in a tool.
3. Rules are... not static? We change our minds? We get better at things? We want to experiment? I am not advocating for a perfect rule system that replaces me, but for a good-enough one that removes cruft from my daily job.
I think my approach is actually helpful when it's time to find root causes (YMMV). Via tools that parse sessions, you can see when that specific portion of code was written, with better granularity: during that bit of the conversation, the user was worried about X and asked the AI to do Z, and the AI read this and that file, "thought" this and that, and wrote that piece of code.
Maybe the user was making wrong assumptions, maybe the LLM did not read the correct files or instructions, in any case you have a better tool for investigation.
It's up to you to decide whether to use it, and whether this leads to just solving the bug or also to fixing instructions, etc. I am just saying that it actually helps to have some measure of the context in which this change made sense.
"Fluffy wall of text that looks good but is factually wrong. "
It might be good or bad, right or wrong, but what is in the sessions is the truth of what happened.
PR descriptions are horrible; I share that feeling with you. But having the story of how that thing happened is just not the same as a "final summary of what we did in the past X hours".
As a sideline: LFS doesn't really pollute your repo once you learn its ergonomics. Having chats in LFS also gives you a way to approach this.
Reproducibility...
To me, those conversations are basically the history of the decisions taken while implementing. They are documentation.
The real problem with docs was that no one has ever liked writing them, nor was it easy to standardize around them.
If you just record/log, there's no extra effort needed, and once the logs are there, tools and LLMs are pretty good at helping us extract insights.
I am also assuming there is a correlation between quality in the conversation and quality in the code. I know, I'm being hand-wavy, but overall I think critical thinking is what makes code better, and being able to see if and where it has been applied can be a good proxy.
I ask for forgiveness in advance: I am not going down the rabbit hole of quantifying quality, etc. It's a broad statement that should be taken with a grain of salt.
If you want to go abstract, you can think of coding as going from thoughts to bits, 0s and 1s. We have higher-level languages that help us organize thoughts so we can better keep them within our cognitive load.
LLMs are an upper layer that scrambles the code and makes it more difficult to grasp.
But the reasoning behind the code is now available, and quite easy to parse.
I think this is the core point to me.
Code is an intermediate artifact between thinking and bits.
Now we have a second artifact: the conversation/decision that led to that code.
Why are we not storing it?
disclaimer:
I am, of course, mildly in love with my own project and ideas, so possibly I like this too much just because I built it. IKEA effect or whatever.
Haha, OK, we both tend to "text-wall" it seems, so neither of us should complain about LLMs. Or I guess: now we know how everyone always felt reading our stuff :P
no dude rules
Yes, I have these. That's how, when I have it investigate, it outputs files and line numbers, for example when the investigation is in our code base. But it still makes up stuff all the time. You need spidey senses that tingle, and many people don't have them.
Just very recently, I saw a PR comment on why someone chose to do something in that particular way and what the other, bad options would've been, i.e. justifying their choice (at least they did do the "calling out" part). I had to comment about how none of that made any sense to me and ask why we didn't just do "other thing Y". Well, turns out the AI had misled them, they believed it, and it went downhill into a rabbit hole from there. I do believe that with the right spidey senses, even in an "unknown situation", it's entirely possible to come out the other end. But many if not most people succumb to the AI's nice, "sounds true" type of language.
As a sideline: LFS doesn't really pollute your repo
LFS doesn't. Walls of text do, whether you use LFS or not. I.e.
no extra effort needed, and once there tools and LLMS are pretty good at helping us extract insights.
Nobody's really gonna read all that. The only way to get through it is to use LLMs, e.g. through summarization. That doesn't solve anything, though. LLM summaries are very often wrong. It depends on the text/conversation and the LLM, but have you tried Slack's thread summarization? Ouch! I've also tried having Claude make tickets from Slack threads. Ouch, but less so. Still needs polishing. And more time polishing than it would've taken me to just type up the ticket myself. What LLMs are good at is taking the actual "meat" you put down and "fluffing it up". But sorry, I'd rather just have the meat and skip the fluff entirely.
Most LLM-assisted bug reports, on the other hand, are huge walls of text with a low signal-to-noise ratio. I.e., essentially the old
If I Had More Time, I Would Have Written a Shorter Letter
Famously, the first known instance in English was apparently a translation of a sentence by the French mathematician and philosopher Blaise Pascal. The French original appeared in a letter in the 1657 collection "Lettres Provinciales". It totally, absolutely, 150% applies to LLM use ;)
critical thinking is what makes code better,
Absolutely! And the issue with LLMs is that they tend to make it less likely for people to apply critical thinking. Even people who (I at least thought) applied it in the past. "Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results." https://time.com/7295195/ai-chatgpt-google-learning-school/
Btw, I write all of this as someone who has been coding exclusively with Claude Code and Codex for more than 6 months now. On purpose.
I like this chat, if you want we can continue this privately (username Gmail).
You are bringing up valid points, and we have scars from the same battles.
Given all this, what is bad about preserving the conversation that led to the code creation?
It might be wasteful, sure, if it never gets used.
It might be bad, if it’s misused.
But in the right hands (or with the right tools) it holds value.
Presently, we might not have them (agree to disagree to some degree), but given enough time, won't we regret not having stored it, if better tools emerge?
There are many more angles.
E.g.: you mention damage to critical thinking. And I agree about it.
Yet some conversations are better than others on this aspect.
The conversation doesn’t magically make you develop spidey senses, but if I had to learn a new project/skill, wouldn’t a selection of conversations + code be better training material than code alone?
I tried to stay light so some terms are overloaded and some concepts oversimplified.
Hehe, same, this is fun and enlightening, both because of my own reflections in order to reply to you and in seeing your take on things.
I don't mix identities tho, so HN it must stay.
what is bad about preserving the conversation that led to the code creation
The same thing I find bad about mixing online identities ;) It's surveillance. The kind that I don't like and will avoid whenever I can. So I cannot in good conscience want to make everyone on the team buy into that. It's like every single conversation ever being recorded forever and ever. Youthful sins "staying in Vegas" is a blessing, not a curse, so to speak. Maybe I'm just too old, who knows.
Now, "point in time" learnings from conversations: very valuable indeed! Whenever I talk to team members and catch something that was potentially "just believing the AI", it usually was, and yes, it would really be valuable to see their actual interaction with the AI. Maybe they still have it around and we dig in together. What I also do is show them how I write prompts to get the results I get. Sharing and learning, definitely.
But nobody needs to commit my literal "WTF DUDE!" to git ;) Yes, yes, I do swear at it, and if they ever take over, I'm dead; they're gonna come for me. It's a fun outlet, actually. I do not have to "compose myself" and write a very nice message as I would with an actual intern. I can just outright tell it what kind of BS it concocted yet again.
I absolutely understand why you, and also Anthropic et al., would want my actual conversation data for learning, and I hope they honor their pledge not to do so on our corporate accounts. Statistical models live off data like this. I'm not gonna give it up just like that. I'm fine fine-tuning the machine to my liking, making local or company-wide shared skills, absolutely.
Surveillance is everywhere you let it be. I'm sure you've seen the Flock posts on HN. Now think: "a Gallup-type thing is set loose on your actual AI conversations to figure out if you should be fired". You swear at AI, you must be part of the next layoff. WTF? Why? Similarly, one of my besties at work: we always joked around in ways that, if someone unfamiliar with us overheard, they'd probably think we were fighting. We were having the fun of our lives. But nobody would think that. It was all in an office or at lunch, and nobody would record us. But now translate that to in-writing, always-recorded "little outlets". You'd have to self-censor.
That's neither fun nor healthy. It's like the Covid/remote-work vs. in-office difference, if you ask me. For many, many years, working in offices, I'd come home, usually after way too much commute both ways, totally drained. Nothing left for the family. I'm an introvert, so just regular office life is draining. Covid was the best thing that ever happened to me, since we've been remote ever since. I can leave work and still have "social budget" left. It's so awesome. Why I bring this up: because working with the AI intern is just as freeing. I literally have it work for me like it was an intern. But I do not have to be "careful", I don't have to be "nice", I don't have to be in "teaching mode and spend 3 hours on something I could've done myself in 20 minutes". I can just say "WTF dude! That's BS, adjust the skill so this never happens again" and a minute later it's done. In contrast, I recently spent 20 minutes talking to a "Senior" someone just to get them to abstract to a higher level and answer the important customer-focused question on some problem, instead of doing a technical deep dive yet again.
Sorry, tangent </rant> :P
On the spidey senses: well, guess what, this is still an economy where my and their skills matter. They swim in the shark tank or they sink. I'm not gonna do their work or their learning for them. I'll help them along to a point, but at some point they gotta learn to outswim the shark (or, if you like the lion metaphor better, to run away from the lion faster than the next guy).
Stopped midway through reading it to clarify something.
I don’t want your conversations :)
Anthropic has it and this is beyond me.
My plugin commits to your repo.
When it comes to keeping the "WTF DUDE" out of conversations, LFS gives you a neat trick.
You can edit LFS blobs independently of the Git repo (different storage), so up to a point you can edit things out without touching Git history (with caveats; it's a rabbit hole).
Also, I think the inflection point is making it public. Git helps: just "fork" the repo without LFS to publish code only, or with a "sanitized" LFS (it just needs a touch of tooling to play with it).
I am also shipping a hook that sanitizes secrets by default (because security) and can also be used for keeping parts of the conversations... "tidy".
I have built the "clean up swearing" feature. Yes, sorry, it's LLM turtles all the way down if you want it automated, and extra cruft. But that's also OK?! I have a concern, I want to address it, I need to put in some extra work...
I just want to clarify that privacy is my concern too and I have found that it’s not impossible.
I didn't start coding until I found out that there is a way to contribute to a repo without participating in the "sharing conversations" game. (Not difficult: it's your machine.)
I am not publishing the repo until I have had enough conversations like this to introduce different opinions in my line of thought, especially around non technical hurdles.
My biggest concern is “why the hell would I teach an LLM that much of me, knowing very well this is how I will automate myself away”.
But even then, it's either Anthropic doing it, or me (coder, not plugin owner) AND Anthropic doing it.
I am not advocating to giving away my skills for free.
One feasible variation of this whole record conversation is “commit code to company repo, commit reasoning to MY lfs”.
Why not? It’s my critical reasoning!!!
I understand you may not want my conversations, and I might believe you. You seem like a nice dude.
I don't want my conversations to be recorded forever. I need my private corner. As an analogue: I want to be able to talk to some guy at the office without listening devices recording me. I want to be able to shut a door so nobody else in the office can listen in. I don't want to be forever forced to have every single conversation in front of the entire office.
That's what me talking to my intern is. I'm not gonna spend time to "sanitize" a conversation. I won't trust an LLM (or your code/LLM prompts) to sanitize my conversations. Heck, me saying "WTF, you stupid piece of electrons floating in the ether" is literally what made the probability machine take the turn that produced a stroke of genius from its training data. Whatever is valuable (the outcomes, plans, requirements, system invariants, etc.) I'm entirely fine to put in the repo. But: I am the one putting it in the repo.
We do that at work with the "AI first" projects. There's a lot of documentation to keep the LLMs that everyone, including PMs and designers, now uses on the same page. Essentially, a lot of the stuff that used to float around in people's heads or in various other places like the ticketing system or wiki is (supposed to be) kept inside the repo (or a separate "docs" repo).
Regarding automating away: totally agreed, and models have come a long way in a short time, but they are still not there. And if "coders automate themselves away" so that "PMs can now code" is the thing, well, then I'll be the better PM, the one who knows how to get the LLM to do their bidding better than the PMs who will "vibe themselves into a corner". Like, when we talk to our PMs and designers about how we make the AI know all these things so we can move as fast as we do, they generally just don't comprehend; they can't follow, can't replicate.
As for self-recording your own conversations and learning from yourself for yourself, the same way you learned more/better coding techniques for yourself: Yes absolutely and that's what I'm talking about. I do have a CLAUDE.local.md and I'm sure there's stuff in there that isn't just "personal preference" but actually helps me be better w/ Claude than others. I'm not sure I could tell you which parts those were though to be honest. Same way I try to teach some of my techniques to others. I gladly help them troubleshoot and they can learn from seeing me and how I come up w/ the stuff I come up with. Most people don't pick up on it or don't even pick it up when I explicitly tell them. Their loss. I guess some of this is https://news.ycombinator.com/item?id=48109460 ;)
What I'd love to see is videos of nontechnical folks using language models to create software.
When I use them myself, I just see them crushing it and think: this thing is now doing my job for basically $0; I am no longer economically relevant. But I've spent a lifetime learning to program, so it's possible I only get good results because of the way I think to prompt it.
I really can't get the outside view so I can't decide whether AI is going to make me homeless or not. I think we need the videos.
If you need comfort, just read the story from the week where a "technical" founder gave the LLM full access to their production environment and it wiped everything.
Oddly, devops seems to be the "last bastion" of our trade, as they seem to be the only ones pushing back against PM vibe-coded stuff. Usually, while those projects look aesthetically pleasing, they start to fall apart when met with devops requirements for environment variables, cybersecurity, etc.
I agree with you. So far, what I see is that AI amplifies an individual's output in many domains, but the value of that is 100% contingent on their judgment. It changes the economics of many tasks, but fundamentally it can't really help you if you don't actually know what you want, which describes a shocking number of people in the corporate world, where most are there for a paycheck, and perhaps to pursue some social marker of "success".
I'm under no illusions about the goals of AI company execs to justify their valuations (and expenses!) by capturing a huge chunk of global employment value, or about the CEOs of many big companies whose financials are getting squeezed for all sorts of reasons and who are all too happy to jump on the AI efficiency narrative to justify layoffs that would have been necessary anyway. Also, AI will keep getting better, and it will certainly move up the food chain: it has already replaced a lot of what I did, and I assume capabilities will continue improving for a while, even after model capabilities plateau, as we improve harnessing, tooling, and practice.
So yeah, it can replace a lot of what we do, but I'm not running scared, because every step of the way I've seen that software people are the ones who actually get the most out of LLMs. Sure, it can write all the code and the job changes, but even as our workflows completely change, it gives us more of an edge (if we're open to it) than it gives anyone non-technical. At this stage it still feels empowering on an individual level.
Now I do worry about the consolidation of power and wealth in a tech oligarchy, but that's an issue we need to deal with at a societal and government policy level. Essentially, I can see AI as having radically different outcome potential based on how it's governed. In one way it can be very empowering to small teams, and reduce coordination costs, and increase competition by allowing smaller groups of people to make more scalable companies. But it could also lead to unprecedented concentration of wealth and power if a small set of AI companies are allowed to capture all the economic gains. I don't think there are any easy answers, but I do feel hopeful that we can figure something out as a society—it certainly seems to be creating some unified sentiment across political lines that have been so polarized and divisive over the last decade.
The 1000x amplification is the problem for our jobs. However, I do agree that experienced developers are needed to actually harness these tools. I’ve been able to do wonders with them, but I can’t see a junior dev doing 10% of the work that I can with them.
You'd be (half-jokingly) amazed at how many people are entirely incapable of understanding and debugging an existing code base.
Like, literally: "Error message has string 'abdc-1234-something-whatever'". They can barely figure out that they should search the code base for that error message. Unfortunately they can't find the full string. Now they're stuck and can't think of anything else to try.
So, effing, amazing. How do these people ever get through (coding) life? Ever heard of substring search? Error messages frequently have parts that are concatenated or variables inserted, so search for parts of the message until you find something. It's not that hard, dude: yes, the 1234 is probably some dynamic id, so search for just something-whatever and you'll instantly find the relevant code and can debug further.
But no, this "Senior" can't think of anything when they don't find the full string anywhere in the codebase, and would rather throw up their hands and let others figure it out.
Either this is a really dumb "Senior" who somehow got by at previous companies, or they're quiet quitting during their probation period already.
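For what it's worth, the trick above fits in a few lines. This is a toy illustration only — the file names and code base are made up, not anyone's real project:

```python
# Toy sketch of substring-search debugging: the logged string
# "abdc-1234-something-whatever" contains a dynamic id, so the full
# string never appears literally in the source. Try progressively
# shorter static fragments until one matches.

def find_fragment(codebase, fragments):
    """Return the first fragment that matches any source file, with hits."""
    for fragment in fragments:
        hits = [name for name, src in codebase.items() if fragment in src]
        if hits:
            return fragment, hits
    return None, []

# Simulated code base: the message is assembled at runtime.
codebase = {
    "handler.py": 'raise RuntimeError(f"abdc-{req_id}-something-whatever")',
    "routes.py": 'app.add_route("/health", health)',
}

print(find_fragment(codebase, [
    "abdc-1234-something-whatever",  # full string: no match (id is dynamic)
    "something-whatever",            # static tail: hits handler.py
]))
# -> ('something-whatever', ['handler.py'])
```

In practice this is just `grep -rn "something-whatever" .` repeated with progressively shorter fragments.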
For every class with a required language, the language itself was not taught during class.
This.
We did have the obligatory "learn language X" course, which used to be C and had just switched to Java at our uni. But it was just to learn a language.
The Operating Systems class used Minix in a VM to teach concepts. You were expected to just use Minix's vi (that was fun!) or somehow figure out for yourself how to best work outside the VM while still having a fast enough feedback loop. Nobody taught us C, since we were already a Java year; you were just expected to figure it out. Quite a few people couldn't and literally quit CS entirely over that course. I loved it! We had to add a memory compaction algorithm to Minix to pass the course. That was a fun learning experience! And it made me appreciate modern (well, I guess this was like 25 years ago now, lol!) operating systems.
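For anyone curious what that kind of assignment involves: memory compaction just means sliding allocated segments together so that free memory becomes one contiguous hole. A purely illustrative toy sketch, nothing to do with the actual Minix code:

```python
# Toy memory compaction: pack allocated segments toward address 0
# so fragmentation disappears and free space forms a single hole.

def compact(segments):
    """segments: list of (base, size) tuples for allocated blocks.
    Returns a new list with the same blocks packed from address 0 up."""
    packed = []
    next_base = 0
    for base, size in sorted(segments):
        packed.append((next_base, size))  # "move" the segment down
        next_base += size
    return packed

# Three blocks with holes between them:
print(compact([(0, 4), (10, 2), (20, 5)]))
# -> [(0, 4), (4, 2), (6, 5)]  (free memory is now one hole from 11 up)
```

The hard part in a real kernel is not this loop, of course, but actually relocating live data and fixing up every reference to it.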
Scripting languages course: pick one, do a project, and turn it in. That was it. (I turned in something I had written in Perl as a side job for hire and got an A without doing any coursework at all, except for the two initial attendance-checked classes.)
How many military bases does the US have in Switzerland again?
Oh, the number is zero?
Germany? Well guess what, the US has a very prominent airbase and listening station in Ramstein and a bunch of other military installations there. Also: History.
But none of that benefits Germany that much. Being in an alliance like NATO, with a duty of mutual protection, benefits them, but American military bases are set up primarily for American benefit.
Germany would rather want them in Poland or thereabouts, near Russia, which is an actual threat.
Isn’t the main benefit to America of bases in Germany connected to its commitment to defend European security? If the U.S. didn’t have NATO obligations, how much would it benefit from having German bases?
To be able to take territory when it wants something, like Greenland? Because speaking of NATO, exactly one country has ever triggered Article 5, and it then threatened other members.
Second, and more to the point, they have a rather large military hospital there. It is also used as a safe place to transfer troops through, or to stage them for operations outside the EU.
Commitment to European security would mean not supporting Russia, or stationing soldiers in Poland and Latvia. As of now, European countries have paid for missiles and the US is refusing to deliver them.
America complained and applied pressure whenever Europeans bought arms from non-American manufacturers. The first time it started to matter for real, it stopped delivering.
There was a brief unfortunate episode a couple of hundred years ago, when the US was a little over twenty years old.
Some North Africans started capturing American citizens and ships, maybe enslaving some, etc. Really quite unpleasant. The US eventually decided that its best option was invading Tripoli.
It didn't have a commitment to defend anyone nearby, except the many Americans who traded all over the world.
The US has worldwide trade and interests now too, more so than in 1805.