How can it set the system prompt when using Claude Code, when even Claude Code itself doesn't support adding to the system prompt? It does have "--append-system-prompt", but despite its name that's actually just a user message sent on startup.
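For reference, this is the flag being discussed, as a usage sketch (assuming a recent Claude Code CLI; the prompt strings are made up, and whether the flag truly extends the system prompt or is delivered as a startup user message is exactly what's disputed above):

```shell
# Hypothetical one-shot ("print mode") invocation; requires the claude CLI
# to be installed and authenticated. The --append-system-prompt text is a
# placeholder example, not a recommended prompt.
claude -p "Summarize the structure of this repository" \
  --append-system-prompt "Be terse. Do not propose new abstractions."
```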
If you let an LLM generate it (e.g. Claude's /init), it'll be a lot more verbose than it needs to be, which wastes tokens and deemphasizes any project-specific preferences you actually want the agent to heed.
I really don't understand what's wrong with people using LLMs for these types of mundane conversations. There's nothing to gain, and it destroys the value of online discourse.
I don't think anyone is using LLMs for those conversations. A lot of those replies are bots. There's a market for reddit accounts that have a solid human-looking reply/post history, to be used for astroturf marketing, so some organizations set up bots to grow such accounts. There probably are also just people who overuse "Honestly? [statement]" sentences. I've spoken to such people in person before LLMs.
> If you ask to unify the duplication, it'll say "No problem, here's a brand new metamock abstract adapter framework that has a superset of all feature sets, plus two new metamock drivers for the older and the newer code! Let me know if you want me to write tests for the new adapters."
Never mind the fact that it only migrated 3 of the 5 duplicated sections, and hasn't deleted any now-dead code.
It's not reality. I'm really not a fan of the way that people excuse the really terrible code LLMs write by claiming that people write code just as bad. Even if that were true, it is not true that when you ask those people to do otherwise they simply pretend to have done it and forget you asked later.
Yes and both are right. It’s a matter of which is working as expected and making fewer mistakes more often. And as someone using Claude Code heavily now, I would say we’re already at a point where AI wins.
> it is not true that when you ask those people to do otherwise they simply pretend to have done it and forget you asked later.
I had a coworker who did more or less exactly that. You'd leave a comment in a ticket about something extra to be done, he'd answer "yes, sure," and after a few days he'd close the ticket without doing the thing you asked. Depending on how much work you had at the moment, you might not notice until months later, when the missing thing would come back to bite you in bitter revenge.
You may have had one. It clearly made a pretty negative impression on you because you are still complaining about them years later. I find it pretty misanthropic when people ascribe this kind of antisocial behavior to all of their coworkers.
It's still relatively recent. Anyway, I'm not saying everyone is like this (not even a significant fraction), but such people do exist.
At the same time it's not true that current LLMs only write terrible code.
"Even if that were true, it is not true that when you ask those people to do otherwise they simply pretend to have done it and forget you asked later."
The point is, that's not the typical experience and people like that can be replaced. We don't willingly bring people like that on our teams, and we certainly don't aim to replace entire teams with clones of this terrible coworker prototype.
Not only have I never had a coworker as bad as these people describe, the point is as you say: why would I want an LLM that works like these people's shitty coworkers?
My worst coworkers right now are the ones who use Claude to write every word of their code and don't test it. These are people who never produced such bad code on their own.
So the LLMs aren't just as bad as the bad coworkers, they're turning good coworkers into bad ones!
A couple of reasons, but mainly speed and availability.
I can give Claude a job anytime and it will do it immediately.
And yes, I will have to double-check anything important, but I am way faster at checking the work than at doing it myself.
So obviously I don't want a shitty LLM as a coworker, but a competent one. The progress they've made is pretty astonishing, though, and they are good enough now that I've started really integrating them.
In the long run, good code makes everyone much happier than code that is bad because people are being "nice" and letting things slide in code review to avoid confrontation.
Maybe, but it lets them pump out much, much more code than they otherwise would have been able to. That's the "100x" in their AI productivity multipliers.
What do you mean nobody is talking about tool schema bloat? Everybody is talking about it; it's why the general recommendation is to just use the CLI whenever possible.