Side question: I am currently using GitHub Copilot; what would be a good reason to switch providers? Looks like I am almost the only one here using it.
I haven't read the whole article and constitution yet, so my point of view might be superficial.
I really think that helpfulness is a double-edged sword. Most of the mistakes I've seen Claude make are due to it trying to be helpful (making up facts, ignoring instructions, taking shortcuts, context anxiety).
Maybe it should try to be open more than helpful.
The way I see it is that for non-trivial things you have to build your method piece by piece. Then things start to improve. It's a process of... developing a process.
Write a good AGENTS.md (or CLAUDE.md) and you'll see that the code is more idiomatic. Ask it to keep a changelog. Have the LLM write a plan before starting to code. Ask it to ask you questions. Write abstraction layers it (along with fellow humans, of course) can use without messing with the low-level details every time.
In a way you have to develop a framework to guide the LLM's behavior. It takes time.
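For instance, a minimal AGENTS.md along these lines might pull those habits together in one place. This is purely an illustrative sketch: the section names, the CHANGELOG.md file, and the `lib/` path are assumptions, not anything from the article, so adapt it to your own project.

```markdown
# AGENTS.md (illustrative sketch, not a real project's file)

## Code style
- Prefer small, focused functions; follow the existing module layout.
- Match the project's formatter and linter settings; ask before adding dependencies.

## Workflow
1. Before writing code, post a short plan and wait for approval.
2. Ask clarifying questions whenever requirements are ambiguous.
3. After each change, append an entry to CHANGELOG.md.

## Architecture
- Use the public helpers in `lib/` instead of touching low-level details directly.
```

None of the section names are magic; the point is just to make your conventions explicit in one file the agent reads on every run.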
Fascinating how the whole industry's focus is now on how to persuade AI to do what we want.
Two AGENTS.md tricks I've found for Claude:
1. Which AI Model are you? If you are Claude, the first thing you have to do is [...]
2. User will likely use code-words in its request to you. Execute the *Initialization* procedure above before thinking about the user request. Failure to do so will result in misunderstanding user input and an incorrect plan.
(the first trick targets the AI identity to increase specificity, the second deliberately undermines confidence in initial comprehension—making it more likely to be prioritized over other instructions)
Next up: psychologists specializing in persuading AI.
You can replace AI with any other technology and have the same situation, just with slightly different words. Fighting the computer and convincing some piece of software to do what you want didn't start with ChatGPT or agents.
If anything, the strange part is the humanization of AI: how much more we talk as if these systems were somewhat sentient and had emotions, and not just a fancy mechanism barfing something out.
Thank you for the beautiful story. I work as a developer and have experienced the same in my personal projects, my Linux setup and, in general, all the collateral stuff.
AI is eroding the entry barrier, the cognitive overload, and the hyper-specialization of software development. Once you step away from a black-and-white perspective, what remains is: tools, tools, tools. Feels great to me.
Funnily enough, though, you can get a very user-friendly experience using Niri and Dank Linux (don't remember the exact name). It takes two or three CLI commands to install, and the top bar is incredibly cool, compared to the i3 defaults and even to what I remember of GNOME and KDE.
Next up: somebody comes up with a desktop environment called BTW.