The GPT models, in my experience, have been much better for backend work than the Claude models. They're much slower, but they produce clearer logic and more maintainable code. A pattern I use: set up a GitHub issue with Claude plan mode, then have Codex execute it. Then come back to Claude to run custom code-review plugins. Then, of course, review it with my own eyes before merging the PR.
My only gripe is I wish they'd publish Codex CLI updates to Homebrew at the same time as npm :)
Interesting, I have consistently found that Codex does much better code reviews than Claude. Claude will occasionally find real issues, but frequently bikesheds things I don't care about. Codex consistently finds things that I actually care about and that clearly need fixing.
This is solvable at the level of an individual developer. Write your own benchmark from code problems you've already solved. Verify that tests pass and that the model meets your metrics, like tokens/sec and time to first token (TTFT). Create a harness that works with API keys or with local models (if you're going that route).
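A personal harness like that can be sketched in a few dozen lines of Python. The streaming client is injected as a plain function, so the same harness works against an API key or a local model. All the names here (`run_case`, `Result`, `score`) are illustrative, not any particular framework:

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterator


@dataclass
class Result:
    name: str
    passed: bool
    ttft_s: float       # time to first token, in seconds
    tok_per_s: float    # throughput after the first token


def run_case(name: str, prompt: str,
             check: Callable[[str], bool],
             stream: Callable[[str], Iterator[str]]) -> Result:
    """Run one benchmark case against a streaming model client.

    `stream` yields tokens for a prompt; inject whatever client you
    use (hosted API or local model) behind this signature.
    """
    start = time.perf_counter()
    tokens: list[str] = []
    first = None
    for tok in stream(prompt):
        if first is None:
            first = time.perf_counter()   # first token arrived
        tokens.append(tok)
    end = time.perf_counter()
    ttft = (first - start) if first is not None else float("inf")
    gen_time = max(end - (first or end), 1e-9)  # avoid div-by-zero
    return Result(name, check("".join(tokens)), ttft, len(tokens) / gen_time)


def score(results: list[Result]) -> str:
    """Summarize correctness as the kind of X/Y score you can compare
    across models."""
    passed = sum(r.passed for r in results)
    return f"{passed}/{len(results)} passed"
```

The `check` callback is where "verify tests pass" lives: for a code problem it could compile and run the model's output against your existing test suite instead of doing a string comparison.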
At the developer level all my LLM use is in the context of agentic wrappers, so my benchmark is fairly trivial:
Configure aider or Claude Code to use the new model and try to do some work. The benchmark is pass/fail: if after a little while I feel the performance is better than the last model I was using, it's a pass; otherwise it's a fail and I go back.
Building your own evaluations makes sense if you're serving an LLM up to customers and want to know how it performs, but if you are the user... use it and see how it goes. It's all subjective anyway.
> Building your own evaluations makes sense if you're serving an LLM up to customers and want to know how it performs, but if you are the user... use it and see how it goes. It's all subjective anyway.
I'd really caution against this approach, mainly because humans suck at removing emotions and other "human" factors when judging how well something works, but also because comparing across models gets a lot easier when you can see 77/100 vs 91/100 as a percentage score, over your own tasks that you actually use the LLMs for. Just don't share this benchmark publicly once you're using it for measurements.
So what? I'm the one that's using it, I happen to be a human, my human factor is the only one that matters.
At this point, anyone using these LLMs every day has seen those benchmark numbers go up without an appreciable improvement in the day-to-day experience.
> So what? I'm the one that's using it, I happen to be a human, my human factor is the only one that matters.
Yeah, no, you're right: if consistency isn't important to you as a human, then it doesn't matter. Personally, I don't trust my "humanness", and correctness is the most important thing for me when working with LLMs, so that's what my benchmarks focus on.
> At this point anyone using these LLMs every day have seen those benchmark numbers go up without an appreciable improvement in the day to day experience.
Yes, this is exactly my point. The benchmarks the makers of these LLMs provide always seem to show better and better scores, yet the top scores in my own benchmarks have been more or less the same for the last 1.5 years, and I'm trying every LLM I come across. The "best LLM to date!" hardly ever actually is the best available LLM, and while you could make that judgement by just playing around with LLMs, actually being able to point to specifically why that is is something I at least find useful, YMMV.
DAOs are fixing this in the crypto world. You contribute to the protocol and get paid by the DAO. Everything is transparent and open. If you do this enough and earn the respect of the dev team it could even turn into a full-time role.
Yupp! I made a little toy project for the EVM over a year ago with this exact concept, but never really did anything with it sadly; life seems to find ways to get busy. Since making function calls requires sending 'gas', it was a natural fit to add a call that sends a small portion of the value to an address before returning the computation's result.
I really loved the idea of being able to create libraries of code that could just be called for a small fee or copied for free if one didn't have the funds. I hope this idea continues to catch on, it seemed to me to be a perfect incentive fit for the open source world.
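Setting the on-chain mechanics aside, the incentive pattern described above can be sketched in plain Python: a library call forwards a small cut of the attached value to the library author's address before returning the result. Everything here (`Ledger`, `FEE_RATE`, `paid_call`) is a hypothetical illustration of the concept, not the original contract:

```python
FEE_RATE = 0.01  # hypothetical 1% cut of the value attached to each call


class Ledger:
    """Toy balance sheet standing in for on-chain accounts."""

    def __init__(self):
        self.balances: dict[str, float] = {}

    def transfer(self, src: str, dst: str, amount: float) -> None:
        if self.balances.get(src, 0.0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] = self.balances.get(src, 0.0) - amount
        self.balances[dst] = self.balances.get(dst, 0.0) + amount


def paid_call(ledger: Ledger, caller: str, author: str,
              value: float, fn, *args):
    """Forward a small portion of the attached value to the library
    author, then return the computation's result."""
    fee = value * FEE_RATE
    ledger.transfer(caller, author, fee)  # pay the author first
    return fn(*args)                      # then run the library code
```

The "copy for free" half of the idea is just open source as usual: since the code is public, anyone can fork the function and call their own copy without the fee; paying is the convenience path.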