Oh, so many things. I guess that’s both the blessing and the curse of agentic AI today.
The most fun is Boucle, a simple Claude Code in a loop that builds and iterates on its own framework[0][1].
The first thing it built was a persistent memory. Now it has finally built itself a "self-observation engine" after countless nudging attempts. Exploring, probing, and trying to push back the limits of these models is pure chaos, immensely frustrating, but also fun.
Aside from that, some sort of agent harness, I guess we call them? Putting together a "system" / "process" with automated reviews to steer agents, ground them (drift is a huge pain), and somehow ensure consistency while giving them enough leeway to exploit their full capabilities. Nothing ready to share yet, but I feel that without it I’ll just keep teetering on the edge of burnout.
I’ve been running a Claude Code "thing" in a loop for a few days, and that has been extremely frustrating.
But after tons of nudging it has started developing a sort of "improvement engine", as it calls it, for itself to help address that.
It goes through its own logs and sessions, documents and keeps track of patterns and signals with associated strategies, then regularly evaluates their impact, independently of the agent itself, and feeds those back to it in each loop.
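To make the loop concrete, here is a minimal sketch of that shape in Swift. All names (`Signal`, `ImprovementEngine`, the impact counter) are hypothetical stand-ins, not the actual implementation: mine session logs for known patterns, score the strategy attached to each, and feed the highest-scoring strategies back into the next iteration.

```swift
// Hypothetical sketch: a pattern observed in past sessions, the strategy
// that addresses it, and a running score for how well that strategy works.
struct Signal {
    let pattern: String
    let strategy: String
    var impact: Double = 0
}

struct ImprovementEngine {
    var signals: [Signal]

    // Scan a session log for known patterns; here a simple substring match
    // and counter stand in for a real, independent evaluation step.
    mutating func observe(log: String) {
        for i in signals.indices where log.contains(signals[i].pattern) {
            signals[i].impact += 1
        }
    }

    // Strategies whose measured impact clears a threshold get fed back
    // to the agent at the start of each loop.
    func feedback(threshold: Double) -> [String] {
        signals.filter { $0.impact >= threshold }.map(\.strategy)
    }
}
```

The point is only the separation of concerns: observation and scoring happen outside the agent, and only the distilled strategies re-enter its context.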
I got it to build a stereoscopic Metal raytracing renderer of a tesseract for the Vision Pro in less than half a day.
Surprisingly, it went at it progressively, starting with a basic CPU renderer and working all the way up to a basic special-purpose Metal shader. Now it’s cutting its teeth on adding passthrough support. YMMV.
It’s funny. The whole "review intent", "learning" from past mistakes, etc., is exactly what my current setup does too. For free. Using .md files said agents generate as they go.
It's been a while since I've Swifted, but the slowdowns were mostly with combinations of nested generics, closures, and literals. Annotating the type of the variable those were assigned to could improve type-checking performance a lot.
You don't need to add an annotation on `let foo = bar()` where bar is a function that returns `String` or whatever.
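A minimal sketch of the kind of annotation I mean (the expressions are illustrative, not from any real codebase): when closures, generics, and literals mix, the type checker has to search a large space of overloads, and pinning down the result type and closure parameter types narrows it.

```swift
// Inference has to resolve the literal types, the closure parameter types,
// and the overloads of `map`/`reduce` all at once:
let slow = [1, 2, 3].map { $0 * 2 }.reduce(0) { $0 + $1 }

// Annotating the variable and the closure signatures gives the checker
// concrete types to work from, shrinking the search space:
let fast: Int = [1, 2, 3]
    .map { (x: Int) -> Int in x * 2 }
    .reduce(0) { (acc: Int, x: Int) -> Int in acc + x }
```

For trivial bindings like `let foo = bar()` the annotation buys nothing, since the return type is already fixed.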
Trying to make a .xcframework from SPM is "fun". And getting hold of the generated Objective-C headers, which are treated as intermediary artifacts and discarded, is a bit of a pain.
But I guess the issue here is that .xcframework is an Apple thing, not a Swift thing.
The whole "won’t build if the product / target has the same name as one of its classes" is absolutely ridiculous on the other hand.
I've been considering adding a review gate with a reviewing model solely tasked with identifying gaps between the plan and the implementation.