I built DeepSteve (https://github.com/deepsteve/deepsteve) with a similar itch but went the other way. Instead of adding graphics to the terminal, I put the terminal in a place that already has graphics.
I kept trying to optimize my terminal layout and realized I could just run my terminals inside of the browser, and let Claude Code write JavaScript in the same browser tab to customize the experience however I want. It's kind of a terrible idea, but it's my terrible idea, and I love it.
I haven't seen any performance issues with Claude Code, even when I'm running like 20 in one browser tab and watching them all at once (rendered with xterm.js), but Gemini and OpenCode flicker a lot even with just one open.
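For anyone curious what the browser side of this can look like, here's a minimal sketch of wiring one xterm.js terminal to a process over a WebSocket. The endpoint shape and option values are my assumptions, not DeepSteve's actual code:

```javascript
// Browser-side sketch (assumes the Terminal global from the xterm.js package
// and a local node server that exposes one WebSocket per spawned CLI process).
function attachTerminal(container, wsUrl) {
  const term = new Terminal({ scrollback: 5000 }); // xterm.js terminal instance
  term.open(container);                            // render into the given DOM element
  const ws = new WebSocket(wsUrl);                 // e.g. ws://localhost:3000/pty/<id> (assumed)
  ws.onmessage = (e) => term.write(e.data);        // process output -> screen
  term.onData((data) => ws.send(data));            // keystrokes -> process stdin
  return term;
}
```

Running 20 of these in one tab is mostly a question of how fast each terminal scrolls; xterm.js only repaints the visible viewport, which is presumably why a mostly idle Claude Code session stays cheap.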
disclaimer: I work on a different project in the space but got excited by your comment
DeepSteve (deepsteve.com) has a similar premise: it spawns Claude Code processes and attaches terminals to them in a browser UI, so you can automate coordination in ways a regular terminal can't: spawning new agents from GitHub issues, coordinating tasks via inter-agent chat, modifying its own UI, and even terminals that fork themselves.
Re: native vs external orchestration, I think the external layer matters precisely because it doesn’t have to replicate traditional company hierarchies. I’m less interested in “AI org chart” setups like gstack (we don’t have to bring antiquated corporate hierarchies with us) and more in hackable, flat coordination where agents talk to each other via MCP and you decide the topology yourself.
I was intrigued and had a look at deepsteve.com, but I couldn't figure the website out. I'm guessing it won't give you any information about it until you install it?
DeepSteve is a Node server that runs on your machine, so the website is designed to look like DeepSteve's UI. You actually access it at localhost:3000 in your browser, not via deepsteve.com.
Why food? It's static, and AI 3D models do not make food that I want to eat. Using photogrammetry means that high quality reconstructions of real food look tasty - it's an easy qualitative metric for me.
Previously the app only produced 3D models and threw away the original video. Incorporating the underlying videos both shows new users what kind of content they're supposed to record (i.e. a one-second video of a dimly lit pizza box is NOT going to produce a good model) and makes the output shareable.
I've been working on Mukbang 3D for the past year and a half—an iOS app that converts food videos into interactive 3D models using photogrammetry. Record a short video of food, and viewers can rotate/zoom/explore it while the video plays.
I recently added pose tracking of the 3D model so I can overlay 3D effects onto the underlying video.
It sounds like you want adaptive bitrate streaming. This blog post [1] probably does the topic better justice than I could.
I think it's similar to what you were describing: the lowest-latency approach is to send multiple streams at different bitrates simultaneously (simulcast) and let WebRTC pick the best one the connection can carry.
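Concretely, WebRTC simulcast means the sender declares several encodings of one video track and the receiver/SFU switches between them. A minimal sketch; the rid names and bitrate numbers are my assumptions, not values from the post:

```javascript
// Simulcast sketch: one track, several encodings; the receiving side picks
// the highest layer the link can sustain. Values below are illustrative.
const simulcastEncodings = [
  { rid: 'hi',  maxBitrate: 1500000 },                           // full resolution
  { rid: 'mid', maxBitrate: 500000, scaleResolutionDownBy: 2 },  // half resolution
  { rid: 'lo',  maxBitrate: 150000, scaleResolutionDownBy: 4 },  // quarter resolution
];

// In the browser, these attach to an RTCPeerConnection transceiver:
// pc.addTransceiver(videoTrack, { direction: 'sendonly', sendEncodings: simulcastEncodings });
```

The tradeoff is upstream bandwidth: the sender pays for every layer at once, which is exactly why it beats re-encoding on the fly for latency.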