
I built DeepSteve (https://github.com/deepsteve/deepsteve) with a similar itch but went the other way. Instead of adding graphics to the terminal, I put the terminal in a place that already has graphics.

I kept trying to optimize my terminal layout and realized I could just run my terminals inside of the browser, and let Claude Code write JavaScript in the same browser tab to customize the experience however I want. It's kind of a terrible idea, but it's my terrible idea, and I love it.
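To give a sense of what that looks like: each terminal in the tab is just xterm.js attached to a PTY that the Node server exposes over a WebSocket. A rough sketch (illustrative names and paths, not DeepSteve's actual code):

    import { Terminal } from '@xterm/xterm';
    import { AttachAddon } from '@xterm/addon-attach';

    // One xterm.js terminal per agent, rendered into a div in the tab.
    const term = new Terminal({ cursorBlink: true });
    term.open(document.getElementById('agent-1'));

    // Pipe keystrokes and output through a WebSocket-backed PTY on the local server.
    const socket = new WebSocket('ws://localhost:3000/pty/agent-1');
    term.loadAddon(new AttachAddon(socket));

    // Since it all lives in one page, on-the-fly JavaScript can restyle or rearrange it.
    term.element.style.opacity = '0.95';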


Have you run into any issues with handling input? Like how vim's Ctrl+P might be intercepted by the browser's Ctrl+P shortcut for print.
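(I'd guess you need something like a capture-phase keydown listener that preventDefaults browser shortcuts while a terminal is focused, roughly:)

    // Rough guess: swallow the browser's Ctrl+P while a terminal has focus,
    // so the print dialog doesn't open. The selector is a guess at xterm.js's markup.
    window.addEventListener('keydown', (e) => {
      const inTerminal = document.activeElement?.closest('.xterm');
      if (inTerminal && e.ctrlKey && e.key === 'p') {
        e.preventDefault();
      }
    }, { capture: true });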

And have you run into any other issues, maybe like performance?

I feel like web-ified terminals get nerfed pretty hard and I'm not sure if/how people overcome that.

I like the idea of customizing multiplexed terminals with on-the-fly JavaScript, tho.


I haven't seen any performance issues with Claude Code, even when I'm running like 20 in one browser tab and looking at them all at the same time (rendered with xterm.js), but Gemini and OpenCode flicker a lot even with just one open.
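(If rendering ever did become a bottleneck, the obvious lever would be xterm.js's WebGL renderer; a sketch, assuming the @xterm/addon-webgl package:)

    import { Terminal } from '@xterm/xterm';
    import { WebglAddon } from '@xterm/addon-webgl';

    const term = new Terminal();
    term.open(document.getElementById('term'));
    // Load after open(): swaps the default DOM renderer for a GPU-accelerated one.
    term.loadAddon(new WebglAddon());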

I’m making a detective-themed, iOS-based visual novel with React Native.

Making the game engine was easy. Making the story consistent, believable, and interesting has been the biggest challenge for me.

I’ve written a few bad novels but never any narrative games, so it’s been a good exercise for me.


disclaimer: I work on a different project in the space but got excited by your comment

DeepSteve (deepsteve.com) has a similar premise: it spawns Claude Code processes and attaches terminals to them in a browser UI, so you can automate coordination in ways a regular terminal can’t: spawning new agents from GitHub issues, coordinating tasks via inter-agent chat, having it modify its own UI, and terminals that fork themselves.
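The spawning part is conceptually simple: roughly one PTY per agent on the Node server, streamed to the browser. A sketch with node-pty and ws (illustrative, not DeepSteve's actual code; the port is made up):

    import pty from 'node-pty';
    import { WebSocketServer } from 'ws';

    const wss = new WebSocketServer({ port: 3001 });

    // Each connection gets its own Claude Code process running in a pseudo-terminal.
    wss.on('connection', (ws) => {
      const agent = pty.spawn('claude', [], {
        name: 'xterm-color',
        cols: 80,
        rows: 24,
        cwd: process.cwd(),
        env: process.env,
      });
      agent.onData((data) => ws.send(data));                    // PTY output -> browser terminal
      ws.on('message', (msg) => agent.write(msg.toString()));   // keystrokes -> PTY
      ws.on('close', () => agent.kill());
    });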

Re: native vs external orchestration, I think the external layer matters precisely because it doesn’t have to replicate traditional company hierarchies. I’m less interested in “AI org chart” setups like gstack (we don’t have to bring antiquated corporate hierarchies with us) and more in hackable, flat coordination where agents talk to each other via MCP and you decide the topology yourself.


I was intrigued and had a look at deepsteve.com, but I couldn't figure the website out. I'm guessing it won't give you any information about it until you install it?


Thanks for the feedback.

DeepSteve is a Node server that runs on your machine, so the website is designed to look like DeepSteve's UI. You really just access it at localhost:3000 in your browser, not via deepsteve.com.

But now I can see how that would be confusing.


I ran exiftool on an image I just generated:

    $ exiftool chatgpt_image.png
    ...
    Actions Software Agent Name    : GPT-4o
    Actions Digital Source Type    : http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgori...
    Name                           : jumbf manifest
    Alg                            : sha256
    Hash                           : (Binary data 32 bytes, use -b option to extract)
    Pad                            : (Binary data 8 bytes, use -b option to extract)
    Claim Generator Info Name      : ChatGPT
    ...


Exif isn't all that robust though.

I suppose I'm going to have to bite the bullet and actually train an AI detector that works roughly in real time.


Why food? It's static, and AI-generated 3D models don't produce food that I want to eat. Using photogrammetry means that high-quality reconstructions of real food look tasty - it's an easy qualitative metric for me.

Previously the app only produced 3D models and threw away the original video. Incorporating the underlying videos both shows new users what kind of content they're supposed to record (e.g. a one-second video of a dimly lit pizza box is NOT going to produce a good result) and makes the output shareable.


I've been working on Mukbang 3D for the past year and a half—an iOS app that converts food videos into interactive 3D models using photogrammetry. Record a short video of food, and viewers can rotate/zoom/explore it while the video plays.

I recently added pose tracking of the 3D model so I can overlay 3D effects onto the underlying video.

Here's a demo: https://mukba.ng/p?id=29265051-b9c7-400b-b15a-139ca5dfaf7e


This is awesome!

I'd love to boot this up and see how it runs on a Quest headset


Thanks for sharing!

I've been doing this for the last year with YouTube. YouTube is just a search bar now, which prevents mindless browsing and distractions.

If I find myself using a site mindlessly, I add it to my /etc/hosts file to block it.
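For example (hostnames here are placeholders; the real entries depend on which sites you're blocking):

    # /etc/hosts
    0.0.0.0    www.distracting-site.example
    0.0.0.0    distracting-site.example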


The existence of LLMs has made me feel more alive and better able to live in the moment.

So what if a machine can write a novel? I don't even want to read it.

Human intention is what gives meaning and value to my life.

The things that I want to build need me to make them. Why would I wait for a machine to do it later without my imperfect human perspective?


It sounds like you want to do adaptive bitrate streaming. This blog post [1] probably does it better justice than I could.

I think it's similar to what you were describing, but it sounds like the lowest-latency solution is to send multiple streams at different bitrates at the same time (simulcast) and let WebRTC pick the best one it can.
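On the sending side that looks roughly like this in the browser (a simulcast sketch; the rid names and bitrates are just illustrative):

    const pc = new RTCPeerConnection();
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });

    // Simulcast: offer the same track at three quality levels; the receiver/SFU
    // forwards whichever layer each viewer's connection can handle.
    pc.addTransceiver(stream.getVideoTracks()[0], {
      direction: 'sendonly',
      sendEncodings: [
        { rid: 'high', maxBitrate: 2500000 },
        { rid: 'mid',  maxBitrate: 800000,  scaleResolutionDownBy: 2 },
        { rid: 'low',  maxBitrate: 250000,  scaleResolutionDownBy: 4 },
      ],
    });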

[1] https://bloggeek.me/webrtc-simulcast-and-abr-two-sides-of-th...


Thank you for being so helpful! That sounds good :) No, I wasn't fishing for super detailed help or contribs, but thanks for considering.


Great, glad that was helpful. Good luck!

