I did it all by hand with HTML canvas. I've been making things like this for years; my day job is building a canvas-based diagramming library, so I have some practice.
By the way, if you click on the scene it creates more objects, and if you right-click and drag you can move them around.
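For anyone curious how interactions like that are typically wired up by hand: here's a minimal sketch, not the actual code behind the scene above. The `Scene` class and all names are mine; the scene logic is kept separate from the canvas wiring so it stays easy to test.

```javascript
// Minimal sketch of an interactive canvas scene: left-click spawns an
// object, right-click-drag moves the nearest one. Hypothetical names,
// not from any real library.
class Scene {
  constructor() {
    this.objects = [];
  }
  // Left-click: add a new object at the click position.
  spawn(x, y) {
    const obj = { x, y, r: 10 + Math.random() * 20 };
    this.objects.push(obj);
    return obj;
  }
  // Right-click drag: find the object nearest the drag start.
  nearest(x, y) {
    let best = null, bestD = Infinity;
    for (const o of this.objects) {
      const d = (o.x - x) ** 2 + (o.y - y) ** 2;
      if (d < bestD) { bestD = d; best = o; }
    }
    return best;
  }
  draw(ctx) {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    for (const o of this.objects) {
      ctx.beginPath();
      ctx.arc(o.x, o.y, o.r, 0, Math.PI * 2);
      ctx.fill();
    }
  }
}

// Browser wiring (guarded so the Scene logic also runs under Node):
if (typeof document !== 'undefined') {
  const canvas = document.querySelector('canvas');
  const ctx = canvas.getContext('2d');
  const scene = new Scene();
  let dragging = null;

  canvas.addEventListener('click', e => {
    scene.spawn(e.offsetX, e.offsetY);
    scene.draw(ctx);
  });
  // Suppress the context menu so right-click can be used for dragging.
  canvas.addEventListener('contextmenu', e => e.preventDefault());
  canvas.addEventListener('mousedown', e => {
    if (e.button === 2) dragging = scene.nearest(e.offsetX, e.offsetY);
  });
  canvas.addEventListener('mousemove', e => {
    if (dragging) {
      dragging.x = e.offsetX;
      dragging.y = e.offsetY;
      scene.draw(ctx);
    }
  });
  canvas.addEventListener('mouseup', () => { dragging = null; });
}
```

The split matters more than it looks: keeping hit-testing and state in `Scene` means the event handlers stay a few lines each, which is most of the trick to keeping hand-rolled canvas code maintainable.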
We want to speed up adoption of custom AI, but most people suck at building it (no expertise, money, time, etc.).
We thought, what if you could "Vibe ML" your way to it? The idea: let any AI engineer or PM build custom AI directly from their current implementation.
So we built agents that orchestrate the entire lifecycle of custom AI: they hook into how you currently use AI, prepare and label your data, detect the best recipes for your task, fine-tune a model, and deploy it for you. We really tried to simplify the entire process.
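To make the shape of that pipeline concrete, here is a heavily simplified sketch of the orchestration idea: each stage is a function that takes an artifact object and returns an enriched one. Every name and heuristic here is illustrative, not our actual API or training logic.

```javascript
// Hypothetical sketch of a staged custom-AI pipeline. Each stage carries
// a context object forward; the orchestrator just folds over the stages.
const stages = [
  function collectUsage(ctx) {
    // Hook into existing AI calls and capture input/output pairs.
    const examples = ctx.rawLogs.map(l => ({ input: l.prompt, output: l.response }));
    return { ...ctx, examples };
  },
  function labelData(ctx) {
    // Prepare/label: drop examples with no usable output (toy rule).
    return { ...ctx, dataset: ctx.examples.filter(e => e.output != null) };
  },
  function pickRecipe(ctx) {
    // Detect a training recipe from the data shape (toy heuristic).
    const recipe = ctx.dataset.length < 1000 ? 'lora-small' : 'full-finetune';
    return { ...ctx, recipe };
  },
  function fineTune(ctx) {
    // Placeholder for the actual training job.
    return { ...ctx, model: `model-trained-with-${ctx.recipe}` };
  },
  function deploy(ctx) {
    return { ...ctx, endpoint: `/v1/models/${ctx.model}` };
  },
];

function runPipeline(rawLogs) {
  return stages.reduce((ctx, stage) => stage(ctx), { rawLogs });
}
```

The point of the sketch is the structure, not the stages themselves: because each stage only reads and extends the context, an agent can halt between stages, ask for input, and resume without re-running earlier work.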
We aren't entirely sure about the UX/UI patterns yet. We aren't going chat-first, because if most people don't know where to start with ML, how in the world are they going to prompt it!? Instead, we auto-detect the AI tasks you've built and go from there.
Yeah, it will halt and ask for info if it decides it needs to. It won't remember the information you enter in that modal, but if you're using it to log into a system, for example, and it successfully logs in, it will stay logged in for the next run.