For you, no. For the services you depend on, which will continue receiving your data and may jack up prices or add limitations knowing that your dependency won't be easily broken, yes.
How is the ToS relevant once the company is already bankrupt (IANAL)? Slack can cancel the customer relationship with the bankrupt company, but that's it, no?
As cool as this technically is - who is the target market for this? I think people building coding agents and coding agent platforms are for the most part building on non-Cloudflare sandboxes, and can tolerate minutes of latency for setup.
I am not sure what people who roll their own in-house solutions for coding agents do, but I suspect that the easy path is still one of the many sandbox providers + GitHub.
I would love to find out who would use this & why!
You can always keep a dynamic pool of ready-to-use Docker containers sized to your load; then acquiring one takes something like 15 seconds, which is faster than basically any sandbox provider.
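A minimal sketch of what such a warm pool might look like. Everything here is illustrative: the `provision` callable stands in for whatever actually starts a container (e.g. shelling out to `docker run`), and none of the names are a real sandbox-provider API.

```python
import queue
import threading

class WarmPool:
    """Keeps a small pool of pre-provisioned sandboxes so callers skip the cold start."""

    def __init__(self, provision, size):
        self._provision = provision  # callable that starts one container/sandbox
        self._size = size
        self._ready = queue.Queue()
        self._fill()  # pre-warm up front, before any caller asks

    def _fill(self):
        # Top the pool back up to its target size.
        while self._ready.qsize() < self._size:
            self._ready.put(self._provision())

    def acquire(self):
        # Hand out a pre-warmed container immediately, then refill in the
        # background so the next caller also skips the provisioning wait.
        container = self._ready.get()
        threading.Thread(target=self._fill, daemon=True).start()
        return container

# Usage with a stubbed provisioner standing in for real container startup:
pool = WarmPool(provision=lambda: "sandbox", size=3)
sandbox = pool.acquire()  # returns instantly; provisioning already happened
```

The pool size becomes the knob you tune against load: too small and bursts hit cold starts, too large and you pay for idle containers.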
What problem is it that you are confused isn't solved?
I think the codec analogy is neat, but isn't the codec here llama.cpp, with the models as the content files? Then the equivalent of VLC would be things like LMStudio, which use llama.cpp to let you run models locally?
I'd guess one reason we haven't solved the "codec" layer is that there doesn't seem to be a standard that open model trainers have converged on yet?
I think the post author is mainly addressing self-hostable and/or open-source options here - otherwise I'd expect a whole host of other commercial storage providers to have been mentioned!