Hacker News | sudb's comments

LLMs have absolutely made my mechanical ability to write code much worse day-to-day. I'm still not sure if this is a good thing or not.

For you, no. For the services you depend on, which will keep receiving your data and may jack up prices or add limitations knowing your dependency won't be easily broken, yes.

I thought it was against Slack's ToS to exfiltrate data like this?

Also, surely most of a startup's Slack activity is just fluff; I wonder how much preprocessing the AI companies have to do.


How is the ToS relevant when the company is already bankrupt? (IANAL.) Slack can cancel the customer relationship with the bankrupt company, but that's it, no?

huh, I always assumed they were metal-clad objects with something inside

wikipedia tells me they are machines, but not what they're made of


As cool as this technically is - who is the target market for this? I think people building coding agents and coding agent platforms are for the most part building on non-Cloudflare sandboxes, and can tolerate minutes of latency for setup.

I am not sure what people who roll their own in-house solutions for coding agents do, but I suspect that the easy path is still one of the many sandbox providers + GitHub.

I would love to find out who would use this & why!


You can always keep a dynamic pool of ready-to-use Docker containers sized to your load; then startup takes like 15 seconds and is faster than basically any sandbox provider.

yeah absolutely! the tricky bit here is predicting/forecasting load though - possible for many applications but not all
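The warm-pool idea above can be sketched roughly like this. Purely illustrative: `create_sandbox`, `WarmPool`, and the refill strategy are hypothetical stand-ins for whatever `docker run` invocation or provider API you actually use.

```python
import itertools
import queue
import threading

_ids = itertools.count()

def create_sandbox() -> str:
    # Placeholder: in practice this would shell out to `docker run -d ...`
    # or call a sandbox provider's API, and can take many seconds.
    return f"sandbox-{next(_ids)}"

class WarmPool:
    """Keep N sandboxes pre-created so a request grabs one in O(1)
    instead of paying cold-start latency."""

    def __init__(self, size: int = 4):
        self._ready: queue.Queue[str] = queue.Queue()
        for _ in range(size):
            self._ready.put(create_sandbox())

    def acquire(self) -> str:
        sandbox = self._ready.get()  # instant while the pool is warm
        # Refill in the background so the pool returns to its target size.
        threading.Thread(
            target=lambda: self._ready.put(create_sandbox()),
            daemon=True,
        ).start()
        return sandbox
```

The hard part, as noted, is picking the pool size: too small and you still hit cold starts under bursty load, too large and you pay for idle containers.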

Let's just say with Artifacts you could create millions of repos every day, one for each agent/chat/user/session.

It's all durable objects :)


I think this is a great idea in general - security through obscurity, kinda.

What's the problem you're confused isn't solved?

I think the codec analogy is neat but isn't the codec here llama.cpp, and the models are content files? Then the equivalent of VLC are things like LMStudio etc. which use llama.cpp to let you run models locally?

I'd guess one reason we haven't solved the "codec" layer is that there doesn't seem to be a standard that open model trainers have converged on yet?


llama.cpp is the ffmpeg/libavcodec equivalent in this story.

seems pretty unrelated to the post?

also you might be the only person in the wild I've seen admit to this


I really like the simplicity of this! What's retrieval performance and speed like?

Minimalism is my design philosophy :-)

Good question. Since it is just an LLM reading files, it depends entirely on how fast it can call tools, so it depends on the token/s of the model.

Haven't done a formal benchmark, but from the vibes, it feels like a few seconds for GPT-5.4-high per query.

There is an implicit "caching" mechanism, so the more you use it, the smoother it will feel.
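One way such an "LLM reading files" retrieval loop could look, with a naive content cache so repeated queries over the same files feel smoother. This is not the project's actual code; `read_file` and `list_files` are hypothetical tool names sketched here for illustration.

```python
from pathlib import Path

# Naive cache: file path -> contents, so the model's repeated reads are cheap.
_cache: dict[str, str] = {}

def read_file(path: str) -> str:
    """Tool exposed to the model: return a file's text, caching reads."""
    if path not in _cache:
        _cache[path] = Path(path).read_text()
    return _cache[path]

def list_files(root: str = ".") -> list[str]:
    """Tool exposed to the model: enumerate candidate files to read."""
    return sorted(str(p) for p in Path(root).rglob("*.md"))
```

With something like this, per-query latency is dominated by the model's token throughput rather than the file reads themselves, which matches the "depends on the token/s of the model" point above.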


I think the post author is mainly addressing self-hostable and/or open-source options here - otherwise I'd expect a whole host of other commercial storage providers to have been mentioned!

or cloudflare R2 for that matter (very useful for egress-heavy workloads for which it is ~free)

I was curious why this didn't come up in the article


