
I don't know if that example is real, but if it is, that's exactly the reason I find AI tools irritating. You do not need six different ways to handle the connection being down, and if you do, you should really factor that out into a connection management layer.
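
A minimal sketch of what such a connection management layer could look like, assuming a fetch-based TypeScript client; the names, retry counts, and error type here are purely illustrative, not anything from the thread:

    // Hypothetical sketch: one wrapper that owns retries/backoff and one
    // error type, so call sites don't each grow their own
    // "is the connection down?" branches.

    class ConnectionDown extends Error {}

    async function fetchWithRetry(
      url: string,
      init?: RequestInit,
      retries = 2,
    ): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url, init);
          // Retry transient server errors; return everything else as-is.
          if (res.status >= 500 && attempt < retries) {
            await new Promise((r) => setTimeout(r, 250 * 2 ** attempt));
            continue;
          }
          return res;
        } catch (err) {
          // Network-level failure (offline, DNS, reset socket, ...).
          if (attempt < retries) {
            await new Promise((r) => setTimeout(r, 250 * 2 ** attempt));
            continue;
          }
          throw new ConnectionDown("connection down", { cause: err });
        }
      }
    }

Call sites then catch ConnectionDown in one top-level handler instead of handling the connection being down six different ways.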

One of my big issues with LLM coding assistants is that they make it easy to write lots & lots of code. Meanwhile, code is a liability, and you should want less of it.





These aren't six different ways.

You are talking about something like a network layer in GraphQL. That's on our roadmap for other reasons (switching API endpoints over to DigitalOcean when our main Cloudflare Worker is having an outage). However, even with that you'll still need some custom logic, since this is doing at least two API calls in succession, and that's not easy to hide behind a transaction abstraction in a network layer (you'd have to handle it durably in the network layer, the way Temporal does).
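
Roughly what I mean, as a sketch with hypothetical names and URLs: the layer can fail over between endpoints, but it can't know what a half-finished two-call sequence means for the application.

    // Hypothetical: fail over from the primary Worker endpoint to a fallback.
    const ENDPOINTS = [
      "https://api.example.workers.dev",  // primary (hypothetical)
      "https://api.fallback.example.com", // fallback (hypothetical)
    ];

    async function apiFetch(path: string, init?: RequestInit): Promise<Response> {
      let lastErr: unknown;
      for (const base of ENDPOINTS) {
        try {
          const res = await fetch(base + path, init);
          if (res.status < 500) return res; // only fail over on outages/5xx
          lastErr = new Error(`HTTP ${res.status} from ${base}`);
        } catch (err) {
          lastErr = err;
        }
      }
      throw lastErr;
    }

    // The part that stays custom: two dependent calls in a row. If the
    // second one fails, whether to retry it, undo the first, or surface the
    // partial state is application-specific (a durable engine like Temporal
    // would persist progress between the steps instead).
    async function createThenPublish(draft: object) {
      const created = await apiFetch("/documents", {
        method: "POST",
        body: JSON.stringify(draft),
      });
      const { id } = await created.json();
      return apiFetch(`/documents/${id}/publish`, { method: "POST" });
    }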

Despite the obvious downsides, we actually moved it from a durable workflow (Cloudflare's take on Temporal) on the server side to the client, since on Workflows it had horrible and variable latencies (sometimes 9s, versus a consistent < 3s with this approach). It's not ideal, but it makes more sense business-wise. I think people often miss that completely.

I think it just boils down to what you are aiming for. AI is great for shipping bug fixes and features fast. At a company level I think it also shows in product velocity. However, I'm sure our competitors will catch up very soon once their AI skepticism falters.




