
I’ve come to the opposite conclusion personally - AI model inference requires burst compute, which particularly suits cloud deployment (for these sorts of applications).

And while AIs may become more compute-efficient in some respects, the tasks we ask AIs to do will grow larger and more complex.

Sure, you might get a good image locally, but what about when the market moves to video? Sure, ChatGPT might give good responses locally, but how long will it take when you want it to refactor an entire codebase?

Not saying that local compute won’t have its use cases, though… and this is just a prediction that may turn out to be spectacularly wrong!



Ok, but yesterday I was coding on a plane, and I wouldn’t have minded having GPT-4, as it is today, available to me.


Thanks to Starlink, planes should have good internet soon.



