Hacker News

There are always ways to trade precision for speed in statistical models.
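One concrete version of that trade, as a hedged sketch: quantizing weights to a lower-precision format (here a naive symmetric 8-bit scheme) shrinks memory and speeds up inference at the cost of small output errors. The array shapes and the quantization scheme below are made up for illustration, not anyone's actual serving stack.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)  # toy "layer" weights
x = rng.standard_normal(4).astype(np.float32)             # toy input

# Naive symmetric 8-bit quantization: map floats into [-127, 127] ints.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize and compare against the full-precision result.
dequantized = q_weights.astype(np.float32) * scale
full = weights @ x
approx = dequantized @ x

# The quantized path produces a close but not identical answer:
# precision was traded away.
error = float(np.max(np.abs(full - approx)))
print(error)
```

In practice the integer matmul would run on hardware that is faster at int8 than float32; the point here is only that the outputs drift measurably once precision is reduced.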


Sure, generally speaking. But is that true for a static, fixed-parameter-count LLM like GPT-4?

I think you're hand-waving a lot just to claim that OpenAI is (somehow) reducing the accuracy of its models during high load. And I'm not sure why.





