Good question. We are betting that the cost per LLM inference will continue to fall for a given level of performance as the market matures over the next few years.