Hacker News

> So it’s several rounds of “yes yes, the previous time the criticisms were right, but this time it’s different, trust me”. So everyone else is justified in being skeptical.

True, but even the boy who cried wolf eventually had his sheep eaten by the wolf.

I have my own personal anecdotal benchmarks and I never hyped LLMs before GPT-5.

Tasks that simply did not work before GPT-5, no matter how many shots I gave them, GPT-5 breezed through.

For me, it would take at least two generations with no felt progress in the models before I'd call diminishing returns, and I'm not seeing that.


