
(Shrug) Ancient models are ancient. Please provide specific examples that back up your point, not obsolete PDFs to comb through.


Your ideas are quite weak: you ask for overwhelming proof but aren't willing to read any research. That's just intellectually lazy.

Perhaps if you took some time to learn from the experts, those who create these systems and really understand what's happening, you would realize that these limitations in AI are widely known.

Take a look around the 5-minute mark.

https://youtu.be/PqVbypvxDto?si=gZq-2yEuE4sTeQZe

Just understand you are dead wrong in your assumptions.


You appear to be arguing with someone who isn't here (or else you replied to the wrong post). Your personal fallacy of choice appears to be, "LLMs aren't godlike and infallible only a few years after being invented, despite absolutely no one ever claiming they were, so it's all a bunch of empty hype."

No one cares about the state of the art. Only the first couple of time derivatives matter. You're not getting smarter, but the models are.

How are those examples coming along, by the way? The ones that prove that IMO-level models aren't reasoning, but just getting really, really lucky?



