
Given that OpenAI were THEMSELVES surprised by how capable even GPT-3 turned out to be, it's always funny to see HN know-it-alls pipe up with all the answers.

These sorts of poorly formed faux-philosophical arguments against LLMs have become the new domain of people who confuse reflexive skepticism with actual intelligence.

Ironic.

This latest generation of AI quite rightfully raises questions and challenges assumptions about what it means to be intelligent. It quite rightfully challenges our assumptions about what can be accomplished with language. And, thank God, it quite rightfully challenges assumptions many have made about what sets humanity apart from everything else.



> poorly formed faux-philosophical arguments against LLMs

There's a misunderstanding here. The post you're replying to is not an argument against LLMs. It's an argument about what LLMs can and cannot do, what their fundamental capabilities are, and so on.

It's very clear that if you need a system to provide answers based on a substantial body of human writing, LLMs are totally awesome. But that doesn't mean, in and of itself, that they can do X or Y.


> Given that OpenAI were THEMSELVES surprised by how even GPT-3 ended up,

Yeah, and they have zero incentive to overhype their takes. OpenAI has already slanted impressive data in the past to make it more "hype building" for the general public, when a more scientific reading would be "this is really cool, here's where it still fails". I am very confident the same thing is happening here.



