
>A reasonable answer is that the AI's output tends to involve this running-together of common rhetorical devices along with false and/or contradictory claims within them.

The question here is whether this is an actual AI-only failure mode. Are we detecting AI, or just bullshittery?



I don't know if bullshittery is the only failure mode, but I think it's a necessary failure mode of large language models as they are currently constituted.

I would say that human knowledge involves a lot of the immediate structure of language, but also a larger outline structure as well as a relation to physical reality. Training on just a huge language corpus thus yields only a partial understanding of the world. Notably, while the various GPTs have progressed in fluency, I don't think they've become more accurate (somewhere I even saw a claim that they say more false things now, but regardless, you can observe them constantly saying false things).
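To make the "just a language corpus" point concrete, here's a minimal sketch of the next-token-prediction objective these models are trained on (a toy PyTorch model; the names, shapes, and architecture are illustrative, not any particular GPT's internals). The key thing to notice is that the loss only rewards matching the training text's distribution; nothing in it checks claims against the world:

    import torch
    import torch.nn as nn

    # Toy next-token predictor: embedding + linear head.
    vocab_size, embed_dim = 1000, 64
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),
    )
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, vocab_size, (8, 33))   # batch of token ids
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token
    logits = model(inputs)                           # (8, 32, vocab_size)

    # A frequently repeated falsehood in the corpus lowers this loss
    # exactly as well as a truth would.
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()

Under that objective, getting more fluent and getting more accurate are different things, which is consistent with the observation above.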


Gotta be honest, I wouldn't mind throwing out bullshittery with the AI that much.



