I don't know if bullshittery is the only failure mode, but I think it's a necessary failure mode of large language models as they are currently constituted.
I would say that human knowledge involves a lot of the immediate structure of language, but also a larger outline structure, as well as a relation to physical reality. Training on just a huge language corpus thus only yields a partial understanding of the world. Notably, while the various GPTs have progressed in fluency, I don't think they've become more accurate (somewhere I even saw a claim that they say more false things now, but regardless, you can observe them constantly saying false things).