All my childhood I dreamed of a magic computer that could just give me straightforward answers to non-straightforward questions, like the cartoon computer in Courage the Cowardly Dog. Today it's a reality: I can ask my computer any wild question and get a coherent, if not completely correct, answer.
You are in for a rude awakening when you realize that those answers tend to be subtly to blatantly wrong, especially when the questions are tricky and non-obvious. Once that blind initial trust is shattered and you start to question the accuracy of what AI gives you back, you see the BS everywhere.
Of course due diligence and validation are needed if you intend to use the information for something, but if your aim is just to satisfy your curiosity with questions you don't have anyone else to ask, it's a great medium.
Is it though? Is seeing a wrong answer really better than being left wondering?
There is enough unintentional half-knowledge (and intentional misinformation) out there. We don't need to make that worse by introducing BS generators just to "satisfy curiosity".