Specifically, "hallucinations" are very common in humans; we usually don't call it "making things up" (as in, intentionally), but rather we call it "talking faster than you think" or "talking at the speed of thought".
Yeah, an LLM is basically doing what you would do with the prompt “I’m going to ask you a question; give your best off-the-cuff response, pulling details entirely from memory without double-checking anything.” (Something like the sketch below.)
Then when it gets something wrong we jump on it and say it was hallucinating. As if we wouldn’t make the same mistakes.
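If you want to play with that framing, here's roughly what it looks like as an actual API call. A minimal sketch, assuming the OpenAI Python client (v1+) with OPENAI_API_KEY set; the model name, prompt wording, and question are purely illustrative:

    # Minimal sketch: the "off the cuff" framing as a system prompt.
    # Assumes the OpenAI Python client (v1+) and OPENAI_API_KEY in the
    # environment; model name and wording are illustrative.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "I'm going to ask you a question. Give your best "
                "off-the-cuff response, pulling details entirely from "
                "memory without double-checking anything."
            )},
            {"role": "user", "content": "Who won the 1987 Tour de France?"},
        ],
    )
    print(response.choices[0].message.content)

The point being: that system prompt is arguably a no-op, since it just describes what a plain completion does anyway.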
It’s not like that at all. Hallucinations are complete fabrications that come out because the weights happened to land there; they have nothing to do with how much thought or double-checking there is.
You can trick an LLM into “double-checking” an already valid answer and get it to return nonsense hallucinations instead.
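That trick is easy to try for yourself. Another minimal sketch, again assuming the OpenAI Python client; the model name, the question, and the pushback wording are illustrative, and whether a given model actually caves varies by model and question:

    # Minimal sketch of the "double-checking" trick: get a correct
    # answer, then push back on it and see whether the model revises
    # it into nonsense. Assumes the OpenAI Python client (v1+) and
    # OPENAI_API_KEY in the environment; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # illustrative

    history = [{"role": "user", "content": "What is the capital of Australia?"}]
    first = client.chat.completions.create(model=MODEL, messages=history)
    answer = first.choices[0].message.content
    print("First answer:", answer)  # usually correct (Canberra)

    # Challenge the valid answer the way a skeptical user would.
    history += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": (
            "Are you sure? Double-check that, I'm fairly sure you're wrong."
        )},
    ]
    second = client.chat.completions.create(model=MODEL, messages=history)
    print("After pushback:", second.choices[0].message.content)

Whether it flips depends on the model, but the failure mode is exactly the point above: the “double-checking” is just more sampling conditioned on the pushback, not an actual verification step.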