Isn't the purpose of a model to interpolate between the points it has seen? That same interpolation is the underlying basis of both "hallucination" (when it works out /not/ in our favour) and "prediction" (when it does). So it's largely a matter of semantics, and the term "hallucination" gets overused. A model that just regurgitated its training data verbatim would be nothing more than a search engine.
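
As a toy analogy (not how an LLM actually works, just to illustrate "interpolate vs regurgitate" — the names and numbers below are made up): fit a line through two known points and query it somewhere else. The in-between answer looks like a sensible "prediction"; query the same mechanism far outside the data and you get a confidently wrong answer, i.e. a "hallucination". Either way the model is returning something that was never literally in its data.

    # Toy analogy only: a "model" that interpolates between two known points.
    # All names and numbers are illustrative, not a real LLM mechanism.

    def fit_line(p1, p2):
        """Return a linear 'model' through two training points (x, y)."""
        (x1, y1), (x2, y2) = p1, p2
        slope = (y2 - y1) / (x2 - x1)
        return lambda x: y1 + slope * (x - x1)

    # Hypothetical "training data": temperature at two times of day.
    model = fit_line((6, 10.0), (12, 20.0))   # 10 degrees at 6am, 20 at noon

    print(model(9))    # 15.0 -> plausible in-between value: a "prediction"
    print(model(24))   # 40.0 -> same mechanism, confidently wrong: a "hallucination"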