It's not displaying the "content" back. It's training a model on statistics derived from writing that was either paid for by a site publisher in the hope of earning ad revenue, or contributed to the community for free.
If a model were to add attributions to each of its answers, then perhaps the search engine analogy would hold. But they don't (and, to my understanding, currently can't).
The AI art models scrape imagery made with human effort and skill. Then more humans label and tag it so it can be indexed (because you can show a computer an image all day long and it still won't "learn" what it is unless you tag it). Then another human types in a wish list of the art they want, without any cost or effort spent learning the skill, and the "AI" displays back a collage of content matching that wish list. And then, presto, all sorts of merchandise is available bearing art taken without permission from the people who made it. Humans do the physical work of creating imagery; AI indexes it.