I feel like there's a time in the near future where LLMs will be too cautious to answer any questions they aren't sure about, and most of the human effort will go into pleading the LLM to at least try to give an answer, which will almost always be correct anyway.
It's not going to happen, as the user would just leave the platform.
It would be better for most API usage though, as for a business, doing just a fraction of the job with 100% accuracy is often much preferable to claiming to do 100% when 20% of it is garbage.
There is nothing useful you can do with this information. You might as well memorize the phone book.
The model has a certain capacity -- quite limited in this case -- so there is an opportunity cost in learning one thing over another. That's why it's important to train on quality data: things you can build on top of.
Just because it's in the training data doesn't mean the model can remember it. The parameters total 60 gigabytes; there's only so much trivia that can fit in there, so it has to do lossy compression.
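
A quick back-of-envelope makes the lossy-compression point concrete. This sketch assumes fp16 weights (2 bytes per parameter) and roughly 2 bits of recoverable knowledge per parameter, a ceiling some knowledge-capacity studies have suggested; both numbers are assumptions, not facts from this thread:

    # Back-of-envelope: how much factual "storage" might 60 GB of weights hold?
    # Assumptions (not from the thread): fp16 weights (2 bytes per parameter),
    # and a rough ~2 bits of recoverable knowledge per parameter.

    WEIGHTS_BYTES = 60 * 10**9      # 60 GB of parameters, per the comment above
    BYTES_PER_PARAM = 2             # fp16 assumption
    KNOWLEDGE_BITS_PER_PARAM = 2    # assumed capacity per parameter

    params = WEIGHTS_BYTES / BYTES_PER_PARAM
    knowledge_gb = params * KNOWLEDGE_BITS_PER_PARAM / 8 / 10**9

    print(f"parameters:         {params / 1e9:.0f}B")
    print(f"knowledge capacity: ~{knowledge_gb:.1f} GB equivalent")
    # ~30B parameters -> roughly 7.5 GB of recoverable facts, versus the many
    # terabytes of training text -- hence the lossy compression.

Under those assumptions, the model can retain on the order of single-digit gigabytes of facts out of terabytes of training data, so most trivia necessarily gets squeezed out.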