But that also exists in the AI world. It's called "fine-tuning": an LLM trained on a big general dataset can learn specialized knowledge with little additional effort.
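For the curious, here's roughly what that looks like in practice. This is a minimal sketch assuming the Hugging Face transformers/peft/datasets stack; the gpt2 base model and the domain_corpus.txt file are placeholders, not anything from this thread:

```python
# Minimal LoRA fine-tuning sketch. LoRA trains a few small adapter
# matrices instead of all model weights, which is what makes teaching
# an already-trained LLM specialized knowledge cheap.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "gpt2"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with small trainable LoRA adapters.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# A small, domain-specific corpus (placeholder filename).
dataset = load_dataset("text", data_files="domain_corpus.txt")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False makes the collator set labels = input_ids (next-token prediction).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the adapter approach is exactly the one made above: the general knowledge stays frozen, and only a tiny fraction of parameters gets updated on the new data.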
I'd guess it's exactly the same with humans: a human who received a good general education can quickly learn specific things like C.
Humans have experienced an amount of data that absolutely dwarfs the amount of data even the largest of LLMs have seen. And they've got billions of years of evolution to build on, to boot.
The process of evolution "from scratch", i.e. from single-celled organisms, took billions of years.
This is all relevant because humans aren't born as random chemical soup. We come with pre-trained weights from billions of years of evolution, and fine-tune that with enormous amounts of sensory data for years. Only after that incredibly complex and time-consuming process does a person have the ability to learn from a few examples.
An LLM can generalize from a few examples of a new language that you invent yourself and that isn't in the training set. Go ahead and try it.
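If you want to run that experiment literally, here's a sketch using the official openai Python client; the model name and the made-up mini-language below are just illustrative placeholders:

```python
# Few-shot experiment: define a toy language in the prompt and ask the
# model to extrapolate to a sentence it has never seen.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """Here is a made-up language called Zemi:
  "mo tak"      -> "the dog runs"
  "mo tak lun"  -> "the dog runs fast"
  "ki tak"      -> "the cat runs"
  "ki suv lun"  -> "the cat sleeps deeply"
Translate "mo suv" into English."""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # expected: "the dog sleeps"
```

Since the language exists only in the prompt, any correct answer has to come from in-context generalization rather than memorized training data.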
Humans look at a few examples and extrapolate…