By most measures of intelligence you could think of, language models are improving, so I don’t see why you think this wouldn’t lead to something at least close to human-level if you scaled it up enough
Of course there could be some wall somewhere but I don’t see why there would be
That's "we need a larger cowbell" thinking. It's not a theory of mind, it's wishful thinking that it will.. emerge. Absent theory I don't think moar will make it emerge, no.
If you want a theory, there’s this: https://arxiv.org/abs/2001.08361 (I haven’t actually read it, but I know roughly what it’s about)
It’s saying that so far an LLM’s abilities have scaled up with its parameter count and training data size. Of course there’s no way to be sure without actually training larger models, but I don’t see why the point where scaling stops working would fall just past our current best LLMs. Many capabilities have already emerged from making models bigger, so I don’t see why this would be the exception.
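Roughly, the paper fits test loss as a power law in parameter count N and dataset size D. Here’s a quick sketch of that relationship in Python; the function names are mine, and the exponents and constants are the paper’s approximate fitted values as I recall them, so treat them as rough:

    # Sketch of the power-law fits in Kaplan et al. (arXiv:2001.08361).
    # Constants are approximate values from the paper, quoted from memory.

    def loss_from_params(n):
        """Predicted test loss (nats/token) given non-embedding parameter count N."""
        n_c, alpha_n = 8.8e13, 0.076
        return (n_c / n) ** alpha_n

    def loss_from_data(d):
        """Predicted test loss given dataset size D in tokens."""
        d_c, alpha_d = 5.4e13, 0.095
        return (d_c / d) ** alpha_d

    # Loss keeps falling smoothly over the range they measured; whether the
    # trend continues past that range is exactly the open question here.
    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"N={n:.0e}: predicted loss ~ {loss_from_params(n):.2f}")

The point is just that the curve is smooth over several orders of magnitude, with no wall anywhere in the measured range.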
It may be true that new data is coming in at a trickle these days, with Discord, Slack, et al. all locking conversations and context up, and the daily volume of chatter being small relative to what is already out there.
But the fact is that existing training data can be used in many different ways, and I bet we see the products of that fairly quickly, as those who see this the same way I do reach the point where they want to show and tell and test.
>But the fact is that existing training data can be used in many different ways, and I bet we see the products of that fairly quickly, as those who see this the same way I do reach the point where they want to show and tell and test.
Sounds like wishful thinking to overcome the limitations of LLMs.
At the same time, more and more of the text out there is generated by LLMs, so it gets harder to find genuinely human-written text.