That might be fair in the short term. It isn't workable long-term, though, or all such models will become increasingly limited in their knowledge as humanity advances technologically and culturally.


To be honest, LLMs are themselves a short-term approach: they can get us, at most, to AGI for the current era. I don't see us reaching ASI with LLMs alone. The sort of emergent ability ASI requires calls for something simpler and more fundamental, where learning is more immediate (not sure those words convey what I really mean). Otherwise, LLMs will always have a maximum at which they fail, and that maximum is the collective intelligence of all of humanity in the current epoch. Go back 1,000 years and the collective intelligence of humanity was completely different, primitive even. Would LLMs trained on that data have produced the knowledge we have today? I don't think so. They could still, theoretically, reach AGI for that era and accelerate the pace of learning by 50-100 years at a time. LLMs will surely accelerate the pace of learning (as tools) even now, but by themselves they won't reach ASI. For ASI we really need something simpler and more fundamental that is yet to be discovered. I don't think LLMs are the way to ASI. AGI? Possible.


The same is true for humans: a scientist inventing everything from their head would not achieve much, but one who can conduct experiments, and who perseveres, eventually makes discoveries. A pure LLM is the first case; an LLM with tools, or as part of a larger system, is the second. A rough sketch of that loop is below.
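
To make the distinction concrete, here is a minimal sketch of such a tool loop, not anyone's actual implementation. The `llm` and `run_experiment` functions are hypothetical stand-ins: in practice `llm` would call a model API and `run_experiment` would execute code, query a database, or run a real measurement.

    # Minimal sketch of the "LLM + tools" loop described above.
    # llm() and run_experiment() are hypothetical stand-ins.

    def llm(prompt: str) -> str:
        """Stand-in for a model call: asks for an experiment until
        an observation appears in its context, then answers."""
        if "observed:" in prompt:
            return "Water boils at roughly 120 C at 2 atm."
        return "EXPERIMENT: measure boiling point of water at 2 atm"

    def run_experiment(request: str) -> str:
        """Stand-in for a tool: returns an observation the model
        could not have derived from its training data alone."""
        return "observed: 120.2 C"

    def solve(question: str, max_steps: int = 5) -> str:
        # A pure LLM is a single call: llm(question).
        # The loop is the "larger system": the model can act,
        # observe the result, and fold it back into its context.
        context = question
        for _ in range(max_steps):
            reply = llm(context)
            if reply.startswith("EXPERIMENT:"):
                context += "\n" + reply + "\n" + run_experiment(reply)
            else:
                return reply
        return "no answer within step budget"

    print(solve("At what temperature does water boil at 2 atm?"))

The point of the loop is that each observation enters the context as new information, so the system can exceed what was frozen into the model's weights at training time.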



