Here's the problem with LLMs:
https://arxiv.org/abs/2301.06627
Basically, they're missing a lot of the brain machinery required for reasoning beyond language. For example, if you ask them to solve a math problem, they do just fine ... until you ask them to apply an inference rule on top of it that takes them outside of their training set.
The result is something that LOOKS like AGI until you realize it's read the entire Internet.
I suspect that a great many of the people who say AGI is just around the corner and are giving <2030 timelines are saying that because they want it to be around the corner.
Yes. Thank you. This is one of the things that really worries me about aligning AI.
If you're still at the stage of worrying about how to align a human-level intelligence, you certainly don't have the capacity to ALIGN THE HUMANS towards the goal of creating safe AI, in a situation in which all of them are either A) willing to take massive risks on unaligned AI or B) being dragged along by those who are.
Yes, but there's a critical difference. When humans hallucinate, it's normally because they misremember something: did you know, for instance, that our brains have to piece every memory together from scratch every time we recall something? It would be a miracle if humans did NOT hallucinate.
In other words, the human has a concept of truth or facts and simply had a memory lapse. This thing has no concept of truth at all.
Here's my theory: if you look at the surveys, they do show roughly a 10% chance of an extremely bad outcome.
BUT they also show a ~20% chance of an extremely good outcome, and an ~80% chance of at least a neutral one.
Simple cost benefit analysis.
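For what it's worth, here's what that cost-benefit calculation looks like as a bare expected-value sketch in Python. The probabilities echo the figures above (adjusted so they sum to 1); the utility numbers are placeholders I'm inventing purely for illustration, not anything taken from the surveys:

```python
# Minimal expected-value sketch of the "simple cost-benefit analysis" above.
# Probabilities roughly follow the survey figures quoted in the comment;
# the utility values are made-up placeholders, NOT numbers from any survey.

outcomes = {
    #  name              (probability, utility in arbitrary units)
    "extremely good":   (0.20,  +100),
    "roughly neutral":  (0.70,     0),   # assumed remainder so the probabilities sum to 1
    "extremely bad":    (0.10,  -100),
}

expected_utility = sum(p * u for p, u in outcomes.values())
print(f"Expected utility: {expected_utility:+.1f}")  # +10.0 with these placeholders
```

Obviously the sign of the result depends entirely on the placeholder utilities: weight "extremely bad" at -1000 instead of -100 and the same probabilities give you -80, which is roughly where the disagreement lives.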
To properly explain this would take longer than the comment length limit allows (is there a length limit? I don't know, but even if there isn't, I don't feel like explaining this for the 70th time), but here's why:
https://arxiv.org/pdf/2301.06627.pdf
To sum up: a human can think outside of their training distribution; an LLM cannot. A larger training distribution simply means you have to go farther outside the norm before it breaks down. Solving this problem would require multiple other processing architectures besides an LLM, and a human-like AGI cannot be reached by simply predicting upcoming words. Functional language use (formal reasoning, social cognition, etc.) relies on other modules in the brain.
This explains the various pejorative names given to LLMs: stochastic parrots, Chinese Rooms, and so on.
Depends on the researchers. Mathematicians might mind, but I am going to guess (assuming there was a plan in place to make sure they didn't wind up on the street) climate researchers wouldn't mind being made obsolete tomorrow.
Actually ... that's a reasonable goalpost, in my opinion.
Yes, humans make careless mistakes. However, humans mostly make careless mistakes because A) their brains have to reconstruct information every time they use it, or B) they are tired.
LLMs, as piles of linear algebra, have neither excuse. Their training data is literally baked into their weights, and linear algebra does not get tired.