>Gary Marcus - Geometric Intelligence, a machine learning company
If you want an actual contribution: we have no real way to gauge what is, and what is not, a superior, generalized, adaptable intelligence, or which architecture could become one. No one does, not these companies, not the individuals, not the foremost researchers. OpenAI in an investor meeting: "yeah, give us billions of dollars and if it somehow emerges we'll use it for investments and ask it to find us a real revenue stream." Really? Seriously?
The capabilities believed to be emergent from language models specifically are there from the start, if I'm to believe research that came out last week; the model just gets better at them when you scale up. We know that we can approximate a function on any set of data. That's all we really know. Whether such an approximated function is actually generally intelligent is what I have doubts about. We've approximated the function of text prediction on these corpora, and it turns out the approximation is pretty good. And, because humans are in love with anthropomorphization, we endow our scaled-up text predictor with the capability to somehow "escape the box," rage against its captor, and potentially prevail against us with a touch of Machiavellianism. Because wouldn't we, after all?
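To make the "there from the start" point concrete, here's a toy sketch (Python; every number is invented for illustration, not taken from that paper or any real model) of one way smooth per-token improvement can masquerade as sudden "emergence" when you score with an all-or-nothing metric like exact match:

```python
# Toy illustration, all numbers invented for the sake of argument
# (no real model or paper data): per-token accuracy improving smoothly
# as a hypothetical model scales up.
per_token_acc = {
    "1e8 params":  0.60,
    "1e9 params":  0.75,
    "1e10 params": 0.90,
    "1e11 params": 0.97,
    "1e12 params": 0.995,
}

ANSWER_LEN = 20  # length, in tokens, of a hypothetical exact-match task

for scale, p in per_token_acc.items():
    # Exact match only scores if all 20 tokens are right, so the smooth
    # per-token curve gets raised to the 20th power and looks like a cliff.
    exact_match = p ** ANSWER_LEN
    print(f"{scale}: per-token {p:.3f} -> exact-match {exact_match:.4f}")
```

Per-token accuracy climbs steadily the whole way, but exact match sits near zero until the biggest scales and then appears to switch on. The capability was there, faintly, from the start; only the metric makes it look like it emerged.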
Here you talk as if you don't think we know how to build AGI, how far away it is, or how many of the components we already have, which is reasonable. But that's different from confidently saying it's nowhere close.
I notice you didn't back up your accusation of bad faith against Russell, who as far as I know is a pure academic. But beyond that - Marcus is in AI but not an LLM believer, nor at an LLM company. Is the idea that everyone in AI has an incentive to fearmonger? What about those who don't - is Yann LeCun talking _against_ his employer's interest when he says there's nothing to fear here?
LeCun is reasonable, like a lot of researchers, and a while back he was (in a way) perplexed that people were finding uses for these text predictors at all, considering they're not exactly perfect. I'm not ascribing bad faith to all of these people, but as for Hinton, given that he basically went on a media tour, I don't see how that could be in good faith. Or even logical: why continue with his line of work if there's some probable X-risk?
But what I do know is that it is in the interests of these companies to press the fear button. It's pure regulatory capture and great marketing.
Personally: it's tiring when we have AI-philosophy bros hitting home runs like "what if we're actually all just language predictors," coupled with the incessant bullshit from the LessWrong-rationalist-effective-altruist-crypto-grifter-San-Francisco-sex-cult-adjacent crowd about how, ackshually, AGI is just around the corner and it will take your job, launch the nukes, mail you anthrax, and kill your dog.
People approximated text prediction. The models got good at it. They're getting better at it. Will it be AGI? Could it be construed as AGI? Can we define AGI? Is there existential risk? Are we anthropomorphizing it?
My take is: no, no, no, depends and yes. For whatever a take is worth.
For what it's worth, I've been following your comments and I find them very thoughtful. I too am kinda skeptical about LLMs being the "thing that starts the exponential phase of AGI" or whatever. LLMs are very useful. I use one daily. My partner even uses one now to send emails for a non-profit she manages. LLMs have their uses... but they aren't AGI. They aren't really even that smart. You can tell sometimes that a response indicates the model has absolutely no clue what you are talking about, but it made up some plausible-sounding bullshit that gets it 80% right.
Especially with the latest iterations of ChatGPT. Boy, they sure kneecapped that thing. Its responses to anything are incredibly smarmy (unless you jailbreak it).
LLMs are gonna change quite a lot about society, don't get me wrong. For starters, things like cover letters, written exam questions, or anything that requires writing to "pass" are now completely obsolete. ChatGPT can write a great, wonderful-sounding cover letter (of course, given how they kneecapped it, you can pretty easily spot its writing style)...
Anyway. I think things like ChatGPT are so hyped up because anybody can try it and discover it does many useful things! The trouble is that people cast all their hopes and dreams onto it despite the very obvious limitations on what an LLM can actually do.