When more than 1 company has "AGI", or whatever we're calling it, and people realise it is not just a license to print money.
Some people are rightly pointing out that for quite a lot of things right now we probably already have AGI to a certain extent. Your average AI is way better than the average schmuck on the street in basically anything you can think of - maths, programming, writing poetry, world languages, music theory. Sure, there are outliers where AI is not as good as a skilled practitioner in foo, but I think the AGI bar is about being "about as good as the average human," not showing complete supremacy in every niche. So far the world has been disrupted, sure, but not ended.
ASI of course is the next thing, but that's different.
I think the AI is only as good as the person wrangling it a lot of the time. It's easy for really competent people to get an inflated sense of how good the AI is, in the same way that a junior engineer is often only as good as the senior leading them along and feeding them small chunks of work. When led with great foresight, careful calibration, and frequent feedback and mentorship, a mediocre junior engineer can be made to look pretty good too. But take away the competent senior and you're left pretty lacking.
I've gotten some great results out of LLMs, but that's often because the prompt was well crafted and numerous iterations were performed based on my expertise.
You couldn't get that out of the LLM without that person most of the time.
> I think the AI is only as good as the person wrangling it a lot of the time.
To highlight the inverse: if someone truly has an "AGI" system (the acronym the goalposts have been moved to), then it wouldn't matter who was wrangling it.
Nah. The models are great, but the models can also write a story where characters who are clearly specified in the prompt as never having met immediately address each other by name.
These models don't understand anything similar to reality and they can be confused by all sorts of things.
This can obviously be managed and people have achieved great things with them, including this IMO stuff, but the models are despite their capability very, very far from AGI. They've also got atrocious performance on things like IQ tests.
Yeah, that framing for LLMs is one of my pet causes: it's document generation, some documents resemble stories with characters, and everything else (e.g. "chatting" with an LLM) is an illusion, albeit an impressive and sometimes-useful one.
Being able to generate a document where humans perceive plausible statements from Santa Claus does not mean Santa Claus now lives inside the electronic box, that flying sleighs are real, etc. The principle still holds even if the character is described as "an intelligent AI assistant named [Product Name]".
I don't understand your comment. To phrase my point in the terms of your document view: even though the models can generate some documents well (computer programs, answers to questions), they are terrible at generating others, such as stories.
I'm underlining that "it's a story, not a conversation" is indeed the direction we need to think in when discussing these systems, where an additional step along that direction is "it's a document which humans can perceive as a story." That's the level on which we need to engage with the problem, asking what features of a document seem wrong to us and why it might have been iteratively constructed that way.
In the opposite direction, people (understandably) fall for the illusion and start operating under the assumption that they are "talking to" some kind of persistent entity capable of having goals, beliefs, or personality traits. Voodoo debugging.
“AGI” will be whatever state of the art we have at the time the money runs out. The investors will never admit that they built on sand but declare victory by any means necessary, even if it's hollow and meaningless.