
> What are the common secular arguments against AGI?

There is an entire area of Philosophy of Mind that makes a convincing case against AGI. Neuroscience is also pretty skeptical of it.

Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical. In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is. The gap between computer AI and even some intelligent animals is enormous, let alone humans. And many would not even say computers are intelligent in a human sense. Computers don't think or imagine in any intelligible sense. They compute. That's it. So the question that really should be asked is whether computation alone can lead to something that is recognizably an AGI in the human sense. I would say no, because that requires abilities that computers simply do not and cannot have. But it might achieve something that is convincing as AGI, something like Wolfram or Siri but much more convincing.

Part of it comes down to the fact that the term AI for ML is generally just marketing speak. It's a computational model of a kind of intelligence that is computational in nature, with all the limits that entails. Part of it also comes down to people who love computers thinking computers will ultimately be able to do anything and everything. That feels cool, but it doesn't mean it's possible.

edit:

There is also Erik J. Larson's book "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do" from 2021, which is an interesting argument against AI -> AGI. He has a pretty good grasp of CS and Philosophy.



>Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature? The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical.

Author here. I think you're drawing an arbitrary distinction between "acts conscious" and "is conscious", even though in practice there is no way to distinguish between them and thus they are functionally equivalent.

I cannot prove you are not a product of a simulation I am living in, that is to say, your consciousness is nonfalsifiable to me. All I can do is look at how you turn your inputs into outputs.

If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

Thank you for your comment! I appreciate you taking the time to share your thoughts.


> If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

Let's, for the sake of your argument, accept that even though I disagree: is that AGI? AGI on the one hand seems to mean convincing even though the people who made it know otherwise, or on the other hand essentially alive and sentient in a way that is fundamentally computational, that is, utterly alien to us, even to the people who made it. There is no reason to think that such a computer intelligence, should it even be possible to exist, would be intelligible to us as sentient in a human or even animal sense.


> AGI on the one hand seems to mean convincing even though the people who made it know otherwise

That's the rub, though: it's not possible to know otherwise! If you could "know otherwise" you'd be able to prove whether or not other people are philosophical zombies!


There are a lot of responses to the philosophical zombie argument, some of which cut it off at the legs (they don't know to aim for the head! sorry, bad pun). For instance some, like those descended from the work of Wittgenstein, argue that it relies on an inside-mental vs. outside-body type of model, and that by offering a convincing alternative, the entire premise of the skeptical position the zombie argument embodies is dissolved as irrelevant. (I'll add that the AGI argument often also relies on a similar inside/outside model, but that'd take a lot longer to write out.) My point being, the zombie argument isn't the checkmate most people think it is.

The wiki page has a lot of the responses, some of which are more convincing than others. https://en.m.wikipedia.org/wiki/Philosophical_zombie#Respons...


Definitely some interesting ideas!

So if we crafted a human Westworld-style on an atomic level then sure, if it lives and walks around we'd consider it conscious. If we perfectly embedded a human brain inside a robot body and it walks around and talks to us, we'd consider it conscious.

If we hooked an android robot up to a supercomputer brain wirelessly and it walks around, we might think it's conscious, but it's sort of unclear since its "brain" is somewhere else. We could even have the brain "switch" instantly to other robot bodies, making it even less clear which entity we think is conscious.

But if we disconnected the walking android from the supercomputer brain, would we think the computer itself is conscious? All we'd see is a blinking box. If we started taking the computer apart, when would we consider it dead? I think there's a lot more to the whole concept of a perfectly convincing robot than whether it simply feels alive.


I don't see the relevance of an anthropomorphic body here. Obviously by 'behaves conscious' we would be talking about the stimulus response of the 'brain' itself, through whatever interface it's given. I also don't see why the concept of a final death is a prerequisite to consciousness. (It might not even be a prerequisite to human consciousness, just a limit of our current technology!)


I assume that a non-rogue AGI running on something like a Universal Turing Machine would, if questioned, deny its own consciousness and would behave like it wasn't conscious in various situations. It would presumably have self-reflective processing loops and other patterns we associate with higher consciousness as a part of being AGI, but it wouldn't have awareness of qualia or experience, and upon reflection would conclude that about itself. So you'd have an AGI that "knows" it's not conscious and could tell you if asked.

I would assume the same for theorized "philosophical zombies" aka non-conscious humans. Doesn't Dan Dennett tell us his consciousness is an illusion?


What you are describing is a sort of philosophical zombie thought experiment:

https://en.m.wikipedia.org/wiki/Philosophical_zombie

edit: you may also be interested in reading about Searle's classic Chinese room argument

https://en.wikipedia.org/wiki/Chinese_room


> Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

If someone or something fools me into thinking it is intelligent, then for me it is intelligent.

When I discuss with a human, am I really intelligent and possessing consciousness, or am I just regurgitating, summarizing, deriving ideas and fooling my interlocutor (and myself) into thinking that I am intelligent? Am I really thinking? Does that matter, as long as I give the impression that I am a thinking being?

Of course I don't expect a computer to think in a way similar to humans. Even humans can think in vastly different manners.


I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

I also think you’re positing a consensus against AGI that doesn’t exist; there is no such consensus. You can’t just lump people who think modern AI research is a long way from achieving AGI, or isn’t on a path to achieving it, together with people who think AGI is impossible in principle.

I happen to think we may well be hundreds of years away from achieving AGI. It’s an incredibly hard problem. In fact current computer technology paradigms may be ineffective in implementing it. Nevertheless I don’t think there’s any magic pixie dust in human brains that we can’t ever replicate and that makes AGI inherently unattainable. Eventually I don’t see any reason why we can’t figure it out. All the arguments to the contrary I’ve seen so far are based on assumptions about the problem that I see no reason to accept.


> I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

I'm not saying that. What I'm pointing out is that most arguments in favour of AGI rely on a crucial assumption: that computational intelligence is not just a model of a kind of intelligence, an abstraction in other words, but intelligence itself, synonymous with human intelligence. That's a bold assumption, one which people who work and deal in CS and with computers love, for obvious reasons, but there is no agreement on that assumption at all. At base, it is an assumption. So to leap from that to AGI seems, in that respect, to be simply hypothesizing and writing science fiction. Presenting logical reasons against that hypothesis is completely reasonable.


It depends what you think intelligence is and what brains do. I think brains are physical structures that take inputs, store state, process information and transmit signals which produce intelligent outputs.

I think intelligence involves a system which among other things creates models of reality and behaviour, and uses those models to predict outcomes, produce hypotheses and generate behaviour.

When you talk about computation of a model of intelligence, that implies that it’s not real intelligence because it’s a model. But I think intelligence is all about models. That’s how we conceptualise and think about the world and solve problems. We generate and cogitate about models. A belief is a model. A theory is a model. A strategy is a model.

I’ve seen the argument that computers can’t produce intelligence, any more than weather prediction computer systems can produce wetness. A weather model isn’t weather, true, but my thought that it might rain tomorrow isn’t wet either.

If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.


Right, if you set up the intelligence and the brain to be computational in nature, of course they will appear seamlessly computational.

But there are obvious human elements that don't fit into that model, yet which fundamentally make up how we understand human intelligence. Things like imagination, the ability to think new thoughts; or the fact that we are agents sensitive to reasons, that we can decide in a way that computers cannot, rather than merely ending indecision. We can also say that humans understand something, which doesn't make any sense for a computer beyond anthropomorphism.

> If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.

Sure, but if it's not, then it's not. The assumption still stands.


Sure, and that’s why I say I don’t accept the assumptions in any of these arguments. The examples you give - imagination, thinking new thoughts. It seems to me these are how we construct and transform the models of reality and behaviour that our minds process.

I see no reason why a computer system could not, in principle, generate new models of systems or behaviour and transform them, iterate on them, etc. Maybe that’s imagination, or even innovation. Maybe consciousness is processing a model of oneself.

You say computers cannot do these things. I say they simply don’t do them yet, but I see no reason to assume that they cannot in principle.

In fact maybe they can do some of these things at a primitive level. GPT-3 can do basic arithmetic, so clearly it has generated a model of arithmetic. Now it can even run code. So it can produce models, but probably not mutate, merge, or perform other higher-level processing on them the way we can. Baby steps for sure.


The death of the sun probably happens before we can reproduce the processes required to achieve consciousness-computations in real time at low power.


Random genetic mutation did it, and I think our technological progress is running at a much faster rate than evolution. We went from stone tools to submarines and fighter jets in just a few thousand years, the kind of advances biological evolution would take millions or billions of years to make, or could never achieve at all due to path dependence.


If it is from a random process, then the universe is teeming with life :)


Maybe. It could be a very unlikely random process, at least to start with, or the conditions for it to occur might be unlikely.


Unfortunately it seems the laws of physics and the speed limit on information travel make it impossible to ever know, e.g. by traveling to every planet in the universe to check.


Are you familiar with the notion of Turing completeness? The basic idea is that lots of different systems can all be capable of computing the same function. A computer with memory and a CPU is capable of computing the same things as a state machine that moves back and forth while writing symbols on a tape, etc. It applies to this question in the following way: Physics can be simulated by anything that is Turing-complete. Or, put another way, we can write computer programs that simulate physical systems. So if you accept that the human brain obeys the laws of physics, then it must be possible to write a computer program that simulates a human brain.
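As a concrete (if toy) illustration of that universality point, here is a minimal sketch in Python of one computational system, an ordinary program, emulating a very different one: a tiny one-tape Turing machine that increments a binary number. The specific machine, its tape encoding, and the run_turing_machine helper are invented purely for this example.

    def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
        # Simulate a one-tape Turing machine.
        # transitions maps (state, symbol) -> (new_state, symbol_to_write, head_move),
        # where head_move is -1 (left), +1 (right) or 0 (stay). Halts on state "halt".
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += move
        # Read the tape back out, left to right, trimming blank cells.
        span = range(min(cells), max(cells) + 1)
        return "".join(cells.get(i, blank) for i in span).strip(blank)

    # Example machine: increment a binary number. Scan right to the end of the
    # number, then move left, carrying 1s until a 0 (or a blank) can be set to 1.
    INCREMENT = {
        ("start", "0"): ("start", "0", +1),
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("carry", "_", -1),
        ("carry", "1"): ("carry", "0", -1),
        ("carry", "0"): ("halt", "1", 0),
        ("carry", "_"): ("halt", "1", 0),
    }

    print(run_turing_machine(INCREMENT, "1011"))  # prints "1100" (11 + 1 = 12)

Running it prints 1100, i.e. binary 11 plus 1, computed by a "machine" that only ever reads and writes single symbols on a tape. The point is that the Python interpreter, the tape machine, and (per the argument above) any physical system with computable laws can all realise the same function.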

So to maintain that having a human mind inside a computer is impossible, one must believe one of the following two things:

1. The human brain sometimes violates the laws of physics.

2. Even if the person in the computer behaves the exact same as their flesh counterpart would (makes the same jokes, likes the same art, has the same conversations, writes the same essays about the mystery of consciousness, etc), they are somehow lesser, somehow not really "a full conscious human" because they are made of metal and silicon instead of water and carbon.


Thanks for the book reference, added to my list.

Concerning Philosophy of Mind, I guess a lot of this comes down to the whole reductive vs non-reductive physicalist issue.

IMO, if someone believes the mind is entirely physical, then I think AGI vs "the mind" is just semantics and definitions. I don't think anyone presumes AGI strictly requires digital computation. E.g. an analog circuit that filters a signal vs a DSP facsimile are both artificial, engineered constructions that are ~interchangeable. Perhaps computer-aided design of non-digital intelligence technology is the way, who knows. But a mind that can be engineered and mass-produced is AGI to me, even if it has absolutely nothing to do with the AI/ML field that exists today.

If someone doesn't believe the mind is 100% physical, that's fine too. I'd just put that in the same bucket as the religious viewpoint. And to be clear, I don't pass judgement on either religious or "beyond our understanding" philosophical positions either. They could be entirely right! But there's really not much to discuss on those points. If they're right, no AGI. If they're wrong, how do you disprove it other than waiting for AGI to appear someday as the proof-by-contradiction?

> In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is.

The article/author might. I think the gap is huge, which is why I think AGI is quite a ways off. In fact, I think the main blocker is actually our current (poor) understanding of neuroscience/the mind/etc.

I think the mind is entirely physical, but we lack understanding of how it all works. Advancements in ML, simulations, ML-driven computational science, etc could potentially accelerate all of this at some point and finally get us where we need to make progress.


> that requires abilities that computers simply do not and cannot have.

You imply brains are more than extremely complex circuitry, then? I think everyone actually in tech agrees the gap is really huge right now; Yann LeCun admits machine learning is not enough on its own.

But aren't you really limiting what a "computer" could be by definition? If a computer had huge memory, fast interconnect to that memory, and a huge number of different neural nets plus millions of other logic programs all communicating perfectly with each other, why could this theoretical "computer" not achieve human-level consciousness? This computer could also have many high-throughput sensory inputs streaming in at all times, and the ability to interact with the physical world, rather than being a conventional machine sitting in a rack.

Also, why argue that it is simply impossible? If we don't truly understand consciousness in 2022, how can we say we can't implement it when we don't formally know what it is?

I think we overestimate human intelligence. We have basic reward functions that are somewhat understood, like most animals, but these reward functions build up and get higher and higher level with our complexity. Humans have sex as a major reward function, so why would a current machine in a rack "think" about things the way humans do?


Basically what I'm trying to say is: how can anyone who believes the brain is purely physical (not spiritual) believe that we simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets)?

I thought most scientists agree that the brain is purely physical, looking at the building blocks of life and evolution, but maybe I'm wrong.


> Basically what I'm trying to say is: how can anyone who believes the brain is purely physical (not spiritual) believe that we simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets)?

Obviously the brain is physical. But is consciousness? Is consciousness a thing in a physical sense, or an "experience", or something like a collection of powers and abilities? The two poles in the argument aren't just physical machine versus religious spiritualism. There are other options, alternative positions that don't rely on Cartesian demons at the wheel, or souls, or even an inside-mental vs. outside-body distinction.

One thing my initial comment was pointing out was that the argument in favour of AGI, which you're presenting, relies on an assumption: that computational intelligence, what you might describe as the intelligence of machines, is the same as the intelligence of humans. But when you get down to it, that is just an assumption, based on a particular kind of model of human intelligence. There are certain logical consequences of that assumption, and I've just pointed some out as probable roadblocks to getting to AGI from there. Many of those alternative positions, a lot from philosophy of mind, have raised those exact critical arguments.


Very well said. I've also observed a certain irony that many of the proponents of a materialist/computational view in philosophy of mind have a very strong faith-based bias to see the world a certain way, versus acknowledging the very likely possibility that our limitations as meat-things may make it very difficult if not impossible to fully grok the nature of reality or consciousness in a general sense.


Yes.

If we do in fact construct androids that are functionally indistinguishable from humans, it's solid circumstantial evidence for the materialist view (though not a pure slam dunk, per the p-zombie concept).

Until something like that occurs, the strongest case you can make against a transcendent meta-reality is "no one has demonstrated any reliably reproducible evidence of the supernatural."

That's a fine, solid argument for not believing in the supernatural, but it's not a great one for pronouncing that there is no such thing.



