Nobody really has a clear understanding of what sentience actually is. :)
But I can't resist the opportunity to explain my point of view with an analogy. Imagine two computers that have implemented an encrypted communication protocol and are in frequent communication. What they are saying to each other is very simple -- perhaps they are just sending heartbeats -- but because the protocol is encrypted, the packets are extremely complex, and sending a valid one without the associated keys is statistically very difficult.
Suppose you bring a third computer into the situation and ask: does it have a correct implementation of this protocol? An easy way to answer that question is to see if the original two computers can talk to it. If they can, it definitely does.
"Definitely?" a philosopher might ask. "Isn't it possible that a computer might not have an implementation of the protocol and simply be playing back messages that happen to work?" The philosopher goes on to construct an elaborate scenario in which the protocol isn't implemented on the third computer but is implemented by playing back messages, or by a room full of people consulting books, or some such.
I have always felt, in response to those scenarios, that the whole system, if it can keep talking to the first computers indefinitely, contains an implementation of the protocol.
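The analogy can be made concrete with a small sketch -- not any real protocol, just an illustration under assumed details (HMAC over random challenges standing in for the encrypted heartbeats):

```python
# Sketch of the analogy: two parties share a secret key, and a
# "heartbeat" is a fresh random challenge answered with an HMAC.
# Without the key, producing a valid response to a challenge never
# seen before is statistically infeasible, so a library of replayed
# past messages is no help -- which is why keeping up the
# conversation indefinitely implies a real implementation somewhere
# in the system.
import hmac
import hashlib
import os

SHARED_KEY = os.urandom(32)  # known only to genuine implementations


def respond(key: bytes, challenge: bytes) -> bytes:
    """What a correct implementation does with a heartbeat challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def can_talk_to(candidate, rounds: int = 100) -> bool:
    """The first two computers' test: keep challenging the newcomer.

    `candidate` is any callable from challenge bytes to response
    bytes. Each round uses a fresh random challenge, so replay fails.
    """
    for _ in range(rounds):
        challenge = os.urandom(16)
        expected = respond(SHARED_KEY, challenge)
        if not hmac.compare_digest(candidate(challenge), expected):
            return False
    return True


# A genuine third computer passes; a replay attacker does not.
genuine = lambda c: respond(SHARED_KEY, c)
replayer = lambda c: b"\x00" * 32  # canned message, no key
```

The philosopher's playback scenario corresponds to `replayer`: it can only repeat what it has stored, and fresh challenges defeat it.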
If you imagine all of this taking place in a stone age society, that is a good analogy for how I feel about consciousness. Such a society may not know the first thing about computers, though they can certainly break them -- perhaps even in some interesting ways. And all we usefully know about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of. Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know. I'll believe it when someone can build it.
And yet I give a tremendous amount of weight to the fact that the sentient can recognize each other. I don't think Turing quite went far enough with his test, as some people don't test their AIs very strenuously or very long, and you get some false positives that way. But I think he's on the right track -- something that seems sentient if you talk to it, push it, stress it, if lots of people do -- I think it has to be.
One thing I really love is that movies on the topic seem to get this. If I could boil what I am looking for down to one thing, it would be volition. I have seen it written, and I like the idea, that what sets humanity apart from the animals is our capacity for religion -- or transcendental purpose, if you prefer. That we feel rightness or wrongness and decide to act to change the world, or in service to a higher principle. In a movie about an AI that wants to convince the audience the character is sentient, it is almost always accomplished quickly, in a single scene, with a bright display of volition, emotion, religious impulse, spirit, lucidity -- whatever you want to call that. The audience always buys it very quickly, and I think the audience is right. Anything that can do that is speaking the language. It has to have a valid implementation of the protocol.
> And all we know usefully about consciousness is some interesting ways to break it. We don't know how to build it. We don't even know what it's made out of.
Exactly. Given this thread, we can't even agree on a definition. It might as well be made of unobtanium.
> Complexity? Some as yet undiscovered force or phenomenon? The supernatural? I don't know.
And that's the big one. Are there other areas of science that are yet to be discovered? Absolutely. Might some of them have gone by "occult" names previously? I'm sure of that as well. We simply don't have even a basic model of consciousness. We don't even have the primitives with which to define, understand, or classify it.
And I think the real dangers lie with those who dabble in this realm... Not danger to humans in some Battlestar Galactica or Borg horror-fantasy, but the danger that we could create a sentient class of beings that have no rights and are slaves from the moment of their creation. And unlike the enslaved humans of this world, where most of us eventually realized it was wrong to do that to a human, I don't think humans would have similar empathy for non-human sentient beings.
> I'll believe it when someone can build it.
To that end, I hope nobody does until we can develop the empathy and the requisite laws to safeguard their lives, their freedom, and their ability to choose their own path.
I do hope that we develop the understanding to detect it, even in beings that may not show apparent signs of sentience, so that we can better understand the universe around us.