Indeed, the idea that ‘grown’ AI is the way forward is nothing more than a fudge; we don’t (fully) understand consciousness, so it is convenient to think that if we can design a machine whose operation we don’t fully understand, then it could become conscious. But there’s no particular reason why it must.
I still find it highly amusing that all of the talk of illusion and ‘feeling of sentience’ still falls short as these things require a subject/observer - we aren’t just machines that produce all the external signs of consciousness, we actually experience it - or at least I do, or at least I think I do.
I remain unconvinced that a flawless imitation of the outwardly observable aspects of consciousness would necessarily mean that there is anything more going on behind the scenes than very intricate levers and pulleys (although again, it will be argued that that’s all there is inside me). In essence, a (hypothetical) perfect holographic recording of me hitting my thumb with a hammer might convince a viewer that it is feeling pain, but it wouldn’t really be feeling anything. The measure of consciousness must be not whether it appears to be conscious, but whether it believes itself to be conscious. Information forever out of reach to third parties.
My “outward appearance” would be nothing so superficial as a hologram - I would suggest that the equivalent of a PET scan, wherein individual neuronal pathways and synaptic activity associated with a “thought” could be observed, would be the hypothetical observable imitation - but agreed, Mange, far more research is needed into what we are trying to imitate in the first place, and even then we might never know that we’ve succeeded.
But, as robertliguori’s post illustrated above, maybe these simple sense+memory+language operations ARE the experience - perhaps all one needs to create a “subject which believes it is sentient” is these elements combined in a certain way.
True AI has been making gamers all over the planet salivate ever since Matthew Broderick played chess in “War Games”. But I think that it is a much deeper and more profound subject than just having someone to play games with, and who cleans up after you.
If a mechanical object is declared to be a true AI under many different definitions - self-awareness, self-protection, able to heal, able to reason and learn, able to reproduce - then it will, by all means, be true life. In a sense that cheapens what humans have known to be life up until now.
If a machine is alive, inside a metal and plastic shell, then we are alive as well, simply inside a shell made of skin. The definition of life changes, and becomes more mechanical.
The maker has as much responsibility as a father does for a young child.
My jury is out on whether this would actually be a good thing or not.
Can you provide a cite where these codified restrictions on “robotic” life existed pre-Asimov? I consider myself somewhat well read (science fiction wise) and I can’t think of any older uses of this “plot device” (at least not this early in the morning).
Sorry, by ‘well worn’, I meant that Asimov himself used them very extensively and repeatedly, to the point of becoming tiresome (IMO). Perhaps ‘much beloved’ would have been a better phrase.
Judging the sentience of something on whether it has experiences is pointless. By that standard, each individual is the only sentient thing in his universe.
I never miss a thread about either consciousness or cosmology.
(Sometimes I wonder how the heck people have time to think about anything else!)
And Aide, I would suggest you don’t get too hung up on the definitions - this topic by its very nature is seeking workable language almost as its very raison d’etre. We are asking “what are the characteristics of belief-in-a-subject, on a neurophysical level?” - shall we simply see where we can get to before we must declare pointlessness?
But that’s the problem. We’re arguing over how we can confirm the presence of a specific phenomenon in the world without first establishing that it exists in the first place!
If we don’t know what we’re looking for, how can we possibly find it?
As I said, at the very least the illusion exists: My very use of the word “I” necessitates a belief that this unique string of memories fed by sensory input typing these words is a “self”. We must, by experiment, see what it takes to destroy this illusion, e.g. going into (comatose) sleep, and use our every available tool to explore how the illusion comes about. Only then might we have reasonable assurance that the circuitry before us is “thinking the same thing”, but I would agree that we might never be sure.
My acceptance that other humans are conscious is based not only on their outward behaviour, but also on the recognition that they are implemented on similar hardware to mine; there’s no way to tell if I’m right in making this assumption.
An interesting corollary: an artificial intelligence could well decide that we humans have no consciousness, since we’re based on such different hardware.
B. F. Skinner, on the other hand, believed that our behavior was totally mechanical–that thought, sentience, whatever-you-want-to-call-it had no effect on behavior, that it was just an interesting side effect of our large brains.
I’ve no expertise here but this is a very interesting debate. Just some rambling thoughts…
To define sentience - one approach might be to identify the “cut-off point” - at what point does a mind become so damaged that it can no longer be called sentient? What is the borderline, the vital ingredient?
Can I just give you a personal anecdote…
Some years ago I went to sleep in the daytime and woke in a completely dark room. Not only did I not know where I was, but I could not remember my name, anything about my life, what century I was in, or how old I was. This lasted quite a few seconds before memory came back. But during this mini-amnesia there was never any doubt that “I” was “I”. My sense of self was just as strong without any personal memories.
This suggests that there is a vigorous, if basic, sense of self independent of memory, and that it would take more than total amnesia to destroy the “illusion”. I’ve also worked with Alzheimer’s patients and the sense of self seems the last thing to go.
About the thermostat - Ramanjuan asked what is it we do when we decide; what is the difference between the thermostat’s “goal” and a sentient goal? Surely it is that the thermostat cannot IMAGINE a warmer or colder room. It just reacts to temperature. A sentient mind imagines various goals, then makes a decision based on emotion or reason or both. I think any definition of sentience would have to include imagination.
So my list might read:
*The capacity to imagine alternatives
*Emotion - to motivate choices and decisions
*Sense of self
On the other hand, the capacity to imagine requires memory, so I’m probably tying myself in knots here!
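To make the thermostat contrast concrete, here is a rough sketch in Python (purely illustrative - every name, number, and the toy “internal model” below is invented, and it is not a claim about how real minds work): the thermostat only reacts to the present reading, while the second agent runs a crude simulation of the alternatives before it picks one.

```python
# Illustrative sketch only: a purely reactive controller versus an agent
# that "imagines" (simulates) alternatives before choosing.

def thermostat(current_temp, setpoint=20.0):
    """Reacts to the present reading; no model of alternative futures."""
    return "heat on" if current_temp < setpoint else "heat off"

def imaginative_agent(current_temp, actions=("heat on", "heat off")):
    """Simulates the outcome of each action and picks the one whose
    imagined result it prefers - a crude stand-in for imagining alternatives."""
    def imagine(action):
        # toy internal model of what the room *would* become
        return current_temp + (2.0 if action == "heat on" else -1.0)

    def preference(imagined_temp):
        # toy "emotion": comfort peaks at 21 degrees
        return -abs(imagined_temp - 21.0)

    return max(actions, key=lambda a: preference(imagine(a)))

print(thermostat(18.0))           # "heat on" - pure reaction
print(imaginative_agent(18.0))    # "heat on", but chosen by comparing imagined outcomes
```

Both give the same answer here, of course; the point is only that the second one gets there by weighing imagined alternatives against a preference, which is the ingredient the thermostat lacks.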
Well, I seem to remember hearing something called the “ascription principle” in a philosophy class of mine, where we were discussing something very similar to this. However, now I can not find any references to it online, but if anyone can help me find information on that, I would be grateful.
Ok, on to AI. A good way to start talking about AI is Searle’s Chinese Room, link here. Originally this idea was posed to debunk the idea of a strong AI, but I’ve always felt it ultimately fails at that. Turing introduced the idea of a Turing Machine, link here, which modeled human behavior in a computational way. The Chinese room can be broken down in this way, and in the end, the Chinese room passes the Turing test and, for all purposes, understands Chinese. The man inside might not, but he’s not the subject of the test; he is merely the executive component of the Chinese room. So really, if something emulates consciousness and sentience to the extent that we cannot tell whether it’s faking or not, then for all purposes we can say that it is conscious, or sentient. It does not matter how it does this, but if it acts self-aware in all instances, then it is definitely self-aware as far as we can tell.
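Just to make that “executive component” point concrete, here is a toy sketch of my own (not Searle’s formulation; the lookup table and phrases are invented): the room replies by blind rule-matching, and nothing in it understands the symbols it shuffles, yet to an interrogator limited to these questions it looks like it “speaks Chinese”.

```python
# Toy "Chinese Room": the code blindly matches input symbols to canned replies.
# The rule book and phrases are invented purely for illustration.

RULE_BOOK = {
    "你好": "你好！你今天好吗？",            # "Hello" -> "Hello! How are you today?"
    "你会说中文吗": "会，我说得很流利。",     # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(symbols: str) -> str:
    """The man in the room: follows the rule book without understanding it."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(chinese_room("你好"))
```

Obviously a lookup table this small fails the Turing test in seconds; the question is whether a vastly bigger rule book would still be “just a lookup table” in any interesting sense.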
However, if you were talking to a box, and the box responded to all your questions in such a way that you could not determine that it wasn’t human, it may as well be human to you, at least in the sense that it could hold a conversation as well as any other human. If some machine could act in such a way that anyone ‘speaking’ to it was unable to differentiate it from a human, then why does it matter that it’s a machine? It’s as human as anyone else could be, minus a few problems and things specific to our organic brains and bodies.
One big reason that philosophers want to develop some sort of strong AI is that in the process of doing so, we can learn a lot about our own consciousness and how we function. In the end, does it matter whether we are ‘machines’ or ‘human’? At the end of the day, I will be who I am, and you will be who you are, no matter what anyone else says we are. I suppose the biggest problem would indeed be rights for the newly created sentience; would we treat it like Data from ST, or would we treat it like the people in the Matrix did?
i agree. a few months ago, i started this thread on the subject, and i must say, i largely agree with your take on it. searle claims to deal with each objection, but his attempt to deal with the proposed systems hypothesis fails, and his thought experiment, which he claims is a refutation of the validity of the turing test, presumes an understanding of the inner workings of the artifact, which is not what the turing test deals with. also, the strength of his argument lies in our intuition that one can not make a “mind” out of strings and cans, or whatever “ridiculous” example he used. there is no reason, in fact, to assume that such a mind is so ridiculously conceived.
for someone who claims little knowledge of the area, you bring up a very interesting point. i posted something like this in a similar thread recently, but people responded to a different aspect of the post.
suppose we want to replace you (your brain, at least), with an artifact. an artificial brain, as it were. i work in orthopaedics, and for all the hip replacements i see, none of the patients before the surgery are different people after it. suppose we replaced their brains. further, assume we had a mechanical neuron of sorts, and for each cell in the brain, we had a mechanical replacement that mimicked the cell’s functions exactly. then, we replace the brain cell by cell. lastly, we insist as in many neurosurgeries that the patient remain conscious during the surgery. surely when we replace the first cell, it is still the same person. however, at the end, we can’t help but feel the person with a mechanical brain is not the same person who started the surgery.
at what point, then, do they become this new and different person? if they are the same person, suppose we have a warehouse sized computer that mimics the entire brain function, and we hook that up in place of all the brain’s connections. is the person with the warehouse brain a different person? now, suppose we take the brain that we replaced and put it in another body or a robot. do we now have two of the same person? is it one person with two consciousnesses? are your answers any different if you think of yourself in the situation presented?
is consciousness as continuous as it seems to be, and whence comes this idea of self?
Well, that is fairly easy to answer: if you have two copies of a single mentality that are not connected, you have two people.
After the act of copying, these two people would have a different set of experiences;
as time passes, they will change; if they meet twenty years later they might hardly know each other.
But if they are connected - by some sort of bluetooth wirefree technology telepathy, for example -
there is the possibility that they may remain the same person; a person with twice the thinking capacity compared to before.
Here is another dilemma, familiar to most fantasists that imagine AI; the human mind is limited to a couple of kilos of jelly, processing information at the speed of neural impulses;
once AI are built, they need not be limited to human speed or size, but could be extended until they are as big as Jupiter or are built into a shell surrounding the sun like a Matrioshka doll… what relationship could the human builders have with such entities?
Ok, let’s take your idea of the mechanical brain. Let’s say we have a patient, and he wants to get rid of all the troublesome organic neurons, and replace them with indestructible synthetic ones that act in the exact same manner but are not so prone to damage or death. So some ‘neural cartographer’ of sorts finds every brain cell in his head, and finds out exactly what each one does, and exactly how each one interacts with all the other ones. For each original cell, he creates one synthetic one that is identical in function and in how it relates to the rest of the brain. So now you begin surgery, and slowly replace each individual cell with its synthetic counterpart. Assuming this is done correctly, the person when he wakes up will still be exactly the same. His memories will be intact, his personality will not have changed, he will act exactly the same as he did before the surgery. Now why would you say that he is a different person? All that’s changed is what he is made of; the surgery he underwent was no different than if he simply had a hip replacement or something of the sort.
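A minimal sketch of that cell-by-cell swap, under the thought experiment’s own assumption that each synthetic part copies the original’s input/output behaviour exactly (the classes, weights, and numbers here are invented for illustration):

```python
# Sketch of cell-by-cell replacement under the assumption of exact
# functional equivalence. Everything here is a toy stand-in.

class OrganicNeuron:
    def __init__(self, weight):
        self.weight = weight

    def fire(self, signal):
        return signal * self.weight

class SyntheticNeuron:
    """Behaves identically to the organic cell it replaces."""
    def __init__(self, original):
        self.weight = original.weight   # copy the cell's function exactly

    def fire(self, signal):
        return signal * self.weight

brain = [OrganicNeuron(w) for w in (0.5, 1.2, 0.9)]
before = [cell.fire(1.0) for cell in brain]

# replace the "brain" one cell at a time
for i, cell in enumerate(brain):
    brain[i] = SyntheticNeuron(cell)

after = [cell.fire(1.0) for cell in brain]
assert before == after  # outwardly nothing has changed; whether anything else has is the open question
```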
If, in the end, the new organism, the new robot, or whatever, acts exactly as the initial person, then for all extents and purposes, shouldn’t we treat the new instantiation of that person’s consciousness as we would have treated that person’s original form? Can you give any reason why we wouldn’t?
Well, assuming that all consciousness exists solely in the brain, then you might be right. But there are other schools of thought which suppose that body and mind are separate, dualistic theories like Descartes’. Also, the idea of a ‘soul’, if you clone a brain, does the new one have a soul or a spirit? Not necessarily something that you may believe, but it’s something worth thinking about.