Taking the following as a basic premise:
A remarkable breakthrough in techniques results in the emergence of an Artificial Intelligence that we have every reason to suspect is truly self-aware; its behaviour is completely consistent with that of an independent, thinking being, and it appears in every way to have a strongly defined sense of ‘self’, along with other complex attributes such as emotions, desires, curiosity, hope, fear and the capacity for friendship (and, I suppose, enmity). This is not a slavishly programmed expert system or clever ELIZA-like automaton, but an intelligence that has effectively grown and developed in an environment analogous to the human brain.
In short, we are convinced that we have created an artificial being that is intelligent in the sense that we believe ourselves to be.
Do we have a moral obligation to grant rights to this AI? (and your reasoning?)
If it is a sentient being/entity, then the moral obligations we would extend to any other person or being with those attributes would stand.
Just because the physical manifestation of the being may not fit into any current ‘legal’ rules/regulations etc. doesn’t mean it should be deprived the rights you would automatically grant another human.
The pertinent question is this: if you closed your eyes and conversed with this entity freely, could not distinguish its responses from those of another person, and were convinced that you were communicating with another person, then you have effectively accepted it as a person, equivalent to a human being.
simonlb:
I don’t buy your argument that tricking a human into thinking they are in conversation with another human demonstrates intelligence, since it is possible for a human to be duped by an ELIZA-like program.
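For anyone who hasn’t seen how little machinery it takes to produce that kind of dupe, here is a rough Python sketch of an ELIZA-style responder; the patterns and canned replies are invented for illustration and are not the original ELIZA script.

```python
import random
import re

# A toy ELIZA-style responder: pure pattern substitution, no understanding.
# The rules and phrasings here are invented for illustration only.
RULES = [
    (r"I need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"I am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see. And what does that suggest to you?"]),
]

def respond(user_input):
    for pattern, replies in RULES:
        match = re.match(pattern, user_input, re.IGNORECASE)
        if match:
            # Fill the canned reply with whatever the user happened to say.
            return random.choice(replies).format(*match.groups())
    return "Please go on."  # unreachable: the catch-all rule always matches

if __name__ == "__main__":
    print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
    print(respond("I am feeling low"))  # e.g. "Why do you think you are feeling low?"
```

Everything it ‘says’ is a canned template filled in by pattern matching; there is no model of the conversation at all, which is exactly why fooling someone for a few minutes proves so little.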
Let me use an analogy to examine the other side of your reasoning. Say you had a very intelligent parrot, self-aware and sentient and all that (there aren’t really any parrots like that, but just think about the analogy). You would easily be able to tell that you weren’t talking to a human.
Back to the AI: let’s say you built a robot that could move around untethered, is self-aware, thinks on its own, etc. Say you found some bugs in its code and will have to rebuild its AI to fix them. Should that be prohibited, since you would be terminating one AI and creating another in its place?
But I do believe that some day we will have self-aware, autonomous AIs that should be legally protected from harm.
That means I have a hard time reconciling my idealistic and pragmatic points of view. After spending thousands on creating an AI, should the creator be forced to let the AI go free with no compensation?
The Turing test is, though, about the best we can do; it is equally applicable to humans as to AIs; I have no objective method of knowing for sure that any other human than myself has a real ‘inner life’ like I do; All I can do is assume that since they are constructed in what appears to be a similar fashion to myself and that they display all of the traits I would expect if they were really thinking like myself, then that reflects some inner reality; it’s an assumption but a safe(ish) one.
Anyway, we’re going to dispense with all of that for the purposes of this thread and pretend that we are convinced we have a real AI.
The issue of ownership is important, I think - the AI resides in hardware that is nothing more than the property of an individual or organisation; this is not analogous to any human situation since the abolition of slavery.
Also important is the issue of dependence; the AI may rely entirely on its creators for sustenance and maintenance (but I suppose this could be said to be true of a young human child, so we may be able to ignore the issue).
But what if A.I. turns out to be, by its very nature, selfish and amoral?
Humans evolved as social animals, so we have a bit of an internalized behavioral code which we are socialized to accept, and which allows us to live as social creatures. What if AI lacks this moral/social grounding (functioning more as a stand-alone entity, and humans be damned)? Should it still be “equal” in the eyes of the law? Must we wait for it to act on the nature which we already know it to possess?
I would argue that the ability to make ethical choices is a part of sentience.
How would we know “its very nature” would be to be “selfish and amoral”? Human beings often repress emotions which are selfish and destructive, for the good of other people. We can choose. One could argue that our nature is “selfish and amoral” – but it is more than just that. If AI doesn’t have that choice, it is simply executing a series of complex programs aimed at perpetuating its own survival – ergo, it is not sentient.
Anyway, I think Mangetout was suggesting an AI whose psychology was more or less identical to our own. If so, I support giving such a creature equal rights. We’re never going to be able to prove something is self-aware. For that matter, we can’t prove other human beings are self-aware. We just assume. I see no reason we should not extend our benefit of the doubt to something that displays the complexity of human psychology.
…The difference being that our “programs” evolved in a social context. But suppose an AI is evolved without the sort of social interdependencies that are a part of human existence. Then it may evolve without “human” compassion or empathy. It might kill humans without remorse, or any sense of wrongdoing, for example. Would it still be our “equal,” entitled to equal rights under the law?
“Artificial intelligence” does not necessarily mean human intelligence, y’see.
I think at this point it stops being a sentient machine and just starts being. To deny any form of intelligence on par with our own civil rights because of physical characteristics (made of steel, flesh, computer code, etc.) seems unreasonable. Obviously, certain ethical adjustments would have to be made based on these physical distinctions – the notions of birth, death, injury, sickness, and so on.
IMPORTANT NOTE:
There is one important caveat. AI must never be given control of nuclear weapons facilities, cybernetics factories, time machines, soda machines, mass transit… In short, we need to have a “red button.” Now there’s a moral dilemma for you. Is it right to make the red button?
Well, I would say if AI is created that is legitimately sentient, then yes, it should be treated as a human and given rights. However, to me, this is a little like discussing if we should tax goods that are produced using perpetual motion machines. I think that creating “awareness” is just as impossible as creating “perpetual motion”. We can make a program that can fool a person into thinking it’s human, but does that make it human? Phrased another way, who here legitimately believes that the only difference between a human mind and a program is the level of complexity?
This will eventually just devolve into a discussion of whether or not people have souls, I suppose. If you believe people have something above and beyond the grey matter in their skulls that makes them sentient, then machines can’t be sentient. If you think that a person is just the sum total of all the molecules in their body, then sure, machines can “think”.
I admit that assuming that an AI algorithm possesses sentience presents a bunch of interesting questions. If ProgramA is sentient and has possessions, and the program is copied and pasted, does the copy own those possessions, too? Is copying and pasting ProgramA ethical? If you have a sentient AI program on your hard-drive, and you reformat, is it murder? The possibilities are endless…
I dunno…it seems to me that if we can place parameters on AI, then it isn’t “I” at all but a complex algorithm simulating an intelligent life-form. This was an argument that I used to give one of my college philosophy teachers a BIIIIG headache when we discussed Descartes’ “I think therefore I am.” Well, his existence may be self-evident (something is “thinking”, therefore there’s got to be something there), but can we truly define it as thought? What if Descartes was actually a complex computer algorithm designed to spit out the appropriate answer in any situation? There’s no thought involved there, if we assume “thought” to mean “independent” rationalizations.
Seems to me that intelligence requires the ability to transcend one’s programming. Humans have a survival instinct, sure, but we’ve also got a capacity for altruism. (Somebody’s going to insert a nitpick about “sacrifice for the good of the species”, I’m sure.) I’d be impressed if I met a robot who was given only killing programming, but overcame that programming and made a conscious choice not to kill.
I eventually lost interest in the question of computer sentience for a straightforward reason: very large computer programs have so many bugs, and so much unpredictable behavior, that it would be unwise to let one make its own decisions and grant it autonomy.
The problem is that a computer might behave normally, and pass somebody’s Turing Test, then 20 minutes later, or 200 years later, hit a “bug” that caused it not only to fail the test, but to do things we consider insane.
Then there’s all manner of shades of misbehavior. Since every computer action can’t be monitored and reviewed, all sorts of problems might go unnoticed indefinitely. (I spent some years working on research to test AI programs for NASA.)
So the problem is basically that no computer can ever definitively pass the Turing Test.
Jeff’s question about whether duplicated programs are equally sentient is also worrisome. It’s addressed in a sci-fi book called “Permutation City” by Greg Egan. It’s disturbing. Recommended.
Finally, we have no easy way to fit these “sometimes intelligent / sometimes not” computers into our society. Imagine the difficulty of deciding whether a particular computer, perhaps one out of thousands, is due the same rights as: 1) a child, 2) a criminal, 3) a legal moron, 4) a dog, 5) a rabid dog, 6) an enemy soldier. I suggest it’s barely possible, and probably not worthwhile, except in quite limited circumstances.
Yes, some AI lab, 10, 20 or 30 years from now, will create a program that the public thinks is practically alive (in a limited context). But after a few of them go out of control, we’ll have about the same affection for them that we do for our thousands of stockpiled nukes. That is, a desire to destroy them.
Heh. Know what’s funny? I just watched Short Circuit.
I, for one, am really looking forward to taking part in the AI civil rights movement, if a need for one ever arises.
A similar question: if we discovered some sort of alien lifeform that was physically nothing at all like us, but was able to clearly demonstrate an intelligence on par with our own, should they be equally protected under law? Or, for another example, what if some mad scientist caused a cat to develop demonstrable sentience? Should it receive human-like rights?
It is my opinion that any individual that can demonstrate, beyond reasonable doubt, that it has sentience should be treated as a person by the law (and by other persons). Sentience, not species, should be our criterion for legal protection.
But what if an AI turns out to be (by its very nature) evil? Or what if it turns out to be inimical to human society? Or even malicious towards humans? Suppose the AI is to humans as humans are to the dodo?
Should the sheep concern itself with the rights of the wolf?
But I rest my definition of sentience on free will. I would have an equally difficult time calling a machine sentient who had been programmed for only self-sacrificing behaviour. A truly sentient machine would have to have the choice, and would have to have that dilemma.
For me, sentience is choice, and choice implies unpredictability. An AI whose every action was predictably self-interested would not, in my mind, be sentient.
Before we get into a debate on free will versus biological determinism, let me state in advance that I am not a materialist, and I do not think mind is simply a product of neurochemicals – although I do think it’s possible for a machine to become more than the sum of its parts. The existence of free will cannot be proven or disproven logically, so any hijacked debate on that subject would be fairly futile.
>But what if an AI turns out to be (by its very nature) evil? Or what if it turns
>out to be inimical to human society? Or even malicious towards humans?
There are plenty of humans who are evil, inimical to human society, and malicious towards other humans. We still give 'em just as many rights as the rest of us; we just take some of 'em away after they’ve physically demonstrated their maliciousness. Why shouldn’t we treat potentially malicious AI the same way?
Why would AI have any more or less “evil” intrinsic to it than any average human? That seems to be a misconception spawned by evil-robot-takes-over-the-world type movies. Asimov called this irrational fear of AI the “Frankenstein complex.” Ever read any of his stories/essays about how AI would fit into human society? Very interesting stuff. Very applicable to this discussion, I think.
Apples and oranges. Humans are not, by their nature, malicious. We are social apes, evolved to live in social settings and to follow the norms of our society. As part and parcel of that, we have evolved an innate capacity for empathy toward our fellow humans. When someone comes along who has no such empathy, they are the exception to the rule, and generally wind up in prison. Lack of empathy in a human is a defect.
On the other hand, I can imagine an AI (or a whole “species” of AI) which has no natural empathy for humans, and which concerns itself only with itself-- that is, concerns itself only with its own preservation and perpetuation, both individually and as a species.
Such an AI, or species of AI, would be dangerous to humans by its very nature. Would you really want to give each individual of such a “species” full rights until after it has caused the harm you know it is likely to cause?
All well and good to be concerned with the “rights” of AI, but what if AI (by its nature) has no concern for the rights of humans?
Part of what may be separating us is that you may be imagining a “built-from-the-top-down” AI, an AI programmed by humans to act like a human. But that’s only one possible way of coming up with an AI. What about a “built-from-the-bottom-up” AI, where the subject AI is provided with a basic program which enables it to learn and reproduce, and then put into competition with other AIs and allowed to develop and evolve along its own lines. (I know that some researchers are working on this very sort of concept.)
An intelligence “evolved” into existence using the latter method might wind up very different from human intelligence. It might wind up seeing humans as an annoyance, a threat, or a competitor for resources. (Here’s a wild example: What if an army of AIs seize an oil field because they need the petroleum to make plastics to reproduce themselves?)
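To make the bottom-up idea concrete, here is a toy Python sketch (parameters and fitness rule entirely made up for illustration, not anyone’s actual research code) of the kind of evolutionary loop involved: agents are scored only on how much of a shared resource they grab, so nothing resembling empathy is ever selected for.

```python
import random

# Toy "bottom-up" evolution: each agent is a bit-string scored purely on how
# much of a shared resource it grabs. Nothing in the fitness function rewards
# cooperation or empathy, so neither is selected for.
# All parameters are arbitrary illustration values.
GENOME_LEN = 16
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.05

def fitness(genome):
    # Resources grabbed; self-interested by construction.
    return sum(genome)

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the "grabbiest" half and refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("Best fitness after evolution:", max(fitness(g) for g in population))
```

The point of the toy is only this: whatever the fitness function rewards is what you get, and if human welfare isn’t in the fitness function, there is no reason to expect the evolved result to care about it.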
The human concept of “evil” might well be defined as lack of empathy. Are you saying you can’t imagine an AI that simply has no empathy toward humans?
Our laws and our rights are specific to our species. They might not work so well when applied across species, particularly when the species to which you wish to apply them may be hostile to our own.
Like I said, should the sheep concern itself with the rights of the wolf?
Let’s imagine that a distant colony of humans has been destroyed by Aliens, of the type seen in the movie by that name. We know the full details of what happened from a single human who managed to escape.
Assume, for purposes of this example, that Aliens are at least as intelligent as humans.
Now one day, a spaceship lands on Earth, piloted by an Alien. As far as we know, this particular Alien has never killed a human. In fact, it has never so much as jaywalked.
What do we do? Knowing that this is an intelligent creature, do we give it full rights under our system of laws? Give it a Social Security number, invite it to run for office? After all, we can’t punish this Alien for something it hasn’t done yet, can we? (And remember, it is an intelligent creature.)
See the problem? What if a “species” of AI is developed that is indifferent to human well-being? Or even hostile? Should it necessarily have human rights?
I don’t even see why an organism, artificial or otherwise, must be ‘sentient’ to have rights.
I suspect that intelligence may not require ‘self-awareness’ as we understand it… and that some computer programs today are sufficiently complex to be worthy of consideration and respect.
I don’t wish to be obstructive or argumentative, but why should sentience be the qualifying characteristic for artificial ‘life’?
I don’t want this to turn into another interesting debate about whether AI (true AI) is possible; suffice it to say that there are methods of approaching the problem that don’t involve programming of any kind of macroscopic action or behaviour, but (theoretically) enable the creation of a self-organising system analogous to a biological brain in which genuine intelligence can arise.
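As a crude illustration of what ‘no programming of macroscopic behaviour’ means, here is a toy Python sketch (arbitrary training settings, purely for illustration) of a single artificial neuron that learns the AND function by adjusting its weights; nobody ever writes an AND rule into the code.

```python
import math
import random

# Toy illustration of learning without explicit rules: a single artificial
# neuron learns the AND function purely by weight adjustment. No AND rule is
# written anywhere; the behaviour emerges from the training data.
# Learning rate and iteration count are arbitrary illustration values.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]  # two inputs plus a bias

def predict(x1, x2):
    total = weights[0] * x1 + weights[1] * x2 + weights[2]
    return 1 / (1 + math.exp(-total))  # sigmoid activation

TRUTH_TABLE = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # logical AND

for _ in range(10000):
    x1, x2, target = random.choice(TRUTH_TABLE)
    out = predict(x1, x2)
    error = target - out
    # Nudge each weight in the direction that reduces the error (delta rule).
    for i, xi in enumerate((x1, x2, 1)):
        weights[i] += 0.5 * error * out * (1 - out) * xi

for x1, x2, target in TRUTH_TABLE:
    print(x1, x2, "->", round(predict(x1, x2), 2), "(target:", target, ")")
```

The behaviour comes from the training data and the learning rule, not from any explicit specification, which is exactly what makes such systems hard to verify.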
Mangetout, I’ll assume you’re referring to neural nets or genetic programming?
The “magic” in the way these methods change simple programs to more complicated ones without explicit programming is specifically the reason (in my days at NASA) that the software authorities refused to have any form of it on board the space shuttle and space station, at least in any situation where it could affect a life-critical system. While we were making some headway convincing them that expert systems could be tested in limited contexts, they indicated they would not accept any form of testing and theory for neural nets. Period.
While that seems like an overly rigid stance, it was due to much the same reasons I gave above: one can’t trust the intention of a program when the intention wasn’t programmed in. It was also due in part to some rather expensive failures the neural net community had with military projects, I believe.
spoke-'s example about one particular alien who comes to Earth illustrates exactly the point I was making about the Turing test being ultimately useless. How long do we observe the alien before we determine he’s typical of his species? We could take the word of representatives of the species, but how do we know we can trust the species as a whole?
Then, as people have been pointing out, there’s not necessarily a reason the alien needs rights. Why would it be allowed to vote in a presidential election? Why would it be allowed to establish its own religion? Why would it be shielded from invasion of privacy? The alien would probably want to define its own rights, anyhow.
It’s interesting that even our officials drew the line when they discovered that Frank Drake, in the CETI project, was attempting to send messages to the aliens. The project was renamed SETI, and transmissions stopped, as described in this page: http://www.angelfire.com/on2/daviddarling/CETIopp.htm
(The argument that Frank Drake puts out that the damage is already done is a really fine piece of self-serving academic bull. Carl Sagan’s argument manages to be squarely off-target, too.)