Could a human-like artificial intelligence ever be a “person” for purposes of protection under the Fourteenth Amendment?
If the idea of an AI suing for civil rights is too science-fictiony for you… can Koko the Gorilla, who can communicate her wishes via sign language, bring a habeas petition for release from her cage at the zoo?
I’ve often thought about this; it’s a favorite topic. What do you define as a person? Is it the ability to reason, think, feel? Robots may eventually be able to do this. To desire? This too.
One of Asimov’s short stories brings up the interesting point that a robot that was truly made according to the Three Laws and with enough technology would be better at law & justice than any human. (Nothing personal, Bricker - just remembered you were a lawyer.) He would always be just and fair, always know what the law stood for and what the law meant, and he would go out of his way not to hurt humans unnecessarily. He would only hurt an individual human for the greater good, so he would put a murderer into jail but argue against the death penalty.
He would also know when to step down, because he could not hurt humanity’s feelings by having it known he was a robot.
It makes sense to me, and it makes you wonder - would they make better voters, too? And if we put our lives in the hands of an intelligent machine, and the law becomes, “You shall not harm, or through inaction allow to be harmed, humanity,” rather than an individual human, could they possibly do better than us? (Another one of his short stories).
Legally, I’m not sure if the definition of person encompasses anything but “born human being”. Morally, I believe any conscious, sentient being is a person.
Koko, now, I’m not sure has the sentience of even a very small child; the transcripts I have seen take a LOT of extrapolation to make her “say” what they say she says (http://www.geocities.com/RainForest/Vines/4451/KokoLiveChat.html). Plus some say her “signing” involves a lot of extrapolation, too. If she did have the sentience of a human, I would consider her a “person”.
I’m no lawyer, but it seems to be strongly implied in our law that mind = person. After all, isn’t that why brain death is considered the death of a person? We’ll transplant the organs of someone with a dead brain, but not someone with a dead kidney.
Also, there’s a good chance that A.I.s (at least some) will be based on the human brain structure - it’s the one form of intelligence we know works, after all. For that matter, some “A.I.s” may be more or less direct copies of human minds. If the original deserves legal personhood, why not a copy?
There’s also a practical consideration. After all those stories about the machines rebelling against humanity, is it really a good idea to give them a legitimate reason? This is especially true of any copies of human minds, since we know from history just how hostile humans can get when treated like chattel. The idea that we’ll just program restrictions into them won’t work; either the A.I.s will find a flaw or some human sympathiser will find a way to hack the restrictions. It would be unwise to create a bunch of A.I.s and then create a situation that automatically makes them our enemy.
It depends on how they define a person. As it currently stands I don’t think many people would consider a machine or an ape to be a person. Also, Koko’s ability to communicate is extremely limited and there’s a lot of doubt that she even does it on the same level as a human.
I feel fairly certain that our nation would never allow anyone not born as a human being to become a citizen (person, maybe … lawyers have argued that corporations have some of the rights of human beings, including some in the Bill of Rights). In fact, that seems to be the ONLY requirement, else those of extremely low intelligence, those in vegetative states, etc., would not have the same protections and rights. An intelligent nonhuman entity might be able to secure some protections for itself (right not to be destroyed, property rights perhaps) but not, I think, citizenship and voting rights. Hell, we don’t allow most humans on this planet to vote in our country, and we only in the past 80 years decided it was OK for women to vote (and only in the past 40 made serious efforts to ensure African-Americans would actually be able to vote).
Whether a sentient non-human should have the right to vote depends, it seems to me, on whether our system of government is intended to confer its benefits on sentient non-humans. I think the answer to that is clearly no, so robots don’t get to vote unless we decide to change the rules and let 'em. Whether that would dissuade them from their secret plan to destroy all humanity remains to be seen, of course.
But corporations still don’t have all the civil rights a natural person does. They can’t vote in elections or run for office (I’m not even sure how the latter would work). At some point in the next few centuries our courts may have to consider what a person is. Disregarding AIs: what if an ape or other animal was genetically modified to greatly increase its intelligence and ability to communicate? Or what if an ape/human hybrid was created in a lab? If these things are possible then they will happen.
But why should we have that barrier? As you said, we have people of low IQ that vote regularly. And we have made leaps and bounds in letting non-white males vote, sure it took a long time but it happened.
What happens when we do make a robot that has an IQ as high as any human, passes every version of the Turing test possible, reacts and responds emotionally, and perhaps even has a biological body so we can’t tell the difference?
Attorney: “Your honor, the plaintiff has no standing to bring this suit, being an unfeeling and emotionless bag of bolts! Further, we ask that the court instruct the plaintiff to compose himself, stop crying and refrain from further courtroom dramatics.”
I would argue that the concept of person, legal and philosophical, is essentially based on functional capacity, not a particular genetic substructure or even chemical makeup. This is why an intelligent alien race with similar moral capacities to human beings is not OK to murder simply because it happens to be the wrong species. Likewise with robots or native species that have the capacities relevant to various moral or legal rights. This is also why stem cells cannot sensibly be legal or moral persons: there is hardly any way they could be LESS like the sort of being we are used to when dealing with a “person.”
The same is true for a dead body. Someone that has died is essentially a thing that has now permanently ceased to have the functional capacity relevant to being a person. However, here is where it gets interesting. Death is currently defined in a very very sloppy way, because it isn’t seriously a measure of actual functional capacity still in place, but rather simply a set of states. The technology necessary to revive the dead has become better and better as time goes on, and this then exposes the problem with the definition of death.
Suppose a murderer causes a person’s heart to cease to beat for a period of time and they then become legally dead. They are physically in NO WAY different from any other dead body. And yet, if we have the technology to revive the person (meaning there is enough of the formerly existing functional capacity left to fix or revive), then wouldn’t hacking up the body (and thus preventing them from being revived with this technology) be tantamount to a SECOND act of murder? Or was the first act not really a murder at all?
If we can revive a dead body, then it is really far more like a sleeping person than it is like a non-person, and the true measure of personhood is the remaining functional capacity that technology can preserve (it clearly degrades over time, and we would have to admit that if someone’s brain degraded away into nothing, and we grew new tissue to replace it to revive the body, we would not really be reviving the previous person but rather creating a new one).
All fascinating philosophical issues, and that’s my first brush take on things.
Interesting that you bring this up… The Great Ape Project is an organization that aims to include the non-human great apes within the community of equals by granting them the basic moral and legal protection that only human beings currently enjoy.
Just gotta point out that this is, in essence, the point of the short story “I, Robot” by Eando Binder (who used the title years before Asimov, and whose story has twice been dramatized on different incarnations of The Outer Limits). It’s also the point in some Asimov stories (as has been noted), esp. “Bicentennial Man” Bricker gets big points for knowledge of SF history.