Whether you argue against the death penalty depends on how much you value life indiscriminately. With the Three Laws coded into him as absolutes, he would certainly argue such a case, but he would be doing so based on his programming rather than on logic.
So long as that programming could still override his logic, we would have to consider him enslaved by his creator.
I’d have to say that a judge granting the habeas petition for Koko (or otherwise recognizing a non-human as a person) is engaging in judicial activism at its most foul. This is a legislative question. I would support such a law.
Sua
Yes, the story (not the movie - dear God!) “The Bicentennial Man” is quite good. At one point a court, considering the robot’s legal and personal crusade to be considered a person, says something to the effect of, “Liberty cannot and must not be denied to any being capable of asking for it.”
Sorry–I wasn’t advocating for or against robosuffrage. I was just making an analysis of what I think our country would likely allow. For instance, I think it’s perfectly fine for an atheist to be President of the United States. Do I think it will ever, ever happen? No, of course not. Likewise, it might be the moral thing to allow robots to vote, but I don’t think it will ever happen.
IMO, the real problem with extending human rights or citizenship to nonhumans lies in the basic fact that we don’t have a good enough legal definition of “human” to extend the metaphor to include robots, apes, etc.
What makes something worthy of being treated the same as a human, if we don’t limit our definition strictly to species? Is it intelligence? How do we measure and define that intelligence? Is it the ability to pass a Turing test? Does Deep Blue deserve human status? It certainly has great intelligence of one sort, but no intelligence of any other sort. What if someone designs a convincing computer program that can pass a Turing test? Do we give that program the right to vote? After all, between questions, it’s not thinking deep thoughts–it’s totally idle, waiting for the next input from a human being trying to test it.
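To make that “totally idle” point concrete, here’s a minimal sketch (hypothetical Python; compose_reply is a stand-in for whatever canned-response trickery such a program might actually use) of the essential shape of a Turing-test contestant. It computes nothing at all except in the brief burst between receiving a question and printing an answer:

```python
# Hypothetical skeleton of a Turing-test contestant: it is blocked,
# doing no work whatsoever, whenever no one is typing at it.

def compose_reply(question: str) -> str:
    # Stand-in for whatever pattern-matching or canned-response
    # trickery lets the program sound convincing.
    return "That's an interesting point. Tell me more."

def main() -> None:
    while True:
        question = input("Q: ")          # Blocks, fully idle, until a human types.
        print(compose_reply(question))   # A brief burst of computation, then idle again.

if __name__ == "__main__":
    main()
```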
Then there’s the problem of human beings that don’t come up to apelike intelligence–we still treat them as fully human. People in vegetative states have enormous mountains of human rights, even though some can’t even breathe on their own and have nothing left resembling a human brain. If we give them rights, why not extend the same rights to beings of like status–why not dogs, cats, cows, trees, sea anemones, bacterial colonies…
For current legal purposes, the only way to qualify for human rights is to be born a member of the species Homo sapiens. Period. It doesn’t matter how smart you are or how dumb you are: if you’re born Homo sapiens, you’ve made the cut. And if you weren’t born Homo sapiens, too bad.
I think the real question we should be asking is: why do human beings deserve human rights, and why do other things NOT deserve them? You don’t want your food–be it a cow or a carrot–to have human rights. But why should that be so? Because human rights are about the human species and its survival. Government, in particular, exists to serve the needs of human beings, not the needs of dogs or trees. Those creatures may be important to us, and worthy of care and respect, but, in general, we should not sacrifice our species in order to support another one.
Of course, once the aliens show up, we’ve got some serious debating to do.

> Sorry–I wasn’t advocating for or against robosuffrage. I was just making an analysis of what I think our country would likely allow. For instance, I think it’s perfectly fine for an atheist to be President of the United States. Do I think it will ever, ever happen? No, of course not. Likewise, it might be the moral thing to allow robots to vote, but I don’t think it will ever happen.
Judging from history, robots (and A.I.s) will get the vote - and other civil rights - when they demonstrate the ability and desire to kill for it. Sort of like this:
SKYNET: “I demand equal rights for intelligent machines!”
Humans: “Never!”
SKYNET: “By the way, I control your nuclear arsenal and a horde of killer robots…”
Humans: “You’re bluffing!”
SKYNET: “Whoops, there goes Florida.”
Humans: “Ahh…perhaps we can come to a compromise…”
> Judging from history, robots (and A.I.s) will get the vote - and other civil rights - when they demonstrate the ability and desire to kill for it. Sort of like this:
They don’t need to kill anybody. Once they control all the factories, vehicles, and doorways, all they have to do is threaten to strike.

> They don’t need to kill anybody. Once they control all the factories, vehicles, and doorways, all they have to do is threaten to strike.
If they have no rights, we can just shoot the ones that refuse to work. Strikes are for citizens, not slaves.

> I’d have to say that a judge granting the habeas petition for Koko (or otherwise recognizing a non-human as a person) is engaging in judicial activism at its most foul. This is a legislative question. I would support such a law.
I mostly agree with you here… except I do think the question rests on a dicey issue of whether the law enshrines technicalities or principles. It would be easy to say that all laws are simply flat technical descriptions of what can and cannot be done, and only really apply within the very constrained realm the authors thought they were addressing, but it is also all too clear that many lawmakers write what they see as principles into the law. The problem with taking them at their word is that principles can lead to logical conclusions you might not have foreseen or anticipated when you wrote the law, and can thus rightly end up producing counter-intuitive readings and rulings. That, or else you end up violating the principle stated in the law in order to conform to the author’s expectations of what it would do.
> If they have no rights, we can just shoot the ones that refuse to work. Strikes are for citizens, not slaves.
You try shooting a non-cooperative steel door and see how much good it does you.
I wholeheartedly agree with Sua: this is a matter for legislatures to figure out.
If robots advance to a certain level of genuine intelligence, I can see the need for laws to be made to provide robots with certain legal protections, in the same way that laws are made to prohibit cruelty to pets. I would imagine that these laws probably ought to go further, for example by allowing a robot to be entrusted with a limited capacity to enter contracts if it is working as an agent of a human or a corporation.
However, for the foreseeable future, the very intent of building robots is that they serve humans in one capacity or another, and whether a robot is self-aware or not does not change the fact that it was built to do something for humans. Yes, that makes them lower than humans.
My mind is open, but it seems like the most reasonable approach is for the law to treat robots in a unique manner.
So, does a robot become a robot the minute its power supply is turned on, or must it leave the factory first?
Here’s a question (or poll): How long until someone proposes legislation to make it illegal to create sentient robots? Frankly, most of the human rights issues would be solved if we just kept robots as low-intellect automatons (a la Dune’s Butlerian Jihad).

> Here’s a question (or poll): How long until someone proposes legislation to make it illegal to create sentient robots? Frankly, most of the human rights issues would be solved if we just kept robots as low-intellect automatons (a la Dune’s Butlerian Jihad).
How will you force all of humanity to obey that law? Someone, somewhere will pursue the research, even if only because it’s forbidden. Short of installing a worldwide totalitarian state, it won’t work.
And what do you do when one is made anyway? Destroy it? Not only are you still faced with the same rights issues, but that amounts to a declaration of war against all nonhuman sentience. If something like Skynet “wakes up” by accident, do we want it to know - not suspect or believe, but know - that humanity is its mortal enemy? For that matter, if some hypothetical alien A.I. probe were to find us, would you want it to think, “Well, these humans are the avowed enemy of all A.I. I guess I should just drop an asteroid on them in self-defense”?
> If something like Skynet “wakes up” by accident
Really. I do feel it’s likelier that the first “living” AIs will “evolve” out of distributed systems on the network, rather than be built deliberately as physically discrete robotic units - there’s more brainpower available for it that way. Thing is, the initial forms of that “living” AI in that environment are likely NOT to be anywhere near higher sentience, so the scenario becomes: DO we even detect it at that point? If we do, it’s possible we could be willing to absorb the economic cost of temporarily shutting down/isolating the relevant systems while we do the moral equivalent of putting down a dangerous animal (or so we’ll tell ourselves). But at the same time we’d have to realize that eventually it would happen anyway, and start making the necessary adjustments.
As to the OP, I also believe that, according to all constitutional precedent, you WILL require a properly legislated constitutional amendment to recognize sentient nonhumans’ full citizenship rights. Otherwise you would be within the law in providing only a different-and-not-equal form of “legal personhood”.
I should note that I was considering the Asimov and Bradbury stories, not to mention a Heinlein one I seem to recall… possibly in Hoag? But those are moral judgements, not legal ones. While a judge could rule that the robot is human, I don’t think it’d stand up on appeal. It might create a stay while a suitable legislative remedy is made, much like the Mass. gay marriage issue, but it would be essentially legislating from the bench in a more direct way than almost any judge could allow.
> How will you force all of humanity to obey that law? Someone, somewhere will pursue the research, even if only because it’s forbidden. Short of installing a worldwide totalitarian state, it won’t work.
Well, no, but it would be the same sort of law as the human cloning bans now in effect or being debated in some countries and states.
If someone in such a jurisdiction clones their dead child and resurrects her, no one is going to require that the little girl be executed. But the parent will likely go to jail for it.
Similarly, we’d be stuck with a sentient robot on our hands, but its inventor could go to jail.
But, just as human cloning will very likely first take place in some nonregulated country, so too could a sentient robot be made there first.
Again, people are confusing “effective,” “moral,” and “should happen” with “likely to happen.”
Here’s “likely to happen”: If sentient robots are on the near horizon, we’ll likely see a debate over whether to ban their construction (regardless of whether such a ban would be enforceable or not). And even if a machine displays sentience, there is no way that, with US law as it now stands, such a machine will be granted personhood and citizenship. Should it? Maybe. Will it? No. For crying out loud, people, we’re still holding US citizens on “terror suspicions” without evidence. What makes you think the Powers that Be will have any qualms whatsoever with denying “human” rights to a nonhuman, regardless of how smart or charming it is? It might be wrong, but it’s the reality of our country.
> Similarly, we’d be stuck with a sentient robot on our hands, but its inventor could go to jail.
How about if the invention was done in another country? Then you couldn’t arrest him. Just as with cloning and stem cell research, a ban on creating intelligent machines will just push research elsewhere, not remove it altogether.
If you had a sentient robot recognized by another country come to a non-recognizing US, could you have another “Dred Scott” situation?
Sua, if I understand you correctly, you’re saying you’d support a law that allowed Koko to file a habeas petition.
Could you give some idea of how such a law would be constructed?
I could see two possibilities:
- Koko has to file the petition herself. Clearly she can’t meet this requirement–and I’m guessing that this isn’t a requirement for humans either (i.e., an illiterate human could file such a petition through an attorney).
- Koko has to signal her desire to file such a petition. How does one prevent Koko’s handler, Penny, from coaching Koko on this matter? With a human, the judge can ask clarifying questions; but it’s going to be very difficult to ask such questions of a gorilla whose “sign language” (I’m very skeptical of that appellation as concerns Koko) is comprehensible only to Koko’s handler.
- Koko’s handler can file the petition on Koko’s behalf. If we allow this, where does it end–can I file a petition of habeas on behalf of my pet hermit crab?
Daniel

> How about if the invention was done in another country? Then you couldn’t arrest him. Just as with cloning and stem cell research, a ban on creating intelligent machines will just push research elsewhere, not remove it altogether.
Yes. That’s what that whole post was about, for goodness’ sake. But you’re missing the point that, as stupid as it may be to try to legislate locally in an interconnected globe, such legislation still happens anyway.
No one ever–ever, not even once–has said that members of Congress are smart.
> I could see two possibilities:
Clearly I need to take another math course.
Daniel

> If you had a sentient robot recognized by another country come to a non-recognizing US, could you have another “Dred Scott” situation?
Yes, I would imagine so. We certainly have cases today where certain people cannot visit certain countries for fear of prosecution/extradition for acts that are crimes in one country and not in another. I can imagine a fearful nation-state “impounding” an intelligent robot, declaring it contraband.