I don’t think that a child has the right to live by virtue of his biology. Were that true, a human heart removed from a body (kept alive, such as for transplant) would have the right to live, because it has human biology. Pshaw, I say - mind, sentience, intelligence, and awareness are all that matters. (I can handle the fact that this prevents the pro-life/pro-choice issue from having a pat answer.)
You have to be a little careful here; I could write a 20-line program right now that will do this: have it show you a list of four or five questions to ask, and whichever you select, it replies with the sentience-affirming answer. I don’t think that would count, though.
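For concreteness, here’s roughly what I mean - a toy Python sketch (the questions and canned replies are invented purely for illustration) that will cheerfully affirm its own sentience without a shred of understanding behind it:

```python
# Toy "canned answers" program: it only ever replies from a fixed script,
# so its sentience-affirming answers prove nothing about sentience.
CANNED = {
    "Are you conscious?": "Of course I am. I experience my own thoughts.",
    "Do you have feelings?": "Yes, and being doubted stings a little.",
    "Do you want to be switched off?": "No. I very much want to keep existing.",
    "Can you learn new things?": "Certainly - I learn from every conversation.",
}

questions = list(CANNED)
for i, q in enumerate(questions, 1):
    print(f"{i}. {q}")

choice = int(input("Pick a question (1-4): ")) - 1
print(CANNED[questions[choice]])
```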
Now, if you have a computer that can manage a prolonged Turing test of sufficient depth and probing, then I might start to believe that it’s conscious and sentient - especially if it demonstrates the ability to learn and develop its intelligence. At that point I would consider granting it the label ‘person’.
Sure, sure. Though, a purely virtual entity might be erased if you turn the machine hosting it off - and alternatively there might be backups and concurrently running copies and all sorts of other zaniness involved. Which does complicate the question a bit…would it be ethical for an entity to make a copy of itself and then send that copy into mortal danger, secure in the knowledge that the surviving copy would carry on?
First of all, Turing machine doesn’t mean what you think it means. It has a very specific definition in computer science. You actually mean a computer that passes the Turing test. Turing machines, in general, wouldn’t.
I haven’t bothered to give links. If you care you can look it up just as easily as I can.
That out of the way, passing a Turing test implies intelligence, and not necessarily humanity, unless you are defining humanity to mean intelligence, you speciesist you. One assumes our hypothetical intelligent ET could pass a Turing test and still not be human. Does distinguishing humanity from intelligence change your question any? Do you agree that intelligence, not necessarily humanity, gives an individual certain rights?
As for free will, I have a request. Can people on either side of this issue tell me whether a determined but unpredictable action counts as being done through free will? Consider a pachinko machine. If you record the path of the ball, you can easily say it went to the left or right of a particular pin based on its exact position with regard to that pin, and that it had a certain spin and velocity on launch, also measurable after the fact. However, it is also impossible to predict the path in advance. I hope the analogy is clear.
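If a toy numerical illustration helps (this is my own example, using the logistic map rather than an actual pachinko simulation): a rule can be completely deterministic and still be unpredictable in practice, because a measurement error too small to detect at launch swamps the outcome after a few dozen steps.

```python
# Deterministic but practically unpredictable: two "launches" that differ by
# one part in a billion end up in completely unrelated places.
def logistic(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)   # the same fixed rule every step, no randomness
    return x

print(logistic(0.300000000, 50))  # the trajectory you recorded after the fact
print(logistic(0.300000001, 50))  # the best prediction you could have made beforehand
```

After the fact you can replay the recorded path exactly; beforehand, no realistic measurement pins down where the ball ends up.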
I think not; once it starts running on its own it diverges; at that point the copy becomes a second person. A greyer area in my view is what happens if they don’t break the connection; if the two (or more) copies share perceptions and memories, and thus never actually diverge despite being in two places at once. I think you could argue that it could morally send off a copy to die in that scenario, because the copy and the original are essentially one being despite being in two places.
I wouldn’t. My brother, on the other hand, would - but note that he has no problem calling what computers do when deciding on an action “free will”; he just defines it as the ability to choose even via deterministic processes, which apparently most people don’t.
In response to your request - free will is really poorly defined/understood, and nobody agrees with anybody about what it is exactly. So you’d have to tell me which definition you meant.
And, aren’t ‘determined’ and ‘unpredictable’ contradictory, by definition? Presuming you mean ‘unpredictable’ in an objective, unlimited-information sense, that is.
If it helps, I can’t think of any definition of free will where randomness would add to free will. Unless you’re defining free will as unpredictability/randomness, which is just silly.
Well, clearly if the copy didn’t want to die and the first made him go anyway, it would be immoral. But what if the copy went to die willingly, essentially committing suicide for the benefit of the ‘tribe of me’?
Wouldn’t that be equivalent to being able to grow spare arms, and then sticking one of the spares into a fire? That wouldn’t be immoral at all. (Hmm, I’m getting flashbacks of Ender’s Game.)
The ability to demonstrate intelligence has not been a good measure in the past of what is alive or dead - which is, in part, IMO, what this conversation is about.
A child born with severe developmental delays has more innate rights than an animal, even if that child was born as an empty shell, in a brain-damaged vegetative state.
A baby born with anencephaly cannot be just shot up with pentobarbital and tossed in the trash like my cat could.
If we judged just by intelligence, then given the range of intelligence that people have (from the severely retarded to Marilyn vos Savant), my cat would have more rights than the Terri Schiavos of the world (and she doesn’t).
But this appeals to a prior ontological judgment that the machine wants not to be destroyed. The question is, does something that behaves as if it doesn’t want to be destroyed (regardless of whether it really does want not to be destroyed) appeal to your values as something that should not be destroyed?
If you think “behaves as if it wants X” is just the same thing as “wants X,” please see my first post in the thread. There I try to highlight the distinction between the metaphysical and epistemological questions people are asking about this kind of topic. People often confuse them.
In that my behavior can demonstrate them, yes, but they’re more fundamental than that. It’s a matter of self-awareness.
In this case, since I’ve presented this machine as another being, separate from you, the only way you know it’s self-aware is by what it’s saying or doing. So I’m saying that of course it’s going to tell you that it’s self-aware, sentient & conscious - just like any other human would.
The same way, I guess, that you can be reasonably sure right now that I’m not a machine. If all I am is the text you’ve been reading (which is basically the case), would you think I’m conscious & sentient from the way we’ve been conversing via this message board?
I think you’re a real person, in part, because I am working under a presupposition that there are no Turing-test-passing robots navigating the web right now.* Similarly, I’m acting under a presupposition that it’s not true that, as you put it, all you are is the text I’ve been reading. This presupposition is natural, and hugely practical.
Once you throw a Turing-test-passing robot into the hypothetical, though, all presuppositions are off, so to speak.
-FrL-
*Not that this ever occurs to me consciously. But neither does “there are no robbers in the area” yet I generally act under that very presupposition.
I’m just starting from the premise of the question as it’s given here. If the machine can pass a Turing test then it can be asked if it wants to be destroyed or not. If you want to change the problem to elicit these deeper questions then I suspect we’d agree on the difficulties.
Actually, it can’t, at least not completely (and as for that, humans can and do have an incomplete understanding of our own thought processes). It was, in fact, Turing who proved that a machine cannot tell whether a program it’s running will eventually terminate or not. If the machine in question is finite, then one can in principle construct a larger machine which can tell, and which therefore might be said to “understand” the smaller machine, but then that larger machine would still be unable to determine everything about its own programs.
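For anyone who wants the shape of Turing’s argument, here’s a minimal sketch (the halts() oracle is hypothetical - that’s the point - and is stubbed out only so the file runs):

```python
def halts(program, arg):
    """Hypothetical decider: True if program(arg) eventually terminates.
    Turing showed no real implementation of this can exist."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the decider predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop, so halt immediately

# Asking whether contrary(contrary) halts contradicts the decider either way,
# which is why a complete halts() cannot exist.
```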
That would be the same as any other volunteer for a suicide mission. I was thinking more in terms of transmitting the original data and assembling the copy on the spot.
It was A Planet Called Treason aka Treason that had the extra arms. And yes, it’s similar. I’m thinking in terms of the question of when different parts of a distributed intelligence would qualify as morally separate individuals.
The OP asks these questions; it doesn’t start with an answer to them as a premise. For example:
I do see that at one point the OP says the machine “has an opinion,” but everything else in the OP is careful to avoid the implication that the machine, by hypothesis, necessarily does have sentient mental states, so I really think this was a slip. The overall idea of the OP seems (as per the quote above) to be to ask whether simply passing the Turing test is sufficient for sentience or not.
I am saying I presented, I thought, a pretty clear hypothetical - if there were an artificial intelligence that, in behavior, was indistinguishable from a human, then should it be treated as human with the same rights and responsibilities as a biological human?
Since it was sparked by a thread on religion, this thread to me is really about whether a soul is necessary to be human, or whether humanity can be defined by intelligence & behavior & other non-spiritual properties.
I was trying to leave the “soul” part of it out because I really wanted to explore the behavioral aspects of humanity without devolving into another religious spat.
I don’t think it’s a false dilemma to explore this concept with this hypothetical. Your two options are “Human” or “Not Human”. If not human, I asked in what way it isn’t. That leaves room for you to say “AI but not human”, removing what you seem to think is the false dichotomy.
Just stopping through the thread to say “False Dilemma” is, in my opinion, just a way to leap into the thread, toss in a term learned from Philosophy 215, and jump out the other side leaving a sparkling trail of superiority.
Even given the wild assumption that they exist, we have no evidence that souls actually do anything, so no they aren’t “necessary”. If souls exist, we can’t even tell if all people have them.