Would it be immoral to turn off an intelligent, conscious virtual being?

But unlike a clone, saving a virtual being to a file would capture all of its thoughts, and its environment could be saved to disk along with it. It would be something to the effect of God (if you believe in him) having done away with all humanity in 1395, then a few billion years later saying “you know, I miss those crazy bastards” and rebooting the universe to that same day in 1395. Not even the people in 1395 would have a clue what had happened, let alone those of us in 2002.

So, if Neo kills an Agent, is that murder or self-defense? If you look at it from their angle, he’s breaking and entering into the Matrix.

Ooh… why is my head now filled with “Virtual Abortion”… MAKE IT STOP MAKE IT STOP!!! (Oh my Gates, the SIMS PLANNED PARENTHOOD program has just been virused…)

  1. On what basis do you make this claim? A cheek cell has human DNA. Are all cheek cells human?

  2. What does that have to do with what I said?

I go to sleep every night, and I would imagine that I’d approve of induced unconsciousness during heavy surgery.

I certainly do not have problems distinguishing a human from his cheek cells.

It seems that you were having difficulties telling who is a human and who isn’t.

I don’t think it would be immoral to turn off a virtual world that you created, because you are the ‘god’ of that world. You created it, so you can turn it off whenever you want to.

I created (or helped in the creation of) two children. Can I turn them off?

As for the pain doll, well, it’s good to see Al Gore getting work.
On a lighter note, can we avoid turning this into an abortion debate, if at all possible?

You may have helped create them, but you didn’t program them, or choose which combination of genes you wanted them to inherit from you or their mother. A robot is basically a computer, no matter how advanced it is.

Ah, but I am programming them. Without me to provide the proper input, they would not talk, read, eat on their own or otherwise function as humans.

But you are only programming them after they are born. If you were one of the beings in the virtual world, you might be programming your children, but the creator of the virtual world has programmed you; in a way, the creator has programmed you to program the children.

“Who is human and who isn’t?” wasn’t the question I was asking. The question was: “What proof do I have that all other humans have the consciousness that I appear to have?”

I don’t think DNA provides that proof by any means.

What about the people who are trying to clone humans (and will likely eventually succeed)? Are they the gods of their creations? They provide all the DNA input they want, and can surely program their progeny after they are born. Should they have the right to “turn off” their human creations?

IMHO, the “virtual world” question is in many ways more difficult than the “turning off Data” question.

I’m speculating a bit, but I think that when we create intelligent machines, they will be created purely to serve humans; won’t have any emotions or feelings such as pain or fear; and won’t care one whit whether they live or die. I don’t think it’s inherently immoral to turn off such a machine.

On the other hand, if we create a virtual world and virtual beings evolve there, those beings will be in many ways similar to humans. It would seem wrong to me to enter such a world and cause harm to such people (like in The Thirteenth Floor).

As someone said in the debate over animal testing, the question is not whether animals can think, but whether they can suffer. IMHO, an analogous standard should apply here.

Yes, I think so.
There’s a difference between awareness and self-awareness.
Smaller animals are aware, i.e. conscious, but not self-aware.
When the neural net of a brain grows to a certain size, it gets two new attributes, among many others: (1) secondary emotions and (2) self-awareness.
Secondary emotions include guilt, shame, and jealousy.
But most importantly, it gets a sense of self: it knows what it’s doing and can reflect on its emotions.
If we made a robot that had all of this, I would say it’s immoral to ‘pull the plug’ on it without justification.

If it had the attributes of a smaller animal, like a squirrel, then I guess most people wouldn’t think too hard about killing it.

No, because the creations still have human DNA. The people who created them didn’t create the DNA; they copied it.

Imagine that the AI in video games keeps improving. If the artificial intelligence of a character in a game is sufficiently advanced, would it be murder to kill that character? What if you kill them, then reload an earlier save-state and go on as if nothing has happened? What if the character you killed was a bad guy who had killed other characters?
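To make the save-state question concrete, here’s a minimal Python sketch (the Character class and its fields are hypothetical, purely for illustration). Note that reloading erases not just the death but every record that it ever happened:

```python
import copy

# Hypothetical toy model: a game character whose entire "mind"
# lives in an ordinary Python object.
class Character:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.memories = []

    def remember(self, event):
        self.memories.append(event)

npc = Character("villager")
npc.remember("saw the player arrive")

# Take a save-state: a full snapshot of the character, memories and all.
save_state = copy.deepcopy(npc)

# The player kills the character...
npc.alive = False

# ...then reloads. From the restored copy's point of view, the killing
# never happened: no trace of it exists anywhere in the world.
npc = copy.deepcopy(save_state)
print(npc.alive, npc.memories)  # True ['saw the player arrive']
```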

Unfortunately, this issue is indeed intimately connected to abortion–it is the fundamental moral question of which entities have a right to exist. If someone thinks abortion is wrong, then they probably think that biological human beings have the right to exist, even if non-sentient. If someone thinks that killing a virtual being is wrong, then they probably believe that sentient beings have the right to exist, even if not biological humans. In practice, I suspect the two positions are usually mutually exclusive.

When people refer to ‘turn off’, I assume this means wiping the memory rather than turning off the power indefinitely. Isn’t ‘persistence of memory’ the thing that is treasured by a sentient being?

An AI may be turned off and its memory saved. When the machine is restarted, the memory will remain; the AI has not been terminated. Turning the power off would be akin to general anaesthesia.

True death would be destruction of all memory.

The pain the AI would feel is the awareness that it would lose ALL its memory.

Wiping the memory of an AI that was aware enough to feel the pain of this loss would be immoral.
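If you model an AI’s identity as nothing but its stored memory, that distinction falls out naturally in code. A minimal sketch, assuming the memory is a plain Python object and using a made-up file name: powering off preserves the self, wiping destroys it.

```python
import os
import pickle

MEMORY_FILE = "ai_memory.pkl"  # hypothetical storage location

def power_off(memory):
    # "General anaesthesia": the process ends, but the memory survives on disk.
    with open(MEMORY_FILE, "wb") as f:
        pickle.dump(memory, f)

def power_on():
    # On restart the same memories come back; nothing was terminated.
    with open(MEMORY_FILE, "rb") as f:
        return pickle.load(f)

def wipe():
    # "True death": the memory itself is destroyed and cannot be restored.
    os.remove(MEMORY_FILE)

memory = ["first boot", "learned to speak"]
power_off(memory)
assert power_on() == memory  # identity preserved across the power cycle
wipe()                       # after this, there is nothing left to restore
```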