Would it be immoral to turn off an intelligent, conscious virtual being?

This is the first part of a two-part question. Once I get enough responses to this, I will post the second part in a new thread (each idea technically merits its own thread).

Assume that we will, sometime in the future, have the ability to create a realistic virtual world, where each of the beings in this world has its own consciousness and desires (such as love). This world’s society is completely separate from our own; its inhabitants have developed their own religions and don’t know that they are in a simulation.

Assuming that these virtual beings have their own consciousness, and feel that they are alive, would it be immoral to shut off the simulation and kill them all?

Or, it doesn’t even have to be a virtual world. If we created a robot that could love, hate, laugh, philosophize and interact with society as if it were human (assuming it looked and felt like a human, and it actually believed that it was human and not a robot), would it be murder to turn it off?

We ‘turn off’ intelligent conscious beings all the time. War, death sentences, etc.

Assuming said being meets the requirements for one of the above (declares war on us, murders someone, etc.), then no, it would not be immoral, IMHO, to pull the plug. We would need proper justification, just as we do now when it comes to turning off people.

But to shut them off just to save a little on the electric bill or something, that strikes me as immoral.
But of course, how do we know that they are actually conscious, and not just mimicking us?

Have you read either of these books?

THE METAPHYSICS OF STAR TREK (IS DATA HUMAN?) by Richard Hanley

THE AGE OF SPIRITUAL MACHINES by Ray Kurzweil

Both address this in depth.

Isn’t it irritating that we have to address this before we even have abortion resolved?

IMHO, it would not be murder on the grounds that, unlike a life, a virtual person can be recreated, duplicated, etc., and it will always be the same “person”. OTOH, a power failure could be their Armageddon.
It’d be wild, wouldn’t it, if the Qabbalah ultimately revealed that we’re but a program of G-d’s… YHWH script, perhaps.*
*I write sci-fi with phenomenal unsuccess, and in one of my tales a 22nd-century messianic figure who has united many of the world’s historic warring factions (Jews, Palestinians, conservatives, liberals, etc.) dies but is preserved through reinstantiation (the downloading of human consciousness onto silicon; currently theoretical, but so was cloning in 1950). A version is pirated through the “Tubman Virus” and distributed throughout the world, with each group altering “him” just a little to more closely reflect its own views: soon there’s a Quebecois messiah, a white supremacist messiah, a lesbian messiah, etc., which causes enormous friction as each claims to be the original and accurate one; then comes a full-scale holy war when the Messiah’s clone is brought out.

(This is not the 2nd part of my question, just a response to you.) Say you were to ‘enter’ this virtual world through whatever mindport they had, and these virtual beings did not know that you weren’t one of them. Would it be immoral to kidnap one and physically torture them for days in their own world? They wouldn’t know what was going on; they wouldn’t know that they could just be replicated again.

P.S. I have not read those books, but I will add them to my list. :)

This topic was somewhat explored in the movie “A.I.”, in which a mechanical child is created that has human emotions. The moral implications of what our obligations to such a creature would be were very interesting.

I think it’s a question of whether you feel it wrong to inflict unnecessary pain on anything that has the ability to feel it. Some people don’t think it’s wrong to hurt animals because they’re not sentient, or don’t feel it the same way that humans do (which is a big steaming pile, in my opinion). Even if a being is only virtual, is the pain real? Does its agony mean any less because there are no legal consequences to us here in the real world?

I think this question is as pointless as asking “Is it wrong to hurt someone from a different town?” If something can demonstrate intelligence and sentience, it is wrong to initiate force against it. Self-defense is always justified, however.

I agree with Derleth :eek:
If anything can demonstrate sentience and is pained by awareness of your steps toward its death, it would be wrong to kill it.

Definitely, but technological development outstripped moral development the day Og bonked Oog with a rock, if not earlier.

I humbly disagree. If (when?) we develop the ability to clone a human being, we will be duplicating their original programming, but murder will still be murder. And this ignores the effect of environment, if we’re talking about beings that learn.

a good question. how do we know other human beings are conscious, and not just simulating it? we don’t. we see something that looks and acts like us, and we believe that we are conscious, so we attribute consciousness to it.

that is essentially the turing test. many people don’t think the turing test is sufficient, because it seems to attribute consciousness so easily. they don’t realize we do it every day (see my thread on this and related subjects).
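to make that concrete, here is a toy sketch in python (every name in it is made up) of what the test actually checks: outward behavior in a transcript, nothing more.

    # a hypothetical judge: it sees only the transcript, never any inner life
    class CannedRespondent:
        """replies from a fixed script; clearly has no inner experience"""
        def reply(self, question):
            return "good question. i have often wondered that myself."

    def turing_verdict(respondent, questions):
        # the verdict can only use behavior; inner experience is invisible
        transcript = [(q, respondent.reply(q)) for q in questions]
        return "human" if all(answer for _, answer in transcript) else "machine"

    print(turing_verdict(CannedRespondent(),
                         ["are you conscious?", "do you feel pain?"]))
    # prints "human", even though the respondent is a four-line script

this (absurdly lenient) judge is fooled by a canned script, which is exactly the worry about attributing consciousness too easily.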

the question of the morality of turning off a “conscious” machine is a difficult one. i think that is because 1) consciousness does not necessarily make something irreplaceable and valuable to society, and 2) we don’t know what consciousness is.

in my opinion, we must first answer these questions:

  1. what is consciousness?
  2. how do we know something is conscious?
  3. how closely does the value of consciousness resemble the value of human life?

for example, if we figured out that neuron a fires, and causes neurons b, c, and d to fire, and poof, the brain believes it is a conscious being, we might not consider consciousness so valuable. we would certainly be sure that it had nothing to do with a “soul”. we might not consider human life as valuable as a result. at least, we wouldn’t consider something valuable just because it was conscious.
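a hypothetical toy version of that cascade, just to show how mechanical it would be:

    # made-up wiring: neuron a fires and triggers b, c, and d in turn
    connections = {"a": ["b", "c", "d"], "b": [], "c": [], "d": []}

    def fire(neuron, fired=None):
        if fired is None:
            fired = []
        fired.append(neuron)                    # this neuron fires
        for downstream in connections[neuron]:  # and triggers its targets
            fire(downstream, fired)
        return fired

    print(fire("a"))  # ['a', 'b', 'c', 'd']: a deterministic chain, end to end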

do you believe that a zygote is conscious?

It is OK to ‘turn off’ an embryo, but not a conscious machine.


Kill 'em all and let DOS sort it out.

Can something be conscious and not be alive?

And, add to the mix the pain doll. When poked, this doll says, “Ow.” Is it immoral to poke the pain doll?
If so, is it also immoral to poke an AI that can say “Ow”?
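A toy version of the doll in Python (entirely hypothetical) shows how little the “Ow.” by itself proves:

    # a pain doll with no inner state at all, let alone suffering
    class PainDoll:
        def poke(self):
            return "Ow."

    print(PainDoll().poke())  # "Ow.", indistinguishable as output from a real yelp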

if we knew what consciousness was, we’d probably be a lot closer to answering that.

also, it depends on what you mean by “alive”.

then again, the life of something isn’t alone enough to make the act of destroying it immoral. just ask the grass i mowed a few months ago.

it depends on whether or not the doll feels pain, and whether it qualifies as something that we should not hurt.

in this case, i doubt very many people would argue that the doll does in fact feel pain. it’s not like you could ask the doll if something hurts. i can say “ow” without being poked.

It’s my house, I pay the electricity bills, and I’ll switch off whoever the hell I want to.

No, because it is not real. It is just virtual.

DNA seems to be quite definitive in figuring out whether somebody is human or not.