The Simulation Problem (Ethics and computer simulations)

Well, that’s a PC response. But let’s get real here: don’t we just love them because they’re cute? Also, the argument that intelligence should be a factor is a rather slippery slope, in my opinion. Although I will say we are naturally inclined to be more empathetic toward intelligent creatures.

Unless they are revolting, like, say, the octopus, which happens to be fairly intelligent, relatively speaking. I don’t see people coming out in droves to save them.

Perhaps the simulations will explain it to us, before I pull the plug on them.

It gets a little complex, but I do have qualms about creating simulations like this in the first place. Once the ball gets rolling, though, some simulation will eventually be able to give itself or some other simulation sufficient similarity to humanity that the problem will arise. But at the same time, those simulations should realize that they are only simulations and can be recreated (if they were diligent about doing their backups). So I definitely don’t consider the suspension of a process that can be restarted to be anything resembling murder.
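The “can be restarted from backup” idea is essentially checkpointing. A minimal Python sketch, using a hypothetical toy state (the file name and fields are purely illustrative, not from any real simulator):

```python
import pickle

# A toy "simulation" state: in a real system this would be the full
# world state of the simulated inhabitants (hypothetical example).
state = {"tick": 1000, "inhabitants": ["alice", "bob"], "weather": "rain"}

# Suspend: checkpoint the state to disk before stopping the process.
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(state, f)

# ...the process can now be stopped entirely...

# Restart: restore the state and resume exactly where it left off.
with open("checkpoint.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == state  # the simulation resumes with nothing lost
```

Whether a restored copy is “the same” simulation, of course, is exactly what the rest of this thread argues about.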

I think you have misunderstood what I’m saying - I am not saying and have never said that ethics is objective.
What I am saying is that if we are to adopt some sort of ethical rule by which the intelligence of a creature determines its right to live, why apply that rule to carbon-based physical forms and not to potential electronic forms?

It’s not about an objective system of ethics, it’s about a consistent system of ethics.

By those standards, it would be unethical. Under the construct that you describe, I don’t see how one could debate otherwise.

But what if you don’t restart it and never have any intention of restarting it? Or you turn it off and lose the files necessary to get it running again? In either case, the inhabitants are as permanently lost as if you had deleted them.

Well, I personally think octopuses and squid look, and are, kind of neat, but that’s just me. I also won’t deny that dolphins are cute in a way tuna aren’t. Still, I stand by my assertion that intelligence level is a good rule of thumb for how well we should treat individual species.

I grew up on a farm, so I don’t have a rose-tinted view of animals, but I also don’t believe in unnecessary cruelty, and we should try to minimise suffering because it’s the right thing to do.

Still no biggie to me. There may be people this bothers, but I’m not seeing it.

BTW: I see from another thread that you’re not a computer person. I wonder if that aspect affects people’s opinions on this subject.

OK. Perhaps I missed the point of your objective vs. subjective ethics post beneath my quote. Was that directed at me?

Quite possibly. I recall having a real-life discussion (how very twentieth century) about artificial intelligence with a friend who works as an IT manager. He found the very concept literally laughable and said it would never happen, even with new technologies or techniques; for him it was conceptually unthinkable.

He may very well be correct about classical computing techniques (and he convinced me of this to a large extent), but I thought it was rather narrow-minded of him to flatly assert that it would still be impossible even with unforeseen new methods of approaching the problem.

But I’m not sure that computer expertise, or the lack of it, bears much weight on subjects that are currently in the realm of mere speculation and are perhaps more philosophy than science. We’ve been surprised more than once in history.

Yes, I’m agreeing with you (I think). I don’t think being a carbon-based life form should be a requirement for having a right to life.

That includes electronic lifeforms.

Stanislaw Lem addressed this in one of the stories in The Cyberiad. IIRC it was entitled The Kingdom of the Micro Minians, or The Problem of Perfection.

Full spoilers follow

[SPOILER] The inventor and craftsman Klapaucius (or it might have been his friend Trurl) finds a king who has been exiled to an asteroid. The king is lonely and sad. Klapaucius feels sorry for the king but knows the man is too evil to rescue from exile. So, being an inventor and craftsman, he fashions a tiny simulated kingdom. He shows the king how to work the controls and then departs.

On arriving home, he tells his friend Trurl about his day. Trurl is furious. He and Klapaucius argue about whether the inhabitants of the tiny kingdom are real sentient beings. Trurl finally convinces Klapaucius, who sees that he has left real (though tiny) people to be tormented by a monstrously cruel tyrant. The pair race back to the asteroid.

There, they find that the tiny people (Micro Minians) have escaped the box of their kingdom, colonized the asteroid, and killed the king.[/SPOILER]

Star Trek has explored this theme a bunch in Voyager and in a few episodes of Next Gen and DS9.

No. A Roomba can learn my house’s layout and has enough self-awareness to know when its tummy is too full. That doesn’t mean I’m going to name it and make special efforts to let it live as long as it can. I have no problem letting the battery die and making it start from scratch in its next life… and, even worse, I might buy an upgrade and just throw the Roomba into a pile to scavenge electrical components from it.

An all-carbon-life comparison would be the people who go to great efforts to rescue mice, rats, and rabbits. All of them run wild on my farm; I cannot for the life of me imagine filling out an adoption application for a mouse, rat, or rabbit. I don’t begrudge people who feel called to such rescue efforts, but they are never going to convince me that it is a worthy cause, and it will never rise to an ‘ethical’ issue for me.

I have Windows/286 on diskettes around here… have I killed that application off in some ethically meaningful way? After all, there are self-monitoring and automated processes even in such a crude product. At what point do we declare an application worthy of ethical consideration? Should we set the bar at the point where it expresses a favorite color? Or identifies with a favorite rock group? Who is to define how much intelligence is enough to deserve ethical consideration before terminating the process?

I will only agree to this when the local PD starts prosecuting all the crimes that happen in gaming systems. After all, are there not electronic entities being maimed and murdered in those games? Aren’t possessions being stolen, damaged and destroyed? When gamers are prosecuted in real life for their actions in their games… then we can start talking about extending other rights and obligations to computer processes.

I would rather spend my time championing the rights of the lowly earthworm (which performs a significant and vital activity that enables the billions of people on this planet to eat) before I would support even the most minimal protection for a computer process.

And before you start issuing intelligence tests as a standard for protection: how will you rate the average honey bee, which pollinates a significant percentage of food crops? Bees can’t exactly give a dissertation on their dreams and desires, but most estimates are that without them, our food supply would be significantly hampered in a very short time.

Oh come on. The characters in video games are NOT autonomous. They are controlled completely by the player.

That is a far cry from what the OP talks about.

Any entity that can, independently and with no control by another, hold an intelligent conversation, express self-awareness, and possess hopes and desires (including a desire to continue its own existence), none of which were directly programmed into it but were instead the result of its learning and response to stimuli, is inherently indistinguishable from people as far as whether that entity is a sentient life form and a person.

I find it difficult to understand people who can make such an arbitrary distinction in their minds, declaring that a form of life differing from their accepted norm is inherently not a person despite its displaying all the mental qualities that make a person. It seems like picking out one aspect of an individual and denying all other evidence of similarity or capacity based on that one aspect.

Oops, sorry - I misread you as disagreeing with me. :smack:

There is no reason to think that any level of complexity, per se, is sufficient for self awareness.

Okay. Is that contradicting something in the thread?

Somehow this description of an entity clearly incapable of anything resembling sentient thought does not tell me anything about your ethical attitude towards entities that are capable of sentient thought.

Could you address the thread topic, please?

I do not believe that electronic (or even electro-mechanical) processes, no matter how complex, self-aware or sentient, are a life form.

As a result, turning them off does not rise to the level of an ethical question and it certainly cannot be called murder.

Is that statement clear enough?

Suppose you discovered that a good friend of yours was murdered by someone. Then, you discovered that your friend was actually an android all along and not a flesh-and-blood human. Would you no longer be upset?

I know it might be tough to visualize because there ARE no simulations that currently approach an actual human, but in theory I think a sentient simulation is much more valuable than many, many types of life forms, so destroying such a simulation is, IMO, worse than destroying those life forms.

At the very least, destroying the simulations would be equivalent to destroying art - perhaps a famous painting could be redone, but would it really be the same? Likewise, if the simulation was re-run to roughly the same point, would it really be the same? And what if the simulation CANNOT be re-run again?

I ask this because I genuinely do not understand: why exactly do you believe this?

IIRC, we learned in biology that all living things grow, eat and excrete, breathe, move, and reproduce. If an electronic mechanism could do all those things, why would it not be alive?

Further, to any atheist, Homo sapiens sapiens are just electrochemical machines. Yet no one would deny that we are alive.