The Simulation Problem (Ethics and computer simulations)

Inspired by Inigo Montoya’s thread “You are a god, how do you pass your time?” (In My Humble Opinion, Straight Dope Message Board).

And also by an issue raised in Iain M. Banks’s latest sci-fi book, ‘The Hydrogen Sonata’.

The scenario is that as computer processing power increases, so do the complexity and fidelity of simulations. In theory a point will be reached where the individual personalities (in a simulation of a particular society, for example) become so detailed that they are as sentient, self-aware and alive as people in physical reality.

At that point ethical questions arise. Is it genocide to shut down such a simulated reality once the point of the test has been reached? Should it be left running, with resources devoted to it indefinitely? And if the latter, should the simulated inhabitants be told they’re living in a simulation, or is it better that they aren’t told?

btw, this question is something I’d independently come up with and mulled over before. I’m sure it’s far from original, but until I read Banks’s new novel I’d never seen it discussed elsewhere.

btw, in Banks’s Culture universe it’s mostly agreed among civilised peoples that scenarios won’t be run at such a high level of complexity that the ‘inhabitants’ are self-aware, and that if they are, resources will be devoted to keeping the simulation running. Civilisations barbaric enough to simply switch such scenarios off are frowned upon and not invited to the best parties…

You’re describing a computer simulation, right? No carbon-based life forms involved?

If the ‘inhabitants’ become aware that they are in a simulation, then the simulation is no longer useful. Once the information objective of the simulation has been met and it has no further stated purpose, save it off to storage and power it down (a sketch of that step follows below).

Why would you waste electricity maintaining it for no practical purpose?
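As a minimal sketch of the “save it off to storage and power it down” step, assuming the entire simulation state fits in one serializable Python object (every name below is hypothetical, not from any real framework):

```python
# Hypothetical "save it off to storage and power it down" -- a minimal
# sketch, assuming the whole world state is one picklable object.
import pickle

def power_down(world_state, path="simulation.ckpt"):
    """Serialize the simulation state to storage before shutdown."""
    with open(path, "wb") as f:
        pickle.dump(world_state, f)

def power_up(path="simulation.ckpt"):
    """Restore the saved state. From the inside, the simulation resumes
    exactly where it stopped; no subjective time has passed."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Whether the inert bytes sitting on disk between power_down and power_up still count as “them” is, of course, exactly the question this thread is arguing about.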

I don’t know.

I think the computer simulations would be of a deterministic nature.
They would be incapable of free will.
Perhaps that’s a necessary property of being alive.

Do you have any evidence that we have free will? Or that free will is even a meaningful concept?

And then as you reach for the power switch, a sourceless voice announces ENDPOINT OF ETHICAL CHOICE SIMULATION 1493854 REACHED; TERMINATING SIMULATION.

If I don’t have free will then don’t blame me if I’ve turned off the simulation.

A society capable of creating sentient entities within a simulation is probably not worried about their electric bill. At least, I hope they aren’t.

The idea of wasting resources is the thing: you don’t get anything more out of the simulation, and there is no longer a “practical purpose.” But is the ethical drive to allow sentient entities to maximize their length of existence sufficient to leave the simulation running indefinitely?
Pardon me, I’ve got to go talk to a fellow in an airship about an extra pair of arms…

Not sure I understand?

Are you saying the simulation is crashing? Do I have a backup? Then no problem. I have run Windows Servers… so 'tain’t my first rodeo there.

OR Are you saying that the simulation has become a virus and invaded the network beyond the allocated processing stacks? Well, then it is a virus and needs to be contained and eradicated.

OR Are you saying that I am the simulation and ending the game by pulling my own plug? Well… so… it isn’t like a powered down program is going to sit there and fret about anything. Think about it, if you are a computer program and you do power yourself down… what exactly is the problem? Or do you think some little cluster of electrons will continue to exist and actively process some form of regret? No, powered down is powered down.

Where we differ is that I don’t see how you can call a digital resource or a running process a sentient entity. I pretty much require at least some carbon-based life form before I start viewing something as sentient, or as an entity in the way that you’re using the term.

But, I will admit that I know a lot of gamers who do not make much distinction between real life and their games. I could see them keeping it running because in their mind they equate it to some pseudo-real life environment. But I still can’t see it rising to the level of an ethical question.

This is interesting. This society is having an ethical dilemma over storing and powering down a process, but apparently sees no problem in squandering the electrical resources required to run the system that hosts the process. I’m pretty sure that this highly developed environment will not be running on a laptop (at least not in the early phases). Let’s say they just need processing in the petaflop range… that is still a whole lot of energy to run the system and the HVAC that will be needed to maintain it.
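To put rough numbers on that (the efficiency and overhead figures below are my assumptions, loosely in line with 2012-era supercomputers, not anything from the thread):

```python
# Back-of-envelope power estimate for a petaflop-range system.
# The efficiency and cooling figures are assumptions, not data.
sustained_flops = 1e15     # 1 petaflop/s of sustained compute
gflops_per_watt = 2.0      # ~2 GFLOPS/W was plausible for the era
pue = 1.5                  # power usage effectiveness: HVAC and overhead

it_load_w = sustained_flops / (gflops_per_watt * 1e9)  # 500,000 W
total_w = it_load_w * pue                              # 750,000 W

print(f"IT load: {it_load_w / 1e3:.0f} kW, total with HVAC: {total_w / 1e3:.0f} kW")
```

Three-quarters of a megawatt, around the clock, just to keep the terrarium lit.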

The processing power could actually be re-purposed to curing cancer or predicting the next major weather event…
Or the power could just be redirected to growing incubator bacon so that real pigs no longer need to be raised and slaughtered as a source of protein… or it could be used to power greenhouses growing the next best pot hybrids so that all of California can float around in utter bliss.

But no… instead, spend the power required to run the program on keeping the digital equivalent of a terrarium for no other purpose than to say it exists.

Pull the plug. If you have qualms, call me, I’ll do it. It’s a machine.

I’ve often pondered these ethics in the Star Trek universe.

That hologram doctor. They know he’s alive, yet they have no qualms about mass-producing him to do menial work.

Doesn’t sound very ethical to me.

As far as the simulation goes? Well, basically, YOU are God. Their lives are insignificant relative to yours. Just like ours would be if there really was a God.

But hey, if it makes ya’ feel better, you can always hit the pause button and download the simulation to a memory stick. Technically you haven’t killed them, because when you hit play again they’d be none the wiser.

Quantum mechanics sure does make me wonder sometimes if we are living in a simulation that was somewhat poorly written.

By the terms of the specified scenario the simulations have become so advanced that the inhabitants of the simulation display as much complexity and self-awareness as inhabitants of the real physical world. You wouldn’t just be turning a machine off, you’d be killing people.

The simulation could be something like The Matrix as depicted in the movies, but with the characters having no external physical bodies (although, come to think of it, only the humans actually had physical bodies; the AIs didn’t).

But then I’m of the same opinion on theoretical future AIs: if it looks, sounds, acts and appears to be self-aware, then it behooves us to treat it as such. If we’re going to create a simulation with apparently self-aware inhabitants then we’re morally obliged to keep it running, or we shouldn’t create a simulation of such high fidelity in the first place (and should accept the trade-off in decreased accuracy).

Given that we don’t understand what makes something sentient or not, it’s not beyond the realm of possibility that a sufficiently complex digital system would be indistinguishable from a living being - even though we have not yet observed any such entities.
Let’s imagine you have a conversation with such an entity in which you set some Turing-test type criteria that the entity passes. Does that change your willingness to turn off*/delete/reallocate resources from the simulation?

*Noting that electrical power may not be needed to maintain the simulation, but supposing there is some resource that is.

Doesn’t that just dodge the ethical question rather than answering it?

Why are their lives insignificant next to yours? Merely because you can turn them off?

Good point. :slight_smile: But if you turn it off and never turn it on again, haven’t you killed them in a manner of speaking?

Go watch The Thirteenth Floor and then report back.

People have bodies. Simulations don’t. They’re not people.

One philosophical question that often gets asked: Why is it ok to kill a tuna but if you kill a dolphin you’re a terrible person? (Or some variant of that question.)

It’s OK because we, as humans, supreme rulers of this planet, have decided it’s ethical. We have an emotional investment in dolphins and not so much in tunas. Ethics is subjective, a man-made concept. It is not a universal constant like, say, gravity or thermodynamics.

So, with that in mind, we can safely say that killing the simulation would not be a violation of ethics, assuming nobody in the real world has an emotional investment in it.

However, the opposite would be true if we loved those guys in the little computer box.

I’d like to go back to the Star Trek universe. If you step through a transporter, haven’t you essentially died and come back to life? And even if you have, does it really matter?

I think we have fundamentally opposing viewpoints on these issues and are never going to convince each other. That’s not meant to be snarky, just an observation.

Well, a dolphin is much closer to a human in intelligence than a tuna is; for that reason alone we should value dolphins more.

And I think it would be wrong to terminate the existence of intelligent, self-aware individuals whether we have much emotional investment in them or not… although, judging by how attached people seem to get to a simple cube with hearts on it in a video game (Portal), I don’t think empathy would be a problem!

I agree. I’m open for discussion, but I’m pretty well set on this.

I’ve read and participated in quite a few discussions on similar topics over the years, and I think it’s interesting in itself how people seem to hold an almost intuitively formed opinion on this topic, one that seems self-evidently correct to them but blatantly and obviously wrong to the other side, ultimately leaving both sides looking at each other in mutual incomprehension.

I’m sure there’s something deep and meaningful at work there, but I have no idea what!