Would it be immoral to create virtual people who suffered?

There is a thread in GQ about this NY Times article, which discusses the probability that we are characters in a computer simulation.

Some relevant quotes from the article:

I’m interested in the morality of this. Is it wrong to create make-believe people and not make them as happy as possible? I tend to think it is wrong, but it’s kind of fuzzy for me. Thoughts?

Well, “virtual people” != “people”, so… no. Calling a computer construct a “person” doesn’t make it one, any more than swatting flies would become worse if we changed their name to “baby seals”.

Not so fast…
If you create a conscious entity, then I think you need to show some compassion towards it.

You’d have to be able to prove it was a conscious entity and not merely a convincing simulation of one.

Quite seriously, what evidence is there that it is possible to synthetically create a conscious entity when we can’t even adequately describe consciousness?

So those of us who locked our Sims into rooms and made them starve to death/made them die of exhaustion in the pool/electrocuted them to death…we’re actually going to hell over that?

How can we prove that anything is a conscious entity, and not just a convincing simulation?

That’s probably the type of thing our programmers didn’t want us to know, eh?
LilShieste

The problem with that is that you can apply the exact same standard to other flesh and blood humans. For that matter, it HAS been applied to small children and animals, with claims that they didn’t really feel pain no matter how they screamed and struggled.

I can’t prove that a simulated person is conscious, but I can’t prove that you are either.

Simple; we exist. Just as birds proved that heavier-than-air flight was possible long before airplanes, the existence of natural consciousness shows that it should be possible to create an artificial version.

As for me, I feel that there’s no moral difference between a natural or artificial creature.

At its most basic, suffering is nothing more than a signal inside our head telling us that we’re missing something we should try to attain. So unless all of your virtual people had everything they ever wanted, or lacked the ability to judge what is good or bad for themselves on an individual basis, you’re already backed into a corner. And if they do have individual personalities and wants, there’s no guarantee that giving one of these virtual people everything they ask for will satisfy them; people in real life often aren’t satisfied.

Heck, by that basic definition of “suffering,” any number of programs I’ve written qualify: they had some sort of decision process that checked whether the program was in a state it didn’t like. The existence of that check means I’ve already created programs that suffer. They just didn’t have enough brains to care beyond moving on to the handling code I’d given them.
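Just to make that concrete, here’s a minimal Python sketch of the kind of check I mean; the threshold, the fake reading, and the function names are all invented for illustration:

ACCEPTABLE_MAX = 100  # arbitrary threshold, purely for this sketch

def read_state():
    # Stand-in for whatever input the real program would examine.
    return 120  # hypothetical reading the program "doesn't like"

def handle_unwanted_state(value):
    # The "handling code": the program doesn't care, it just runs this.
    print("unwanted state detected:", value, "- taking corrective action")

state = read_state()
if state > ACCEPTABLE_MAX:  # the check for a state it doesn't like
    handle_unwanted_state(state)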

Or, to give another example, if I have a virtual person who will suffer if he can’t slap a second virtual person upside the head, then again there is no solution to the problem. If I stop him, then he’ll suffer, and if I let him proceed then the second person will suffer.

Welcome to life!

All the more reason for an arbitrary but widely acceptable biological standard. If it has human DNA - it’s a human. If it has dog DNA, it’s a dog.

(oops, hold on)

Wrong for who? If you’re playing God, you get to decide your subject’s morality. Nothing for them is right or wrong unless you say it is.

Or for you? Are you worried that your coworkers in the sim lab will think you’re a bad person because you’re making unhappy sims?

Hmm, if you and your coworkers are still relatively normal humans by that point, I’d think most of them probably wouldn’t care all that much. History suggests that people can conveniently categorize groups of living creatures into “us” and “them”, and if “we” need to do something we don’t really like to “them”, “they” eventually become objects that we don’t waste compassion on. It happens in wars, it happens in animal labs, in the world of eating animals, etc. There’s no reason to think it wouldn’t be the same in a sim lab unless your sims display just enough human-ness to trigger instincts within you that make you think, if only for a sec, that “they” are part of “us”. Then it gets uncomfortable. Then you distance yourself and tell yourself it’s just a job, then you start ignoring their cries… then you just stop caring and move on to other, less stressful jobs while leaving the interest of your sims in the hands of Big Sim versus Simactivists. Morality probably won’t even factor into it much.

Beyond that… whether it’s right or wrong just depends on the system of morality you subscribe to.

Trurl (or Klapaucius) has to deal with this in Stanislaw Lem’s “The Cyberiad”. Not that it was serious or methodical or conclusive, but it was interesting.

I actually don’t know anything about SIMs other than it’s some sort of video game that has something to do with making alternate worlds. (Is that right?) This question is about more advanced technology than that, as described in the NYT article, where you have an artificial consciousness created by humans. These hypothetical artificial people don’t know they are artificial…their suffering and happiness are just as real as any person’s. My opinion is that it’s morally wrong for me or any other person to create a conscious being and deliberately make it suffer. As far as I know, SIMs people aren’t capable of consciousness.

Oh, sorry, I just meant “sim” as in “simulated person”, or what you’re describing. Not the video game. The basic concept is the same, but your sims are just much better-simulated, even to the point where they appear conscious.

The same thing still applies. If they’re happy, it’s only because you chose to make them happy. If they’re unhappy, it’s still your fault. If they’re bothered by morality at all, it’s only because you programmed it into them. Whatever system of morality they have (or don’t have) is up to you.

If consciousness can ultimately be reduced to lines of computer code, then sims are nothing more than computer programs. At that point, a sim’s happiness is no more important than the happiness of a program that just repeatedly puts up a message box saying “I am sad.” It’s the same thing, just cluttered by more lines of code. It’s like asking “Is it wrong to make Microsoft Word unhappy?”
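In code, the program in that comparison amounts to something about this thin (a Python console loop standing in for the message box; purely illustrative):

import time

# The entire "inner life" of this program: no state, no goals,
# just a repeated announcement of unhappiness.
while True:
    print("I am sad.")  # a console stand-in for the message box
    time.sleep(1)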

Oh. Well, in that case…

I’m still more concerned about the morality of creating miserable sims than about the sims’ personal morality. Is it okay to create unhappiness in artificial consciousnesses (and for the sake of the thread, these sims are actually conscious, they don’t just appear to be)?

Why? If I created it, isn’t it mine to do with as I want?

Sorry, I’m not being very clear. I understand what you’re saying. Yes, the sims are just as conscious as you are.

What I was trying to get at is that there is no “universal code of morality”. What you consider right and wrong is only considered right and wrong because of your cultural, societal, religious, etc. background. You believe something is right and wrong because somebody/some experience taught you that, and somebody else taught them that, and it goes all the way back to either a deity of some sort or a society of some sort.

Stuff like killing another person isn’t inherently right or wrong; it only becomes one or the other when a system of morality examines it and gives it a label. And in this case, most present-day systems of morality aren’t equipped to answer your question, probably because artificial life was pretty rare when they were first drafted.

So we get to decide for ourselves. The same way we get to decide for ourselves whether it’s right to fuck and make babies in a world we know is full of pain and suffering. Whether that’s evil will only depend on who you ask, so it’s pretty much an individual choice affected by the individual’s particular system of morality, religion, or culture.

Is it okay? It’s up to you and your society to decide.

I think “suffering” is an unavoidable part of having a rich and full life. In the current world, people suffer because some of their needs or desires aren’t met. They’re hungry, they’re not safe, they can’t find love, whatever. But then if we made a perfect world in which every child was born into a pristine white box, fed and cared for their entire lives and given a life partner, they may still very well suffer. Innate drives of curiosity may push them to break out of their box and explore the outside world, and an inability to do so may make them frustrated, and then they’re suddenly suffering. Then we give them a TV set to feed their curiosity, and now suddenly they’re hungry for more and even more unhappy. And so on. Or even if you give them a rich and perfect world, they’d already have everything and there’d be nothing left to achieve, learn, hope for or dream of. They’d all be gods and there’d be nothing for them to do because they would have no needs or desires to fulfill. They still wouldn’t be happy. Suffering is relative, but everyone makes it for themselves one way or another.

The only way to create a never-suffering sim is to simply make that sim incapable of experiencing unhappiness, but then it ceases to be human-like because it would lack basic human drives. And at that point, the question becomes “is it better to be a species that can suffer or one that cannot suffer?”, something we may never be able to answer until we can experience a suffering-free life for ourselves.

Alan Turing’s argument, of course! If you can’t prove other people are conscious but assume they are, then a ‘simulation’ so good you can’t distinguish it from other people must also be assumed to be conscious…

Taking the SIMs video game as an example, suppose one of your SIMs starts communicating directly with you, writing messages on the SIM floor “I’m starving”, “please stop hurting me,” stuff like that. At first you think it’s some programmer’s trick. But you find you can communicate back - the SIM can “hear” any messages typed with the Prt Scrn button held down. So you play with it for a while, and the longer you converse with this little entity, the more convinced you become that it is somehow genuinely self-aware and rather confused.

At that point, my personal morality would dictate that I should do my best for this entity, which has little power or control over its universe. I think I’d have as much responsibility towards it as towards another human being who by an accident of circumstance, became highly dependent upon me. Eventually I’d try and put it in control of its own universe as much as possible, so my responsibility would diminish to keeping its software backed up and the computer running.

If more of the SIMs also became self-aware and they started giving each other a hard time, then I’d really have some problems on my hands!