Morality of SimCity-type games

Ok, now I’m a big fan of “god” games like SimCity and Civilization, as well as “real-time strategy” games like Age of Empires, Command & Conquer, and the like. I also realize the AI, while sufficient to simulate commuters or clashing armies, really isn’t very sentient by any stretch of the imagination.

So how about in 50 years or so, when we can create games/simulators with some rudimentary form of AI? The Sims (to borrow the Maxis term) would experience some level of emotion, making the gaming experience more realistic. They fear explosions and gunfire. They get pissed off sitting in traffic. They seek out things they enjoy. They experience indications of damage. [ARNOLD T-2 VOICE]Vhat you hu-mans vood cahl paihn[/ARNOLD T-2 VOICE]

Is there a point where it would be “cruel” to set armies of self-aware virtual intelligences against each other for our amusement? Would the Command & Conquer 2500 experience be more or less entertaining if PTSD were a factor, or if wounded virtual soldiers started crying for their mommas while being ground under a N.O.D. tank?

Basically, if we create an AI that is indistinguishable from a human intelligence (or even a relatively smart animal intelligence), is there a point where it is no longer appropriate to use it as a soldier, an acid-mine worker, or for any other purpose that we would consider cruel for a living intelligence?

(outcome of thread will not affect the fate of the residents of Simmsmithtown)

This debate reminds me of the big flaw in Shelley’s Frankenstein, as pointed out by Stephen King in his book Danse Macabre. The doctor makes a mate for the creature, but once he imagines the two mating and starting a race of creatures, he is horrified and destroys the female.

King points out that it would have been child’s play for the doctor simply to make the mate sterile; indeed, it would have been difficult work to make her fertile.

My point (I have one, believe it or not) is that once computer science is sophisticated enough to make computer characters that border on the conscious, it would likely be easy enough to make it so that they suffer no more than an actor suffers filming a death scene.

Of course, if they become conscious before we realize it, then that might be real bad news for a lot of Half Life 9 opponents…

well, most people would still only play games for an hour or two at a time, so it wouldn’t work to raise children up into warriors (unless game time ran too fast for the game to be playable). so it would require reusing people, and death would not be death as we know it. if a sim respawns a minute later, it’s not the same as shooting a person with a shotgun… it’s more like a game of paintball.

I know when I first started playing a game called Creatures, I ran across a few people online who seemed a little whacko about this already.

In the game, the way you train your creatures is by either kissing or spanking them. These people actually considered excessive spanking to be abuse, as in child abuse. Or at least their comments on message boards and game sites appeared to be sincere on this matter. I just kind of picked my jaw up off the floor and went on my merry way…

The great writer Stanislaw Lem did a story on this very topic.

A brilliant inventor meets an exiled tyrant. The inventor feels sorry for the tyrant but realizes that the man belongs in exile. So he makes him a mechanical microkingdom filled with microsubjects. The tyrant is happy and the inventor goes home. He tells his best friend and fellow inventor about his creation. The friend is horrified and explains that with his genius and zeal for perfection, he has created not simulated people, but the real thing. The microsubjects have emotions, feel pain, truly live, and can truly die. The inventor realizes that even though the microkingdom fits in a briefcase, all this is true.

Note that the above description does not spoil the story any more than revealing that Rosebud is a sled spoils Citizen Kane. The Cyberiad is filled with plenty of other fables on politics, technology, and reality. I strongly recommend it.

It’s already happened.

In more complex sim-type games – “Dungeon Keeper II,” for example – each and every individual creature in the game is represented by an autonomous little program and an autonomous little three-dee figurine, right? “Agents,” in computer parlance.

We will, for purposes of this example, take the common Goblin, from DK2.

The Goblin will appear through your Portal as soon as you have any kind of support facilities – Lairs and Hatcheries, for shelter and food. He will basically eat, sleep, and wander around doing nothing. Every so often, on Payday, he will wander into the treasure rooms and help himself to his pay.

Once you have installed a Training Room, this activates another thingy in his subroutine. He will go and Train there, beating up the little practice dummies.

If an Enemy (say, a Dwarf) wanders in, he will attack it. He will also attack enemy Keepers’ creatures, but not yours.

You don’t have to tell him to do any of these things. He’s an autonomous little unit. You can use the Floating Hand to “slap” him, if you don’t like what he’s doing, or you can pick him up and put him where you want him (into battle, or the Combat Pit, or you can save him from a fight that’s going wrong). Furthermore, if he’s outnumbered, or has had the crap beat out of him, he’ll retreat – with or without your orders. He doesn’t WANT to get killed!
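In other words, his “brain” is probably not much more than a rule loop. Here’s a toy Python sketch of the sort of thing I mean – the names, thresholds, and structure are all my own invention, not actual Bullfrog code:

```python
import random

def choose_action(goblin, enemies_in_sight, rooms, is_payday):
    """One tick of a toy goblin 'brain'. All names and thresholds invented."""
    # Self-preservation first: retreat when outnumbered or badly hurt,
    # with or without orders. He doesn't WANT to get killed!
    if enemies_in_sight:
        if enemies_in_sight > 3 or goblin["health"] < 25:
            return "retreat"
        return "attack"

    # Needs: he eats and sleeps without being told to.
    if goblin["hunger"] > 50 and "hatchery" in rooms:
        return "eat"
    if goblin["fatigue"] > 80 and "lair" in rooms:
        return "sleep"

    # Every so often, on Payday, he helps himself to his pay.
    if is_payday:
        return "collect_pay"

    # Installing a Training Room "activates another thingy in his subroutine."
    if "training_room" in rooms:
        return "train"

    return random.choice(["wander", "loaf"])

goblin = {"hunger": 10, "fatigue": 20, "health": 100}
print(choose_action(goblin, enemies_in_sight=5, rooms={"lair"}, is_payday=False))
# -> "retreat": five enemies is too many, so he runs, no orders needed.
```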

Now, we have established that the Goblin is an autonomous unit, with needs and desires. We have also established that he has self-preservation instincts.

How much more complex does he have to get before it is morally wrong for me to slap him silly, toss him into the Combat Pit, or fling him into a melee with twenty angry Elves?

I suppose you’re against cockfighting, too. ::d&r::

How could it be morally wrong to slap him around, or smash him to bits, Wang Ka?

At least at the current level of technology, the little gobbo is nothing more than an encapsulated algorithm, nothing much more complex or alive than the iterating function this forum board uses to collect posts from a database and send them to our browsers.
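Something on the order of this (a hypothetical sketch with made-up table and column names, but it’s the whole of the “life” involved):

```python
import sqlite3

def render_thread(db, thread_id):
    """Fetch a thread's posts and emit HTML. Table/column names are made up."""
    html = []
    for author, body in db.execute(
            "SELECT author, body FROM posts WHERE thread_id = ?", (thread_id,)):
        html.append(f"<p><b>{author}</b>: {body}</p>")
    return "\n".join(html)

# e.g. render_thread(sqlite3.connect("board.db"), thread_id=42)
```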

Assuming we all agree that suffering is bad, the fundamental mystery you are touching on is the difference between simulated suffering and real suffering.

As simulations grow more and more accurate, this could become a very important question. Sometimes I worry that there isn’t any important difference; that the suffering associated with an experience exists entirely in the eye of the beholder. We humans are simply programmed to react to the suffering of other humans. When we see other beings as human, we will object to actions that appear to make them suffer. Some people have already crossed this threshold with current technology, as jayjay’s Creatures example indicates.

I really wish I had a copy of Cyberiad to quote from.

I agree with Kinthalis regarding Dungeon Keeper. The goblin is just a bunch of subroutines. If/then statements in the program react to variables. The goblin runs when the number of enemies is too great. It feels no fear.

I suppose this means I should stop deleting the kitchen door when my Sims light the stove on fire…

Found it!

I agree with Lem that at some threshold the simulation becomes a reality. We’re nowhere near that point.

Some people will not recognize the suffering of animals (or at least of some animals); do you think the suffering of an AI will matter to them?

I remember, a few years ago, a story about a guy who created “tortured norns”: he would raise the creatures in an extremely abusive environment, tickling them into eating the poison plant, spanking them whenever they tried to play with the ball or eat something (but now and then tickling them toward real food). The norns ended up absolutely pathological: moving toward food, then away in fear, then closer as their hunger reached starvation levels, then away. They were the very picture of a hideously abused child.
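That back-and-forth is exactly what two clashing drives produce. A toy loop (entirely my own invention, nothing to do with the real Creatures brain model) shows the same oscillation:

```python
# Toy approach-avoidance conflict: hunger pulls the norn toward the food,
# conditioned fear shoves it back once it gets close. Numbers are invented.
hunger, dist = 11.0, 10
for tick in range(8):
    if dist <= 3:             # close enough to trigger the trained-in fear
        dist += 2
        action = "flinch away"
    elif hunger > dist:       # hunger now outweighs the distance to cover
        dist -= 2
        action = "creep closer"
    else:
        action = "hover"
    hunger += 1               # starvation keeps mounting
    print(tick, action, "dist =", dist)
# Output creeps in, flinches back, creeps in again: the oscillation above.
```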

This guy generated a level of hatred from the Creatures community that was unbelievable. They petitioned the company to de-licence his copy; they kicked him out of Creatures webrings; they downloaded his tortured norns and trained them back to health; they wrote long, angry screeds about his inhumanity. For his part, he said he was just interested in the parameters of the game, and only really got into it when he saw how much he was freaking out other people.

i think the assumption that the OP makes is that intelligence defines what it is to be human and gives value to human existence. i don’t necessarily think this is the case.

for one, we have no idea whether plants might be super-intelligent but lack the ability to act on it or to show any suffering they might feel. vegetarians often prefer plants to meat because lettuce doesn’t scream when you cut it up, but cows do.

the philosophical basis for the Turing Test is this: we observe consciousness in ourselves, and we exhibit a form of behavior that we see other people exhibiting, so we attribute consciousness to them, too. if there were a robot that gave out human-like responses but looked like a big metal box, why would someone feel the same empathy for it as they would for a flesh-and-blood human? it is not human, it does not act like a human, it does not look like a human. the empathy we feel for other humans stems from their similarity to ourselves in many respects, and i don’t think intelligence alone is enough to invoke empathy.

No, I disagree. Self-aware intelligence is what defines being “human”, by which I mean, “worthy of human rights”. Your example with plants does not contradict this. The reason we eat plants and not dolphins is that we actually have good evidence and reasoning to believe that the former are not intelligent and the latter sort of are.

The continuum is actually really easy to see. The more intelligent something apparently is, the harder it is for us to kill it for trivial reasons. Plants, bugs, fish, mice, cats, gorillas, people. And if we found evidence that, say, mice are self-aware, this hierarchy would be rearranged in a heartbeat.

i think your examples show that intelligence isn’t the only thing being considered; the ability to provide evidence of suffering matters too. i don’t claim that intelligence isn’t a valued human property. my point is that human-like intelligence might not place a machine on the same level as a human if the machine does not compare to humans in other ways. if humans could be turned on and off and repaired as easily as computers, it might not be considered so immoral to kill one. if machines could convincingly show that they suffered under certain conditions, such treatment might be more likely to be considered immoral.

While I think it’s kind of twisted to do what this guy did, I just have immense difficulty suspending my disbelief enough to see the norns (or the Sims, for that matter) as anything other than a crafted representation of the programmed algorithms. Maybe that’s why I never did get into the direct-command types of simulation games like Creatures and the Sims. I can only enjoy telling a computer program to eat or pee so often before it gets boring for me.

On the other hand, I get into the “governor” type games like Impressions City-builders much more. It’s easier for me to get into a city-level simulation than to get into an individual-level simulation.

(Sorry for the long post… mostly just a bunch of gibberish, so feel free to skip to the next post if you want.)

I think it would depend on the makeup of said artificial intelligence. IMHO, some questions to ask would be “Does the artificial intelligence in game XXX actually possess sentience? Does it experience emotion, like us?”

I disagree with what JasonFin said… I think that the actual presence of suffering is more important than a mere perception of it. If the “victim” does not actually suffer, then how can the act be cruel? True, we might perceive it as such, but that’s just our perception… it doesn’t mean the target of that abuse is actually experiencing any pain or suffering.

It simply isn’t real, and I think that’s important. Is it cruel when actors get murdered on screen? Or a character in a book gets tortured? It might look cruel, yes, but it’s only a portrayal and not the real thing.

Computer games are the same way. Like other people have said, the AI in today’s games isn’t sentient yet; it’s just a set of predetermined responses to events. It experiences no feelings and is incapable of suffering. We know this because we coded every line of it.

But of course, should AI ever advance to a level where it does genuinely experience sentience and emotion, instead of merely giving off the appearance of it as today’s AIs do, I think it would be cruel to mistreat them… simply because by then, they would be all too human.

When does it become morally wrong? When they can actually suffer.

But until/unless that day comes, I think it’s all right to bash 'em all you want.

But what I want to know is… how will we be able to distinguish what actually feels emotions from what doesn’t? What if we ourselves are nothing more than machines, programmed to go through a set of physical and mental reactions in response to stimuli? I don’t know how our emotions really function, but what if they are just the biological equivalent of lines of code? “If I see big, bad object, then: send ‘afraid’ signal to processor, which in turn makes heart rate faster, makes senses more alert, gets adrenaline pumpin’, and then makes legs run the heck outta there.” What does it mean to be self-aware? How can we test for it?
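Write that quoted rule out as actual code and it doesn’t look any deeper (a toy Python rendering of my made-up “fear subroutine”; every name in it is invented):

```python
def on_stimulus(stimulus, body):
    """Toy stimulus-response chain. All names invented, not any real model."""
    if stimulus == "big_bad_object":
        body["signal"] = "afraid"          # send 'afraid' signal to processor
        body["heart_rate"] *= 1.5          # make heart rate faster
        body["alertness"] = "high"         # make senses more alert
        body["adrenaline"] += 10           # get the adrenaline pumpin'
        return "run_the_heck_outta_there"  # legs do the rest
    return "carry_on"

body = {"signal": None, "heart_rate": 70.0, "alertness": "normal", "adrenaline": 0}
print(on_stimulus("big_bad_object", body))   # -> run_the_heck_outta_there
```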

Apparent intelligence is not enough. What if a person were completely paralyzed? He wouldn’t seem very smart, or even very alive, and would not be able to respond to much of anything. But is he self-aware? I would say yes.

In this case, things like brain scans might reveal thought activity, but what about forms of life that function completely differently? Who are we to say that thought and consciousness can only occur in the brain-type objects we are familiar with?

But we can also feel empathy for non-human creatures. We feel little sympathy for ants because, intellectually, they are about as complex as a “Sim”. But we would not tolerate a person pitting an army of dogs and cats or other small animals against each other for amusement.

I have to agree with jayjay. The “Creature” was simply responding to a pre-determined set of algorithms that told it “IF ABUSE THEN LevelOfAgitation ++1”. That’s basically what any other videogame simulation would do.
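Spelled out, that rule is about this deep (a throwaway sketch of the quoted logic; the names are made up, not actual Creatures internals):

```python
class Creature:
    """Toy model of 'IF ABUSE THEN LevelOfAgitation ++1'. Invented names."""
    def __init__(self):
        self.level_of_agitation = 0

    def on_abuse(self):
        self.level_of_agitation += 1   # the whole of its 'suffering'

    def mood(self):
        return "agitated" if self.level_of_agitation > 5 else "calm"

pet = Creature()
for _ in range(6):
    pet.on_abuse()
print(pet.mood())   # -> "agitated", but is anything actually suffering?
```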

If the opposing armies all of a sudden refuse to obey your commands and sue for their own peace independent of your interaction, then I think it might be time to be concerned. (“Um… yeah, I want to return my copy of Total Annihilation 3000. They keep reaching some kind of armistice and refuse to fight!”)