Would it be immoral to create virtual people who suffered?

[Bob Newhart voice]

But I was addressing the OP’s question about. About morality. Giving my opinion on it. On moral consequence. Like I. Like I explained.

[/Bob Newhart voice]

Er, too late for an edit, but in my prior post I meant to say that I deduce that quantitative differences matter. :smack:
And Bryon, no kidding here: to cook a simulated person, you use simulated pots, pans, and fire. You, the human, can’t eat the result, since there’s (probably) no such interface into the simulation, but the cooked simulated person could still be eaten by other simulated entities in the shared simulation.

I’m prepared to let the simulated courts sort that one out.

No, I’d consider that infant an “I don’t know”. Unlike a fetus, a 4-month-old infant has the functioning brain to support a person; a 4-month fetus doesn’t.

Hmm… I’m not sure I understand this – in fact, I’m utterly confused – so please correct me where I’m wrong. My question is: what underlying process enables us to “think”? When an AI sim “thinks”, presumably it’s following code of some sort, right? That code consists of algorithms – instructions – which can be reduced to the flow of electrons through a series of electrical gates on a chip. That code is written by programmers, who according to you are in turn governed by similar processes, powered either by an additional simulation layer or, once the last layer is reached, by the laws of physical interaction between the neurons/atoms/subatomicthingamajigs that make up the programmer’s brain… right?

If that’s the case, do we know whether these underlying physical forces act randomly?

I guess I’m imagining a multi-layered simulation constructed of, at its lowest levels, mutually-attractive Lego bricks. When you have two Legos floating around in space, will they always have a tendency to mutually attract and form a pair? And from that pair, more pairs, and from that double-pair, more and more complex groupings that eventually give rise to what we now call “intelligence”? But this would result in, ultimately, a completely deterministic universe, right? Why would there be any uncertainty at all at any layer of the simulation?

Or would these primordial Lego bricks be governed by uncertainty, like Heisenberg’s principle is in our universe? If so, any eventual structure that arose would just have been a fluke – everything would be a fluke aside from the bricks’ own inexplicable existence – and at any given moment, the entire structure of the universe could collapse if one of the underlying Lego bricks randomly floated away, no?

Or would it be some combination of the two? Some Lego bricks would attract, some wouldn’t, and from there we’d have higher-level structures that are mostly stable until disturbed by a sufficient number of Chaos Bricks…?

:confused:

And would we ever be able to get to the bottom of this, absent a miraculous rescue from the legos of the lowest layer?
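
Just to make the two pictures concrete for myself, here’s a toy sketch (all names made up, nothing to do with real physics): the same brick-pairing rule run once with no uncertainty and once with a “Chaos Brick” probability mixed in. The purely attractive version lands in the identical end state every single time; the version with randomness doesn’t have to.

```python
import random

def run_universe(n_bricks=10, steps=50, chaos=0.0, seed=0):
    """Toy model: free bricks always pair up; with probability `chaos` a pair breaks apart."""
    rng = random.Random(seed)
    free, pairs = n_bricks, 0
    for _ in range(steps):
        if free >= 2:                        # deterministic attraction: two free bricks always bond
            free -= 2
            pairs += 1
        if pairs and rng.random() < chaos:   # the "Chaos Brick": an existing bond randomly lets go
            pairs -= 1
            free += 2
    return free, pairs

# No uncertainty: every run ends in the identical configuration.
print(run_universe(chaos=0.0), run_universe(chaos=0.0))
# With uncertainty, the same rules and starting point can end up in different places.
print(run_universe(chaos=0.3, seed=1), run_universe(chaos=0.3, seed=2))
```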

Dang. Wait…I’m going to have to rethink this!

Well, as I said, I’m not trying to debate abortion here, I’m just trying to clarify your argument. How sure or unsure do you have to be about the self-awareness of a being to decide what moral protection to give it? How self-aware does it have to be? Does a chimpanzee qualify? What about an 8-month fetus? A chimp is probably at least as much of an “I don’t know” as the 4-month-old baby, and the 8-month fetus probably has the functioning brain to support a person…they certainly can and do easily survive outside the womb at that age.

I don’t necessarily disagree with you about AI, incidentally; I am just trying to determine if you use the same logic with other life forms.

A point that has been neglected so far is how unique, or how impossible to replicate, these virtual consciousnesses (VCs) are. If a VC is in a state of suffering or happiness, and rebooting the machine and replicating all the variables will lead it back to that exact same state of suffering or happiness, then it is yours to torture and destroy. You can always restore it to whatever state you want.

If they are truly unique beings, in that a reset and repeat won’t bring them back, and they are conscious (a premise of the OP, so it doesn’t matter how we measure it; a Turing test will do for me), then they are equivalent to children or pets. Apply to them whatever morality you have towards children and pets. If you are OK with torturing children and pets, then feel free to extend the courtesy to your VCs. If you are not, then don’t.
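
Just to pin down what I mean by “replicating all the variables” in the resettable case, here’s a minimal sketch (the names are made up, and a real VC would obviously be far more than a couple of fields):

```python
import copy

class VirtualConsciousness:
    """Hypothetical stand-in for a simulated mind: just a bag of state."""
    def __init__(self):
        self.mood = "content"
        self.memories = []

    def experience(self, event, mood):
        self.memories.append(event)
        self.mood = mood

vc = VirtualConsciousness()
checkpoint = copy.deepcopy(vc)          # "replicate all the variables"

vc.experience("torture", "suffering")   # put it into a state of suffering

vc = checkpoint                         # reboot and restore the exact earlier state
print(vc.mood, vc.memories)             # -> content []  (on its side, as if nothing happened)
```

If the restore really does recover every variable, the restored VC is, by construction, in the same state as if the intervening stretch had never run; whether that settles the moral question is exactly what’s being argued here.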

Resetting is killing the old AI and birthing a new one. (Not that killing an AI is necessarily immoral under the constraints of this debate. If you tortured the poor thing before restoring it, it might even be a mercy killing.)

Why would they deserve special treatment just because their states could be changed and restored? What we’re learning about our own brains so far suggests that they’re chemical-electrical in nature too, and we’re beginning to learn how to manipulate certain variables to affect our emotional and physical responses. If, one day, we learn how they all work and are able to save and restore them, do we cease to be human and open ourselves up to torture?

Well, no. That’s precisely my point. If the simulation comes back to the same point it was at before you killed it, then it isn’t “birthing a new one”, just “rebirthing the old one”, and no harm, no foul. Spawn, grow, torture, kill, respawn, regrow, and no torture: the torture didn’t happen. Actually, it would probably be easier, cheaper, and faster to just save it instead of reliving the whole thing.

If you can destroy the tortured person and make a new one at the save point just before torture, then there was no torture.

Of course, for us, there can be no such thing. There would always be the lost time, and the possibility of knowing that a torture we cannot remember nonetheless happened.

If the simulation can be reset completely, then there is no such dilemma for them. This would mean, though, that if we use the Matrix model, there would be a whole matrix for every individual. Not very efficient.

Why would there not be that for an AI?

This thread reminds me of the proto-Sims game Creatures (with mythical creatures called “Norns” instead of humans), and of the “tortured Norns” produced by users who deliberately perverted the game’s punishment/reward dynamics. I found a mention of one of the earliest, dubbed “Slave” by its creator:

For some reason, this just cracks me up.

So, if we master human cloning, it’s morally okay to torture someone if afterwards you make an unmolested clone of them?

The question, I think, is whether there is something special (or rather, *un*-special) about artificial intelligences that makes it okay to play GTA3. Er, I mean, that makes it okay to torture them or otherwise cause them to suffer. I think the ‘spawn, grow, torture, kill, respawn, regrow and no torture’ routine is morally okay only if 1) it’s okay to torture someone as long as you stop their suffering later (by killing them, say), and 2) there will be no shortage of people after you’re done with your torturing/killing. Since I think 2 already applies in a lot of places in our well-populated world, is it therefore okay to torture anyone so long as you kill them afterwards?

Er, good question. As long as the AI can see a clock outside the sim, it will know it is being reset; in that case, no can do on the torture bit. If it’s something like the Matrix or the Sims, where they have no idea at all of the world outside, then they would have no notion of being reset.

We could do all that with a human, though. We essentially have internal clocks, but we could make AIs without them. OTOH, an AI might still have the sensation of powering up and down.
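
As a sketch of the clock point (the setup is entirely hypothetical): if the AI’s only clock lives inside the checkpointed state, a restore rolls the clock back along with everything else, so from the inside there’s nothing to notice; only a reference the restore can’t touch, like a clock outside the sim, shows that time went missing.

```python
import copy, time

class SimMind:
    """Toy sim whose only sense of time is an internal tick counter."""
    def __init__(self):
        self.internal_tick = 0

    def step(self):
        self.internal_tick += 1

mind = SimMind()
for _ in range(100):
    mind.step()
snapshot = copy.deepcopy(mind)          # save point
outside_start = time.time()             # a clock the restore cannot roll back

for _ in range(1000):                   # the stretch we intend to erase
    mind.step()

mind = copy.deepcopy(snapshot)          # reset: the internal clock goes back too
outside_elapsed = time.time() - outside_start

print(mind.internal_tick)               # 100 -- from the inside, no time is missing
print(outside_elapsed > 0)              # True -- only the outside clock records the gap
```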

How about this, though: imagine we torture an old man. At the end of it, he dies, though not through our torturing (he was going to die then anyway). He won’t remember it, so no harm, no foul?

Whoa, my friend. You just made more logical quantum leaps than a microwave photon between two prisms 3 feet apart.

Each person is an individual consciousness. Killing them is an additional wrong on top of the torture. It does not matter how many more there are out there waiting to take this guy’s place.

The point I am trying to make is that if this AI can be brought back with absolutely no memory of the torture, it is exactly the same as if the torture had never happened.

I realize that this argument might be nearing the ridiculousness of someone coming into your room every night and replacing everything in there with identical copies. Still, if you can’t tell, it makes no difference to you.

How do you know that last night The Toy Owner didn’t pull your head off and then reset the whole universe, so that you are right now exactly as you were last night, without any memory of it? Have you been wronged if that were the case? I do think that doing this might make TTO a bad person, but to you, there is no difference at all.

Of course not. While he lived, for whatever length of time, he was aware of the harm done to him. It did happen to him and he was wronged by it. Yes harm, yes foul.

Unless we are perfect programmers, though, it may be difficult to account for this. Errors in programming, hardware failures, earthquakes, solar flares, etc. may all affect the simulator, and once enough errors accumulate, a few AI beings could begin to notice that “something seems weird…”, like the déjà vu moments in The Matrix.

That question aside, you’re saying it’s okay to torture them just because they won’t remember it afterwards? In that case, do things like date-rape drugs become okay? Done right, the victim wouldn’t remember any of it anyway.

I gotta wonder what programs the Star Trek mirror-universe guys run in their holodecks. Frankly, if the regular characters weren’t so bland and wussified (i.e., if they resembled actual human beings), I’ve no doubt their holofantasies would gradually get more extreme, to the point where Worf’s Klingon training programs would look tame and cartoonish.