Would it be immoral to create virtual people who suffered?

A standard that leads straight towards the enslavement or genocide of artificial intelligences, enhanced animals, the genetically engineered, and aliens. I’ve heard it used to justify or even demand those things, and that’s without any such beings actually being available for us to exploit or kill.

It also makes the standard Frankenstein scenario of us being attacked by our own creations more likely (because that standard eliminates any non-violent solution), and it also makes such an attack justified.

No, you don’t. Power does not let you define morality.

No, that simply means you’ve become evil, and that the “sims” should try to break free from your control, and make a point of killing you if they can.

That’s simply not true. Simulations often develop on their own, in ways their creator never thought of; that’s often the whole point of making them.

No, you are simply indulging in the old and bizarre practice of pretending that something is of value only if we don’t understand it. Just because we know how their minds work doesn’t make them lesser beings than ourselves. If a simulation is as complex as a human mind, there’s no reason it can’t be as aware as a human mind. If it’s a simulation of a human mind, there’s no reason to believe that it won’t BE a human mind. Just on different hardware.

Only if you are a monster. Do you have the right to torture your children to death?

Can you identify with the simulation? Can you think to yourself, “What if I were in that entity’s place?”

Do you value the existence of different ways of being in the world?

Can you conceive of entering into interactions involving exchange of information or other goods with this entity?

Does the entity have interests? Can the entity flourish, fail to flourish?

If the answer to any one of these questions is “yes” then the entity deserves some degree of moral consideration. (Where “moral consideration” means “taking others’ interests directly into account when deciding on our own actions.”)

-FrL-

I think Ray Bradbury (probably among others) wrote a story once in which people would pay money to simulate people they didn’t like, and then treat the simulations in various unfortunate ways… To be honest, I can see the temptation…

Then what does? Majority opinion? Religion?

“Evil” according to whom? According to the sims, maybe, but only if you programmed them that way.

Hmm… that’s definitely true, but that’s even more complicated than a simulation that we could perfectly control (which, presumably, we could since we created it line by line).

But let’s say there IS chance or uncertainty in that sim. Everything we know of computer programs/simulations to date suggests that we cannot deliberately create “free will”, whatever that means. We can introduce the element of chance into programs, either deliberately (random numbers, or in the case of a simulated world, random mutations) or accidentally (just a bug that we didn’t find). But in neither situation would the sim actually be capable of choosing for itself; it would either have to follow the decision tree that you wrote, the output of a random number generator that you also wrote, or the inevitable bug that you neglected to account for. In that case, they’re only happy or unhappy because of random events that you decided to introduce into their world (or were too incompetent to debug beforehand). You’re still at fault for putting them there in the first place, unless of course you yourself are the result of an imperfect simulation…
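
To make that concrete, here is a minimal sketch (a toy agent invented purely for illustration; none of these names come from any real sim): whether the “choice” comes from the author’s decision tree or from the author’s random number generator, the sim only ever follows rules it didn’t write.

```python
import random

# A toy "sim" whose every decision comes from one of two sources:
# a decision tree its author wrote, or a random draw its author
# also wrote. Neither source belongs to the sim itself.
class SimAgent:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)  # chance, injected by the creator

    def decide(self, hungry, food_nearby):
        # The "decision tree": branches fixed in advance by the author.
        if hungry and food_nearby:
            return "eat"
        if hungry:
            return "search"
        # The "element of chance": also supplied by the author.
        return self.rng.choice(["wander", "rest"])

sim = SimAgent(seed=42)
print(sim.decide(hungry=True, food_nearby=False))   # always "search"
print(sim.decide(hungry=False, food_nearby=False))  # the author's dice roll
```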

I see what you’re getting at, but take it a step further: If we not only understand it but understand it with complete omniscience – the kind we’d need to have in order to code it in the first place – then we should be able to predict with absolute certainty the lives of every Sim in that world. In that case, morality would cease to have any meaning because there would be no free will. Every Sim’s actions would be the result of an intended algorithm. The exception to that, like before, would be an element of chance or a programming bug. But in neither case would the Sim be responsible for its own actions – it has to either follow the decision tree that you wrote, the result of a random number generator, or a bug that you failed to account for. It’s still a mindless slave the same way Word is, the same way humans may or may not be. It’s a slave to something – either destiny or chance. It has no free will, and thus it only has an apparent, not actual, morality, and calling it good or evil any more than any other computer program is just… pointless.

Do you have any rights? Says who?

Treating others as you would prefer to be treated is a good rule of thumb. Not religion.

According to me, for one. And according to any number of slaves and serfs and victims who rose up against their masters. You’ll find few pro-slavery people who are the slaves in question.

First, that’s not true. Even something as relatively simple as a computer game is unpredictable to its creators. Otherwise there’d be no such thing as bugs. And second, how do you know we made it line by line? There are other possibilities; everything from a self-created simulation using some digitized version of evolution, to a simulation based on studies of the human brain. And once it’s created it will build on whatever complexity it has.
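
For what it’s worth, “some digitized version of evolution” can be sketched in a few lines. This is a toy genetic algorithm, with every name and number here invented for the example: no line of it specifies what the final genomes will look like; they emerge from mutation and selection.

```python
import random

random.seed(0)

# Toy "digitized evolution": genomes are bitstrings, fitness is how many
# bits are set, and no line of this code dictates any individual outcome.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Each bit flips with small probability: chance supplied by the author.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives and reproduces with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(g) for g in survivors]

print(fitness(population[0]), "of", GENOME_LEN)  # typically near 32
```

The programmer writes the selection rule, not the outcome, which is the sense in which the result isn’t made “line by line”.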

It doesn’t mean anything; that’s why we can’t make it. We can’t even define it. It’s simply a noble-sounding phrase our ancestors made up. The language of free will is a means of getting around the fact that we have almost no self-awareness, almost no idea how we decide to do anything.

It would have just as much “free will” as we have; none. I see no evidence that our choices are anything other than a combination of our “programming” and chance.

< shrug > The same as with children. Is having children immoral? And at least you could eliminate all sorts of problems for the sims that you can’t for children.

Just like us, then. We just don’t know the details of our own decision-making process. I don’t see why the lack of free will excludes morality. And I think you grossly overestimate both the level of understanding necessary to make such a thing, and the level that is even possible for humans.

As far as I’m concerned, you are arguing against the existence of morality itself, not against the idea that a simulation can be a subject of morality.

We do. Rights and morality are concepts defined by people for our own benefit.

Okay, if we are subject to the same rules and uncertainties and limitations our sims are, everything you said makes sense. I think we’re actually saying much of the same things :slight_smile:

The only difference is that I had envisioned a bug-free simulation with us as the omniscient programmers (probably because I just came from another God thread), but if you’d rather think of us both as products of imperfect evolutions, yes, then it becomes a question of the existence, source, and universality of morality at large because the difference between us and the sims would be moot.

But if we have no free will, just as the sims you propose have no free will, we cannot define morality because we don’t have control over our actions – everything we do is either predefined or random, and any morality we apparently “define” is actually predefined or random. In such a world, morality wouldn’t cease to exist, but it would be reduced to mere luck – “X is good because the random number generator came up with this number, which led to events A, B, & C and Y is bad because it came up with this other number, which led to events D, E, & F.” Why bother calling a dice roll either good or evil?

Morality would still exist as a phenomenon, but it wouldn’t really be something either we or the sims should fret over since it’s way out of our control anyway… but then again, we wouldn’t be able to stop worrying about it since we can’t control our lives to begin with.

Meh :eek:

Not luck, the result of reasoning, experimentation, experience, evolution, and such.

But the ultimate basis of all of these phenomena would still be prior programming and randomness. You can’t reason, you can’t experiment, you can’t evolve… you can’t do ANYTHING aside from follow instructions and chance. Morals aren’t up to you. Nothing is up to you. You can only “experience” thought as a symptom of randomness; you can’t actually “think”.

If we follow this line of thought, we end up with potentially infinite layers of simulations, and no layer except the very last one can be held responsible for their actions. Hopefully, the last layer will be a world WITHOUT chance and with complete free will – i.e., God – because then at least there’s some chance of having an ultimately stable morality, however distant and convoluted behind the layers. If the uncertainty never ends, then really, uncertainty is the only universal truth. Morality, reasoning, experimentation, life… they’d all just be incredibly convoluted random arrangements of whatever the base unit of the universe is (a particle? a string? does it matter?)… mathematically, we’re just 28242834856 arguing over 910255862266. Isn’t that luck?

Wait, maybe there’s another way out of this. Maybe the physical world beneath all the layers isn’t completely random. If so, that little bit of certainty could be at least part of the basis of everything above it. But it’d still be really hard to derive anything useful out of anything so low-level… and we may never be able to distinguish the random from the pseudorandom anyway :frowning:
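
On that last point, a tiny demonstration (plain Python standard library; nothing here is from the thread itself) of why the pseudorandom is so hard to tell apart from the random: the entire “random” history is fixed by the seed, so an observer outside the program can replay it exactly, while from inside the sequence looks like chance.

```python
import random

# Two generators seeded identically produce identical "random" histories:
# fully determined from the outside, indistinguishable from chance inside.
a, b = random.Random(123), random.Random(123)
run_a = [a.randint(0, 9) for _ in range(10)]
run_b = [b.randint(0, 9) for _ in range(10)]
print(run_a)           # looks like a sequence of chance events
print(run_a == run_b)  # True: the whole "random" future replays exactly
```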

No. If I were living in ancient Rome, I would, but the law of the state where and when I live says that other human beings, including my children, should I have them, are protected against being tortured to death. Now, whether there should be such laws is another debate. (Whether it’s moral to torture one’s children to death is yet a third debate.)

However, these hypothetical sims aren’t human beings. They’re more like your aforementioned “artificial intelligences, enhanced animals, the genetically engineered, and aliens”, and probably, of that list, the most like artificial intelligences.

Like not aborting it?

I think an argument could be made that a fetus/preborn baby is not “conscious.” However, I will say that if I didn’t know better, I would swear Der Trihs was making an argument against abortion. Weird.

Sure, just try that with your child.

How can you compare human consciousness to an AI consciousness? Part of what constitutes suffering for us (humans) is that we live in physical bodies with finite lifespans that can be irreparably damaged. An AI can be copied, saved, and restored. It can have its thoughts edited. It can have parts replaced indefinitely. One simply cannot inflict pain on an AI in the same way you can on a human.

Even if the AI is self-aware, when I hurt a Sim in The Sims X (or Spore II), am I really inflicting pain on a conscious being, or am I simply beating on some avatar that the AI is controlling in such a way as to appear human? In other words, if I’m playing a war videogame, I’m not hurting actual entities. The virtual intelligence, no matter how smart it is, is simply behaving like a movie director to create a realistic scene.

You’re thinking far too “today”. Think Star Trek.
What happens if medical technology progresses to the point where humans have an unlimited lifespan - will that mean that people can’t suffer?

Like I said, that’s illegal. There aren’t any laws protecting computer simulations.

Why is it illegal?

Well if you are going to Hell for that, I wonder what punishment God will get for sending you there! :smiley:
I wonder if God can go to Hell? :confused:

I can’t say for sure, but my best guess is that one of the reasons is because this society believes that human beings have certain basic rights merely by virtue of being human, and as such, human beings, regardless of age, have legal protection against harm. I’m sure there are also practical reasons.

But it’s also illegal to torture animals, and they’re not human. So why is THAT illegal?
It must have something to do with morals…
So, is it MORAL to torture a conscious being you have created?

Wrong. First, thought is hardly random; it’s highly ordered. And second, the programming inside our heads is thought, is reason.

No. God doesn’t get to define our morality, real or not. And he doesn’t get free will, because it’s a nonsensical concept. And a world without chance isn’t any more free than a world filled with it.

What makes you think certainty has anything to do with meaningfulness? Something has meaning because we give it meaning.

And? Why does that make them less than us, undeserving of moral consideration?

And the ancient Romans were scum who acted like scum; conquerors and enslavers. I’d hardly hold them up as role models.

No, more like an argument for abortion being completely casual. A mindless fetus is just an object, no more deserving of consideration than a potted plant. One of these sims would be a person, just as deserving of rights and respect as any of us.

Pain is a particular program in our minds, not some sort of “pain juice” that an AI can’t have. In fact, phantom limb pain is proof of concept; someone suffering phantom limb pain is in essence feeling pain from a part of his body that his brain is simulating. If his whole body were simulated, there’s no reason to believe it couldn’t hurt.

That would depend on how the avatar is connected to the AI. If it’s connected the way our bodies are to our brains, there’s no reason to believe it wouldn’t feel pain. After all, if we were connected to such an avatar the way we are connected to our bodies, we’d feel pain.

This thread is about what’s moral, and involves simulations well beyond what we can make now; not what’s legal, and can be made today.