Would it be immoral to create virtual people who suffered?

Well, in my state of Virginia, for instance, under the cruelty-to-animals statutes:

“Animal” means any nonhuman vertebrate species except fish.
And while New York State, for instance, says:

“A person who overdrives, overloads, tortures or cruelly beats or unjustifiably injures, maims, mutilates or kills any animal, whether wild or tame… is guilty of a misdemeanor”

the police still aren’t arresting people for killing flies.

I’m not so sure the cruelty-to-animals laws have much to do with morality, actually. I think they have more to do with aesthetics. The majority of people find the idea of animals in pain unpleasant.

And, as to your question again, as to whether it’s MORAL to torture a conscious being you created: in this case, I don’t think it’s immoral. I think it’s amoral… it’s neither morally good nor morally bad.

And if that sim managed to, say, hijack control of a robot so it could act in the physical world, and it grabbed you and tortured you, would that be “neither morally good nor morally bad”?

Or is it only other, “lesser” beings for whom torture is neither moral nor immoral?

Why does that make them equal to us or deserving of moral consideration? I know, these hypothetical beings are conscious, they can feel pain, they can suffer. I understand all that. Why does that give me any moral obligations in dealing with them?

Well… so? When we have some available, we’ll adapt law and ethics to match. No point debating what we might do with something we don’t have yet.

Personally, I’m looking forward to an era of perfectly simulated sadism. It may prove an effective outlet for diagnosed sociopaths and keep them from taking out their urges on other humans.

I certainly would consider it practically bad for me, and I wouldn’t like it. I don’t think it could be considered “immoral”, though.

Whether or not there is any real person at one end of this thought experiment, it cannot be that there isn’t a real person at the other end.

Torturing virtual people is evil. Becoming a sadist by means of artificial tools is evil. Nurturing one’s own desire for harming others is evil. The harm you do to your own being is evil.

Tris

The fact that they have minds equal to ours. The same thing, the only thing, that gives us moral obligations to each other.

Since that can affect how we behave when we do have them, yes, there’s a point. And aliens could show up tomorrow.

Then why not save time, declare, say, black people to be non-human, and hand them over to the sadists? There’d be no moral difference.

No intention at all of turning this into an abortion debate, but these statements of yours could easily be used in an argument against abortion (as they are, or modified slightly), or in defense of a fetus instead of an artificially created lifeform:

Only if, as the anti-abortionists do, you consider people to only be so much meat.

It’s clear that you and I disagree on what gives us moral obligations to each other.

Oh, sure, people, babies, fetuses, it’s all just meat. Naturally, fetuses are of much less moral value than artificial intelligence.

When the aliens show up, I have no doubt they’ll be so unlike the images we’ve created that any contingency plan beyond the very general will be instantly obsolete.

Sure, why the heck not? Jews, too. And people named “Bryan” because that “y” is sooooo pretentious.

Most writers of fiction create virtual people who suffer.

The way I see it, moral consequences befall not the objects but the subjects. If we are cruel to anyone or anything, it comes from a cruelty that is within us. So it makes no difference what the object of our cruelty is.

Moral consequences do befall the sadist, but physical consequences befall the victim. Since a “virtual person” has no physical form, those consequences are zero. Since the legal structure we have in place doesn’t punish for moral consequences (if it did, merely having a violent fantasy would be a crime), there is no mechanism for punishment, nor should there be.

I’m fine with you saying it’s immoral. Heck, you could call it immoral, unethical, raising the flag, shimmying the chimney, yellowing the meter, mailing the knob, narfling the garbon, or any other label you can think of. Just don’t intrude on my fun when I’ve got my virtual victim on the virtual rack and I won’t disrupt your physically harmless hobbies in return.

Well, what do you think does so?

If the AI has self-awareness, or even just emotions and sensations, then yes, it’s far above a fetus. The anti-abortion movement reduces people to meat by its nature; it can’t admit that such things as intelligence, awareness, feelings, and so forth matter, or it undercuts itself. It has to act as if it’s only the meat that matters. It’s one of the things that makes the anti-abortion movement so evil.

Imaginary, not simulated to the point of being aware and feeling. It’s not the same thing at all.

Of course it does. Some things think and suffer, some things don’t. If you are angry and vent your emotion by chopping firewood, that’s not the same as venting your anger by chopping up your wife.

Where that point lies, assuming it even exists, is pure conjecture. As with the aliens, we’d have to see one before forming anything but a purely hypothetical policy.

I have a 4-month-old, and I don’t see any indication that he is more self-aware than my cat is. Don’t get me wrong, I’m not equating him morally with my cat. But by your logic, you would consider an infant to be “meat,” and AI to be “life.” :dubious:

Is there a fundamental, moral difference between a simulated entity whose “suffering” is just an internal state value it has been told to track, and a real person who is suffering? You don’t have to go into the future to find a simulated entity that tracks and reacts to “damage” or “unhappiness”; The Sims and various romance-simulation games already suffice. So, is there a qualitative difference between a sim who puts up a little frowny-face speech bubble and a highly realistic simulation of a person that is screaming, crying, and begging you for mercy? Or merely a quantitative one?
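To make that concrete, here’s a toy sketch in Python, purely illustrative and not drawn from any actual game’s code, of what “suffering as an internal state value” looks like at the low end. Every name in it is invented for the example:

```python
# Toy illustration of "suffering" as nothing more than an internal
# state value plus a canned reaction. All names here are hypothetical,
# invented for this example; no real game works exactly this way.

class Sim:
    def __init__(self, name):
        self.name = name
        self.unhappiness = 0  # the entire "inner life" of this entity

    def harm(self, amount):
        """Raise the internal state value; nothing is felt anywhere."""
        self.unhappiness += amount

    def react(self):
        """Display behavior keyed to the state value."""
        if self.unhappiness > 5:
            return f"{self.name}: [frowny-face speech bubble]"
        return f"{self.name}: [neutral expression]"

sim = Sim("Bob")
sim.harm(10)
print(sim.react())  # Bob: [frowny-face speech bubble]
```

Everything this entity “feels” is one integer and a canned response to it. The open question is what, if anything, changes as that one integer becomes a brain-scale tangle of state driving screaming and begging instead of a speech bubble.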

For myself, I think the difference is quantitative. And, as one who doesn’t use a God as an excuse to pretend I’m better than other things, I think that the difference between a living person and the realistic simulation (and thus the Sim as well) is also only quantitative, not qualitative.

From which I deduce that qualitative differences matter.

How can you compare your pain to a simulation’s, or even another person’s for that matter? We don’t have painometers stuck to our foreheads to get an objective measure from. As it happens, we don’t use an objective measure at all; we judge by observing their behavior, expressions, and so forth to make a subjective guess. However, we also tend to judge the action being taken against them: when a child screams bloody murder because you refused to buy a lollipop, you aren’t impressed. Well, at least I’m not.

What does this mean? It means that it’s the simulation’s job (or its creator’s) to convince us it’s hurting. Basically, the simulation will be trying to manipulate our emotions and alter our behavior, to get us to do something to improve its happiness. If it can convince us that it’s really suffering, then maybe we’ll grant it moral consideration. But not until then.

Well, how do you expect to stew, roast, bake, or boil an A.I., let alone put it in a fricassee or ragout?