Of androids, murder and rape

The topic was prompted by watching Westworld but the scope of the debate goes far beyond a TV show, hence GD.

Let us imagine a future much like that depicted in Westworld, where humanlike androids are common and difficult to distinguish from humans. The androids are cleverly programmed to display all human emotions and to act just as a human might on all occasions. Whether they can actually ‘feel’ emotions is a philosophical debate as yet undecided, although the majority view is that they do not, at least in a human sense. Some even argue that even if the androids do feel it is all for the better in making the experience that much more authentic for the customer. After all, even a feeling machine is still a machine.

Given this scenario, would it be good for society to permit such androids to be raped, tortured, killed? The argument would be, I presume, that such actions would serve as some sort of safety valve, allowing those with such a predilection (and unfortunately it is likely that such predatory beasts will always be with us) to satisfy their base urges on a machine, thus eliminating or at the very least greatly reducing their need to wreak havoc on real people. Pedophiles too, such would be the argument, could sate their lust on machine children.

The arguments against, I imagine, would be that it is totally wrong to condone rape/murder in any way and to do so would be to endorse, perhaps even encourage, violence. Even if the androids only display rather than feel human emotions, violence against them is unacceptable in a civilized society.

At the time envisioned, such theme parks cater only to the affluent, but there is a movement that would make them accessible to all, and even mandatory for those who have shown a propensity to violence.

I’m torn between the two positions here. If it turned out that the figures for rapes and crimes against children did show a significant drop once universal usage of the parks began, then it’s hard to argue that’s not a good thing. (Murders I feel would be unaffected, as the motivations for such crimes are quite different.) On the other hand, licensing such brutality sits very ill with me, as it perpetuates the gross instincts that occasion it. Caught between the devil and the deep blue sea, I wonder how Dopers feel about the ethical questions posed by such a future, which may not be too far distant.

Perhaps we could program the androids to seek out and kill the humans that want to do violence to them?

I’d much rather live in a society without murderers than one with murderers who get to practice on machines until they can do it without becoming too excited and goofing up their attacks.

Long before we have actual androids, I can picture video games in which one could simulate the rape/murder/abuse of a computerized character. Stuff like this already exists at a very primitive level but I expect it’ll get better, to the point where one can virtually assault something that could pass a low-level Turing test.

Then take the computerized responses and build them into a “RealDoll” with gradually improving sensors and vocabularies… I doubt we’ll be at the point where sexbots will be walking around, able to roleplay being kidnapped and such, but who the hell knows? It’ll likely remain easier to just build the fantasy into a virtual reality headset than to try to build the future equivalent of a Smash’Up Derby set for adults.

I can see the value of producing lower level models – androids that don’t model human behavior very closely, but which can be told apart from humans by nearly any competent adult – and allowing those to be treated just as horribly as anyone wants.

After all, I can buy a department store mannequin and shoot it, rape it (so to speak), set it on fire, whatever. If it’s an audio-animatronic figure, about as sophisticated as those in Disney’s Pirates of the Caribbean, that wouldn’t bother me a whole lot.

Once they reach the stage where people can hold serious debates on whether or not they have real feelings…then I’m opposed to mistreating them.

(The above is also my opinion regarding pedophilia toys – sex toys made to resemble children. So long as they’re fairly primitive, like blow-up sex toys today – I would permit them. Heck, they make blow-up sex-toys that look like sheep… Who’s to care?)

A machine advanced enough to act like a person is probably a person, even if it is not a human. It would not be possible to create such an advanced machine by accident, and having done so it would be morally wrong to declare them fair game to the sick bastards of the world.

This is how I feel. Once they reach that level, I don’t even consider it a debate. They’re sentient beings.

Folk psychology describes anger and other urges as a metaphorical pressure cooker that either explodes or can be relieved by “letting off some steam.” As far as I know, this doesn’t hold under examination and may be backwards. If subjects take their anger out in violent but safe ways, such as hitting a punching bag or a pillow, then they learn to associate violence with anger and it may only compound the problem. Better to build a civilized culture of meditation and reflection instead of ceding to counter-productive urges.

My first post was a bit of a threadshit. I apologize for unloading both barrels into your hypothetical. Even worse to be the first response. Sorry.

IMO …

There are really two questions here: What is the morality as to the androids? And what is the morality as to the human society? I’d suggest a pretty good analogy to both questions is our current attitudes to staged dog-fighting or cock-fighting.
As to the morality vis-à-vis androids:

Certainly various animals, including canids and birds, fight one another in the wild for dominance, food, territory, etc. While they rarely directly kill one another, any serious wound in the wild amounts to a slow death sentence from infection, untreated broken bones, etc.

Most folks think animals lack moral agency. Yes, animals at that level of intelligence feel pain. But no, they don’t *understand* pain nor have any concept of the rightness or wrongness of inflicting it. In the context of a staged dog-fight, when dog A injures dog B, we ascribe blame or responsibility for the injury not to dog A, but to dog A’s trainer/owner. And we label that wrong. Because the human is de facto harming dog B. The fact a non-sentient dog is harmed does have moral content. And not in a good way.

Arguably, an android able to give a quality simulation of human behavior is more, not less, likely to have at least dog-like levels of awareness. In many ways dogs are alien life forms to humans. What makes them tick is not what makes us tick. An android able to mimic a human would, despite the different tech under the skin, in many ways be *less* alien than is a dog.
As to the morality vis-à-vis human society:

What do we think of dog-fight sponsors and trainers? Heck, what do we think of dog-fight spectators? At least in this era, we think they’re immoral scum. Folks who lower the moral tone of all they touch. In a word, they’re deplorable.

Which do we find better, a society full of such people, or a society wholly lacking in such people? There’s no contest. There’s no serious argument that being a dog-fight person contains within it some countervailing good or beneficial moral side effect.

Folks can argue that various religions are mixed bags, containing some moral good (e.g. concern for right behavior & the welfare of one’s fellow humans) mixed with some moral shortcomings (e.g. emphasis on Us vs. Them). Nobody can straight-facedly make the argument that dog-fighting contains elements of virtue mixed with the vice.
My bottom line: Westworld is unambiguously bad for society and unambiguously bad for the androids. I’ll state, without defense, that something unambiguously bad at the two ends of the telescope is also unambiguously bad for the middle: the human(s) who would desire to partake of the badness.

I think the OP is lacking in several ways. But I’ll just comment on the amateur psychology assumption that murdering or raping androids would serve as a pressure valve. It could just as easily function as normalizing the desire to rape and murder. After all we’re not talking about games here, we’re talking about a situation where, according to the OP, “Some even argue that even if the androids do feel it is all for the better in making the experience that much more authentic for the customer.”

If it was expected of rich assholes that they’d enjoy killing and raping the realest human simulacra around, there’d be that many more who just couldn’t resist the temptation to see if it was just as good with a real human being.

I used to think that, but the question is probably more complex. The ‘mind’ of the android might be a huge database of responses, copied from real human responses, which are expressed in appropriate situations according to a strict set of rules. In a suitably advanced android, the number of suitable responses could be so high that you would never see them repeated in an average human lifetime (how many humans can say that they never repeat themselves?).
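
To make the “huge database of responses” idea concrete, here’s a minimal sketch (the situations, lines, and names are purely my own illustration, not anything from the show or from any real system): the android’s ‘mind’ is just a table keyed by situation, with a selection rule so that repeats are rare.

```python
import random

# Hypothetical response table: each situation maps to many pre-recorded
# human responses. A real system would hold millions of entries per key;
# this is only an illustration of the structure.
RESPONSES = {
    "greeted": ["Morning!", "Howdy, stranger.", "Well, hello there."],
    "threatened": ["Please, don't.", "You don't want to do this.", "Stay back!"],
    "injured": ["Argh!", "Why...", "Somebody help me!"],
}

def respond(situation: str) -> str:
    """Pick a canned response for the situation -- no internal state,
    no experience of anything, just selection from a list."""
    options = RESPONSES.get(situation, ["..."])
    return random.choice(options)

print(respond("threatened"))
```

With enough entries per key you might never hear a repeat in a lifetime, yet nothing in that loop senses or understands anything; it only selects.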

Another way to do it is to connect the android to a remote computer, programmed to ‘author’ situations that might occur in real life; using data drawn from literature perhaps, or from social and psychological observations, this ‘author-bot’ could invent a character for the android which was as believable as a character from fiction - Harry Potter, say, or Elizabeth Bennet. It may be the case that such an ‘author-bot’ would need to be independently sentient - but the android would not be, and you could kill it as many times as you like without harming the author-bot significantly.

If there is any ethical problem with killing androids it is that such a method of entertainment encourages the worst sort of human behaviour. Customers of Westworld might start carrying their behaviour over into the real world; possibly by killing each other, or possibly by dressing up as cowboys and prostitutes and acting like them. Neither should be encouraged.

If you think you could simulate a fairly realistic human interaction with just a big lookup table and a couple of random numbers, then how do you know that’s not how real human beings think?

Let’s take as stipulated that human beings have real feelings, feel real pain, and that it would really be morally wrong to inflict real pain and real suffering on them. Otherwise, we might as well just say that rather than torturing androids you could ethically torture humans for fun.

But if you can’t tell the difference between a human and an android, how in the world can you assert that there’s an ethical difference between them? Because you have a pretty good idea why the android acts the way it does–you designed it after all–and you don’t know why the human acts the way it does? Does that mean that once we learn enough about human consciousness to really understand how the human brain works, then we’re allowed to treat human beings as meat robots? We understand the human brain and it’s all stimulus-response, stimulus-response, so now human beings have lost their claim to ethical significance.

Or maybe you think that could never happen, because there’s some magic happening in the human mind? Then it seems to me that’s the same thing as asserting that you could never have a realistic interaction with an android, because the android will always lack that mysterious pixie dust. If you don’t need the pixie dust to act like a human being, then pixie dust can’t be the reason human beings act like human beings.

Of course, my position is that there’s no pixie dust. And so if you have an android that can fake acting like a human being, then that android is ethically equivalent to a human being, because if human consciousness can be faked then human beings are probably just faking it as well. Or rather, that there’s no meaningful distinction between fake consciousness and real consciousness.

We’ll be able to figure out the difference between a look-up table and a human consciousness when (and if) we figure out how a human mind generates consciousness. We know the human mind isn’t a look-up table because there isn’t enough data transferred into the zygote from the parents to include such a vast number of responses; therefore real human responses must be generated spontaneously in some way we don’t understand.

Creating a comprehensive lookup table is a completely different process to creating a human mind through experience and enculturation; the Turing Test is almost certainly not sufficient to tell the difference between a very good fake and the real thing. That doesn’t mean that some other, more rigorous test would not be sufficient.

It seems pretty obvious that the human mind isn’t a giant list of if-then statements. But so what? The point is, just because an android doesn’t create conscious thought in the exact same way that a human might, that doesn’t prove that an android isn’t conscious.

And I dispute that there’s a difference between a really good fake and the real thing. If it’s so good you can’t tell the difference, is there a difference? What is that difference?

My point is that our human brains don’t actually work the way we think they work. Our unitary consciousness is an illusion. It’s turtles all the way down. Human consciousness is a really good fake–which is the same thing as being real.

Now, one good reason for this would be to study mental illness by simulating it.

(This is one of my pet theories – not seriously propounded – in the “Life is a Sim” threads: maybe we’re sims, run to study certain kinds of aberrant behavior.)

If you had a computer program that could fully emulate a human mind, then much could be learned about depression, bipolar behavior, schizophrenia, and other ailments by tweaking the program. It could lead to treatments and cures. But, meanwhile, it would be inflicting misery upon an actual mind. The morality gets ugly!

Well, I daresay Realdolls will get more realistic, because there’s a market in it, but what’s the market in taking an AI and adding sexuality and/or simulated emotional responses to abuse?

I don’t want to allow people to murder and rape even non-sentient robots if those robots are at least capable of convincingly emulating pain or fear, and it’s got jack to do with the robots*.

It’s got to do with not encouraging the kind of evil fucks who would do that kind of thing as soon as given a plausible cover.

  • although, if they are sentient - like the POV/“activated” ones in Westworld are IMO - that will factor in

Some people get off on it?

Sadly, there has already been at least one such game.

A lot of questions about artificial life are hard to answer. This one isn’t, IMO.

I can program my computer to say “OUCH!” every time I press the Space bar. It is not feeling pain. In order for something to feel pain, you have to program it to sense pain somehow. If you just program a machine to make the right screams and pleading or whatever to whatever you’re doing to it, it’s not feeling pain, it’s just following its instructions. Knife goes here (input), you go “Argh!” (output).

Now if an android is programmed to detect pain in any sense, then it becomes wrong. But as long as it’s strictly an input/output situation that the machine has no opinion on, there’s no moral quandary. And yes, you can already rape and murder computer characters. Programming them to SEEM more convincing won’t make them alive. For that, you actually have to do a lot more work under the hood. Imagine your game has a fire in the middle of the street. Do NPCs stay away from the fire because they are simply programmed to avoid it (if they walk right into it they are CERTAINLY not persons), or do they avoid it because it hurts and they don’t want to hurt?
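
A toy sketch of the distinction being drawn here, purely my own illustration rather than any actual game’s code: the first NPC maps stimulus straight to a canned output, while the second keeps an internal damage signal and a learned aversion, and chooses its behaviour to avoid expected harm.

```python
# Purely scripted NPC: stimulus in, canned line out. Nothing is sensed.
def scripted_npc(stimulus: str) -> str:
    return {"knife": "Argh!", "fire": "OUCH!"}.get(stimulus, "")


# NPC with a crude internal damage signal and a learned aversion.
class SensingNPC:
    def __init__(self):
        self.pain = 0.0          # current internal signal
        self.expected_harm = {}  # stimulus -> harm learned from experience

    def sense(self, stimulus: str, damage: float) -> None:
        # Stimuli raise the internal signal and update what the NPC
        # expects that stimulus to do to it next time.
        self.pain += damage
        self.expected_harm[stimulus] = damage

    def act(self, nearby: str) -> str:
        # Behaviour is chosen to avoid expected harm, not because a
        # designer hard-coded "stay away from fire".
        if self.expected_harm.get(nearby, 0.0) > 0:
            return "move away from " + nearby
        return "carry on"


npc = SensingNPC()
npc.sense("fire", 3.0)        # it got burned once...
print(npc.act("fire"))        # ...so now it steers clear of the fire
print(scripted_npc("knife"))  # the scripted one just emits "Argh!"
```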