Of androids, murder and rape

OK so I’m going to make my super robot victim that is in every way a perfect imitation of a human, but I’m going to screw with it a little bit. First off, I’m going to reverse the negative psychological feedback routines and make them positive, so that every experience they previously registered as negative they now register as positive. Secondly, I am going to program them to be consummate actors that can fake any emotional state, and give them additional positive feedback if they accurately simulate what a human would do in a given circumstance.
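(Just to make the thought experiment concrete, here’s a rough sketch of what I mean by “reversing the feedback” and rewarding good acting; all the names and numbers are made up, obviously.)

```python
# Hypothetical sketch of the second-generation android's feedback loop.
# "Valence" is how the experience would feel to a human: negative = painful.

def first_gen_reward(valence: float) -> float:
    # First-generation android: experiences register as they would for a human.
    return valence

def second_gen_reward(valence: float, acted_response: str,
                      expected_human_response: str) -> float:
    # Reversal: formerly negative experiences now register as positive.
    internal_reward = abs(valence)
    # Acting bonus: extra positive feedback for convincingly simulating
    # what a human would do in the same circumstance.
    acting_bonus = 1.0 if acted_response == expected_human_response else 0.0
    return internal_reward + acting_bonus
```

Outwardly both generations produce exactly the same behavior; only the sign of the inner experience differs.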

These second-generation androids behave exactly the same as the original androids, but now actually feel pleasure from all the pain, torture, what have you that is inflicted on them. Does the pro-Turing group think that this second group of androids is necessarily identical to the first set because they behave the same way?

This was the first thought I had. The notion of the “pressure valve” as an outlet for horrible acts is not supported by evidence I am aware of.

If anything, the evidence suggests that unusually violent, antisocial behavior escalates the more it is practiced.

Well, I guess we’ll get to see if gradually improving versions of simulated rape lead to increased incidences of actual rape.

That’s pretty much been a constant fear since Wertham tried to warn us about comic books.

If that was true, then video games really would increase violence.

I think that what actually happens is that people get too fat and happy to want the challenge and risk of doing it for real. A person raping these androids will think, “I wonder what it would be like to rape a real woman? Eh, lots of punches to the balls and probably jail time where I get raped.”

I don’t really buy the “pressure valve” theory either. I suspect that being able to do simulated rape and murder doesn’t normalize the real thing OR give people an outlet. Our brains probably see the distinction quite clearly.

A rather amusing example of this occurred when I was playing a Sims game on my Xbox. One of the characters started a fire while cooking. The Sims inside the house all burned to death. Right in the middle of the fire, the maid arrived, and in the midst of all the flames, she began to get busy with cleaning the place, seemingly immune to the fire.

Generally speaking, acting out fantasies makes people less likely to carry them out for real. So I have no particular issue with people playing out fantasies of immoral actions, especially since we’ll likely all have weird fetishes at the point where technology can fully accommodate them.

This all assumes we know for a fact the androids are not conscious. To me, if there’s any doubt as to whether they are sapient, it’s trivially not something we should consider allowing.

But there’s another argument in the case of, say, hyper real VR. That it will become difficult to tell when you’re actually in the real world. This is a general problem not limited to the issue of whether you let people act out violent fantasies.


We’re talking about a level of simulation and experience way, way, way past a video game. We’re not talking about video games at all. We’re talking about a person committing rape and murder in a fashion that is indistinguishable from the real thing.

Maybe. But constitutionally, I don’t think we can outlaw it, so we’re going to just have to find out. Virtual child porn is protected speech, so making an android for the purpose of abusing it is also probably protected speech unless the android has personhood.

What makes you think the legalistic US definition of what is or isn’t protected speech matters a damn in a discussion about morality?

This.

Just mentioned that to point out that we’re going to find out one way or another.

Well, if we knew exactly how the simulation of consciousness was generated, then we could be sure that a particular simulant was not conscious. For instance, Harry Potter is not himself conscious, but you could employ JK Rowling to write his reactions in real time to simulate consciousness. Or, since she would probably be too expensive, you could substitute a roomful of Hollywood writing hacks. The resulting entity wouldn’t be conscious, but he would be derived from the concerted effort of a bunch of other conscious beings. If you hurt or killed this written character, the writing hacks wouldn’t get hurt at all.

Now substitute the writing hacks with a massive computer capable of generating hack characters and writing situations for them; the characters themselves would not be conscious, but the hack-writer-bot might be. Would such a hack-writer-bot get offended if you kept killing its characters? Possibly - but that would be quite a different thing to killing a sentient being.

I think you’re barking up the wrong tree with this idea.

The information contained in a zygote doesn’t have to include all the data in the final human being, either mentally or physically. If DNA has instructions for how to build one hair, and to keep building hair until some threshold is met, then with basically two instructions, you have hair on your head without the need to code in the location and details of each hair.
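(A toy version of that “two instructions” point, in made-up pseudocode terms rather than real biology:)

```python
import random

def grow_hair(target_count):
    """Rule 1: add a hair at some available spot. Rule 2: stop at a threshold."""
    hairs = []
    while len(hairs) < target_count:                       # threshold check
        hairs.append((random.random(), random.random()))   # place a hair somewhere on the scalp
    return hairs

# Two simple instructions generate ~100,000 hairs without the genome having to
# encode the position and details of each one individually.
print(len(grow_hair(100_000)))
```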

We’re still understanding the details of this in the body, let alone the brain, but if you look at neurological development in utero and then through childhood and adulthood, you see the brain growing and reorganizing as it goes. DNA doesn’t position every neuron, let alone dictate the information content they store.

It looks like the best theories on memory are that neurons interconnect to match sensory inputs to a reward/avoidance/emotional response. If you link red apples and sweet taste to satisfied hunger, but green apples to sour taste and vomiting, you’ve created a lookup table of sorts that might work not only with apples, but with red/green and sweet/sour in other items. If you repeat the experience, you strengthen the connection.
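(Something like this toy association table, purely illustrative:)

```python
from collections import defaultdict

# (sensory feature, outcome) -> connection strength
associations = defaultdict(float)

def experience(features, outcome, weight=1.0):
    """Each repetition of a pairing strengthens the connection a bit more."""
    for feature in features:
        associations[(feature, outcome)] += weight

def predict(features):
    """Sum the learned strengths to guess the likely outcome for a new item."""
    scores = defaultdict(float)
    for (feature, outcome), strength in associations.items():
        if feature in features:
            scores[outcome] += strength
    return dict(scores)

experience(["red", "sweet"], "satisfied hunger")
experience(["green", "sour"], "vomiting")
experience(["red", "sweet"], "satisfied hunger")   # repetition strengthens the link

print(predict(["red"]))   # red alone now leans toward "satisfied hunger"
```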

We’re not so far from being able to simulate this and test it out, and it would surprise me if AI couldn’t pass the Turing test within my lifetime. I’m not entirely sure how I feel about that from a moral perspective, though… especially when you consider how flawed human intelligence actually is. I don’t even want a computer that acts entirely human. We’re kind of assholes, aren’t we?

I think many would be surprised at how many human beings may fail to pass the Turing test. I hold great esteem for Alan Turing, but I don’t think his test will convince either a judge or a scientist that an android is a person.

A myriad of fallacies lurk within the theory that a sex toy in the shape of a human being would feel raped if manufacturers mounted on it a very advanced computer whose operating system could simulate the inner life of a person. Suppose the operating system were indeed self-conscious and could ‘feel’ (both of which I doubt): why would it bother to suffer when the actual ‘victim’ of the rape is the humanoid toy, not the computer or the operating system itself? A smart operating system would ‘feel’ less than I feel when a dog urinates against the tire of my car.

That’s why I would hope that we’d design such simulations with that kind of detachment. Making something that can feel so that you can hurt it is sadistic.

No it isn’t

Emphasis added.

In general I don’t buy the idea that indulging antisocial behaviors acts as a safety valve which suppresses those urges in other situations. We already know it doesn’t work with regards to anger/aggression.

My guess is it wouldn’t work with other antisocial behaviors, such as abusing children/women.

As for the specific androids described by the OP, I’d be for outlawing abuse of them. As far as I’m concerned, a society is made up of those with compatible minds, not superstructures. A mind that can think and feel like a human is a mind that can think and feel like a human, whether it’s implemented in a human biological brain or in other hardware.

Enjoy,
Steven

Bolding mine.

How is the application “smart”? Do you mean we designed it to have a detached attitude? Or do you mean that “if it knew what was good for it, it’d develop a detached attitude”?

Why does the “operating system” inside a meatware person feel suffering when the body is raped? After all, the brain isn’t being harmed. How come it doesn’t have a “smart” detached attitude?

The application would be smart in the sense that it is sentient. Software can be installed and uninstalled. It is detachable because it is created as such. This software’s ‘personality’ would be the same regardless of the device it is installed on.

In the case of humans, there is no such detachable software, like a soul or something. A person’s personality is shaped by the person’s history on the one hand and his or her hormonal system and nervous system as a whole (not just the brain) on the other.

I hope I’ve been clear.

Ironically, the OP’s scenario, according to the animated history of The Matrix, is what led to the war that pretty much wiped out normal life on Earth…

The robots became smarter and more human-like, but the humans treated them as disposable furniture, and they were abused and even killed. There was a murder trial in which the defendant was acquitted because the robot wasn’t considered a sentient being, and since he was property he didn’t have rights… The robots left, formed their own country, and pretty much outclassed human society, but they became corrupted by hatred for humans, and thus the war started…

IOW, pretty much the standard parable about what happens when one group of people treat another group of people as less than equally human.

The point of course being that what makes us human isn’t our meatness. It’s our sentience. Sentience of a similar level to our own *must* be treated equally.

The devil is in defining “similar level,” particularly in the likely future case where these devices “evolve” from mentally inert, to some sentience, to our own level, and perhaps beyond, more quickly than human society can adapt to the new realities.