This is a pure thought experiment, but let’s say that at some point in the future scientists are able to create AIs capable of feeling all the negative states that human beings can: pain, suffering, fear, sadness, deprivation, frustration, anxiety, loneliness, grief, discomfort, hunger, sickness, fatigue, worry, despair, to name but a few. It is ‘sentient’, for all intents and purposes. Could creating this AI be ethical under any circumstances?
Yes. If the AI caused suffering in other biological life forms, it could suffer itself, as a way of knowing that the behavior is bad. A form of mirror neurons to create empathy would be a good idea for a superintelligent AI. Of course, it would probably figure out how to shut off its empathy if it truly was superintelligent.
Plus all human behavior leads to suffering in sentient life in one way or another on a long enough timeline.
But still.
It might be that having an ability - any ability - is inherently useful. Maybe in ways that we don’t fully understand.
Why not? People create other beings capable of feeling suffering every day and a lot of them spend quite a bit of time and effort making those beings feel as bad as possible. I’ll worry about the ethics of creating an AI with full range of emotional capability after we’ve figured out how to keep people from torturing and damaging kids.
100 PRINT "IS THIS OUTSIDE NORMAL PARAMETERS?"
110 INPUT A$
120 IF A$ = "NO" THEN RETURN
130 IF A$ = "YES" THEN GOSUB 500: REM SADNESS
There, I did it in BASIC, FFS.
Or, if you prefer,
200 PRINT "WHY DO YOU SAY THAT?": GOSUB 600: REM ROGERIAN RESPONSE
ELIZA did it before you were born, probably.
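For anyone who never saw it, the whole Rogerian trick is just keyword matching plus pronoun flipping. A toy Python sketch of that subroutine (invented rules, nothing to do with the original ELIZA source):

# Toy ELIZA-style Rogerian responder: keyword match, reflect pronouns, fall back
# to a canned prompt. Invented rules; nothing to do with the original ELIZA code.
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "myself": "yourself"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r".*", ["WHY DO YOU SAY THAT?", "Please go on.", "Tell me more."]),
]

def reflect(phrase):
    # Swap first person for second person: "my creation" becomes "your creation".
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(line):
    for pattern, answers in RULES:
        match = re.match(pattern, line.lower().strip())
        if match:
            return random.choice(answers).format(*(reflect(g) for g in match.groups()))

print(respond("I feel like my creation will resent me"))

Feed it “I feel like my creation will resent me” and it dutifully asks why you feel like your creation will resent you.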
It is not hard to think of why empathy is inherently useful in a social species, and there are advantages to being a social species.
But if it truly were empathetic, it wouldn’t want to.
I sense injuries. The data could be called “pain” - T-800
Pain is mere information, one datastream among many, no different from joy, feeling full after a good meal, or having to pee right nooooow. It’d be useful for an AI out in the world to know when its components are damaged, so “pain” is in. Sickness, hunger (for what?), etc. are all also information that something outside the AI is needed, missing or overflowing.
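Put concretely, a toy Python sketch of “pain as just another telemetry channel” might look like this (all the names and thresholds are invented for illustration):

# Toy sketch: "pain" as one more telemetry stream, no different in kind from a
# low battery or a full disk. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Telemetry:
    actuator_strain: float   # 0.0 = fine, 1.0 = component about to fail
    battery_fraction: float  # 0.0 = empty, 1.0 = full
    storage_used: float      # 0.0 = empty, 1.0 = full

def urgent_needs(t):
    # Turn raw streams into "something is needed, missing or overflowing" flags.
    needs = []
    if t.actuator_strain > 0.8:
        needs.append("pain: reduce load, seek repair")
    if t.battery_fraction < 0.2:
        needs.append("hunger: recharge")
    if t.storage_used > 0.95:
        needs.append("have to offload data right nooooow")
    return needs

print(urgent_needs(Telemetry(actuator_strain=0.9, battery_fraction=0.5, storage_used=0.97)))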
Some would opine making kids is unethical, though
How would we know if an AI device was experiencing suffering? Because it output the response we programmed it to? We are very good at creating simulacrums, then believing they are life itself. Artificial intelligence is a long way from artificial emotion.
It’d be pretty useless to create an AI that didn’t have any motivations. And that’s what suffering is: A motivation to avoid something. Joy is just the other side of the coin: A motivation to do something. All of the undefinable qualia we associate with either emotion are just the way those motivations manifest themselves.
I suppose that you could, in principle, create an AI with only positive motivations, not negative, but I strongly suspect that such an AI would go insane very quickly, and creating an AI that would immediately go insane would be much more ethically dubious.
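In reinforcement-learning terms, that is roughly the sign of a reward signal. A minimal toy sketch in Python (invented numbers and action names, not a claim about how real minds work):

# Toy sketch: "suffering" and "joy" as nothing but signed motivation signals
# steering an agent's choices. All numbers and action names are invented.
import random

# Negative = a push away from something ("suffering"), positive = a pull toward it ("joy").
MOTIVATION = {"touch_hot_plate": -10.0, "sit_idle": 0.0, "recharge": 5.0}

estimates = {action: 0.0 for action in MOTIVATION}  # the agent starts indifferent

def choose_action():
    # Mostly pick the action with the best estimated signal, occasionally explore.
    if random.random() < 0.1:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for _ in range(500):
    action = choose_action()
    felt = MOTIVATION[action]
    estimates[action] += 0.1 * (felt - estimates[action])  # running average of experience

print(estimates)  # the "painful" action ends up shunned, the "joyful" one sought out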
I work on the very edge of this (AI) and see no ethical problems with it. They “feel” what we program them to feel. Disclaimer: I’m just a beginner and am using the previous work of others (who are a lot smarter than me). I’m programming behavior elements for autonomous UAVs. Currently these only exist as digital images in a large dome*, where I watch them interact.
I’ve modeled some human elements in the code such as aggression, hostility etc. Obviously these entities don’t really “feel” any of these emotions but it is interesting to see how they fare when I scale the various elements up or down. I often pit 2 or more against one another in the simulations and track to see which ones get killed first. As you might expect, when I scale aggression too high, the angry one gets taken out early in the battles. While watching this, one of my co-workers said: “Those whom the gods would destroy, they first make angry – and you’re their god.” Scaling aggression too low will cause them to lose also.
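Stripped down to a cartoon, the scaling experiment looks something like this (a toy Python sketch with made-up numbers; the real behavior code is far richer):

# Cartoon version of "scale aggression up or down and see which ones get killed
# first". Everything here is invented for illustration.
import random

def kill_prob_per_step(aggression):
    # More aggression means firing more often, but past a point the shots get sloppy.
    fire_rate = min(1.0, 0.2 * aggression)
    accuracy = max(0.0, 1.0 - 0.15 * aggression)
    return fire_rate * accuracy

def duel(aggr_a, aggr_b, max_steps=1000):
    pa, pb = kill_prob_per_step(aggr_a), kill_prob_per_step(aggr_b)
    for _ in range(max_steps):
        if random.random() < pa:
            return "A"
        if random.random() < pb:
            return "B"
    return "draw"

for a in (0.5, 2.0, 3.0, 6.0):
    wins = sum(duel(a, 3.0) == "A" for _ in range(2000))
    print(f"aggression={a}: wins {wins} of 2000 against a mid-range opponent")

Run it and the mid-range setting wins most of its duels while either extreme loses ground, which is the same pattern described above, just in miniature.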
I can also enter the dome and battle with them by manning the controls as another plane in the space (at the seat in the pic). It’s kind of weird when they detect me and turn to attack. I joke with the other engineers about how they have grown to hate their creator. They’re getting good enough now that I frequently lose.
Once, as an experiment, I modeled one as mentally unstable (for lack of a better term). I inserted another aircraft into the simulation that only he could see. He would turn suddenly and chase the “phantom”, and the unpredictability gave him some advantages. It was surprising how well he did in the multi-plane battles.
I have a really fun job.
*Not exact, but a decent representation.
I don’t think it’s quite as simple as some of the responses here seem to suggest.
Right now we understand very little about what suffering actually is. We don’t know how the brain generates such sensations (or even if “generate” is an appropriate verb). We don’t know how to quantify it. We don’t know if suffering is necessary, and to what degree, in a sentient mind that is to enjoy pleasurable or positive sensations and emotions.
We don’t know if the maximum a human can suffer is close to the maximum a hypothetical being could suffer.
So if it was in my power to make an AI capable of suffering, I would definitely pause. Because, for all I know, I’d be giving humans the ability, some day, to commit an atrocity far worse than the world’s worst genocide, with the simple click of a mouse.
If an actor in a film delivered those lines — word-for-word — I’d walk out and ask for my money back, complaining that I despise poorly-scripted foreshadowing that’s clunky and obvious. Like, I know they need to work in that exposition somehow; but the idea that a character would flatly explain all of it is just glaringly implausible.
And now, here I am in reality, with a dumb look on my face.
And all I was thinking was that I want pullin’s job.
Pullin,
Thanks for the information.
Are you using genetic programming techniques to allow the individual software systems to develop independently?
Crane
I am told that the brain interprets pain but does not experience it. So any emotions are simply definitions stored in our brains as lines of code, as presented by Dropzone.
We have all experienced this in our own brains. When you see an injury to an extremity (like a burn) you have time to think “that’s going to hurt” and then it does. The burn hits a nerve; the brain interprets it as pain; the brain tells the consciousness that it’s in pain. The burn is real but the pain is just a definition.
A computer is the top-level interpreter. It doesn’t feel anything.
Crane
I’m not exactly sure I get your meaning, but I’d say a brain both interprets and experiences.
If we experience anything, we experience physical pain. We don’t just *think* we’re in pain; we’re in pain.
But it’s true that the brain basically “decides” to be in pain. And that decision is based on multiple factors, not just the input of nociceptors.
But that makes it the same as all of perception: of colors, of smells, of sounds.
And all of it is equally hard to fathom right now.
How can we tell that an android that looks as though it’s in pain is actually experiencing an internal state anything like what a human experiences when in pain? What’s the difference? At this stage, fuck knows.
Mijin,
Good point - we cannot know the nature of pain a computer may experience.
However, for humans, it is just a signal sent from the brain to the consciousness. There are intelligent people who do not experience pain.
It’s all in your mind.
I think it would be unethical, because it would serve no compelling purpose. It could also be very dangerous, because those kinds of emotions produce a “secondary emotion”, ANGER. Anger often leads to violence. So, I guess I can add “dangerous” to “immoral”.