Would it be ethical to create an AI capable of feeling suffering?

Jasmine,

Agreed, it would have no survival value for the computer.

True. If you look at OP’s other threads, you’ll see that this thread is a contrived attempt to have others confirm a delusion that we’re living in some kind of “I have no mouth and I must scream” universe. See, if it would be unethical to create an AI with the capability to suffer, then it was immoral for the Great JuJu to create us with the capability to suffer. Ergo we’re living in an infinite vale of tears ruled by Khorne.

This would only be a valid conclusion if all circumstances surrounding the creation of a sophisticated AI and the creation of a biological being with a soul were identical.

MichaelEmouse,

Oh, I missed the connection.

There is no possible ethical issue regarding computer intelligence. The interpretation of pain is done by the human observer. The computer is not capable of feeling pain.

It’s a machine!

I very much doubt that it’s possible to create an AI that doesn’t feel negative emotions - in the sense that a negative emotion is just an internal state change recognizing and reacting to an existing or anticipated undesired outcome. Without the ability to react to undesired outcomes your AI will never be able to pursue good outcomes. So “unhappy” AIs are basically a given, presuming you want them to do anything at all.

What you’re probably not going to see are AIs that are debilitated by their negative emotions. Your car may be quite averse to the idea of getting T-boned at intersections and thus approach them very cautiously, and might give erratically swerving cars the same fearfully wide berth that I do, but I doubt you’ll see many cars throwing tantrums and refusing to leave the garage until you apologize to them. You probably won’t see too many cars that have a panic attack on the freeway and decide that getting off the road by any method is better than staying on it, regardless of how many pedestrians they have to plow through. You probably won’t have too many cars experiencing existential despair and becoming suicidal after reading too many billboards from hellfire-threatening churches decrying them for not having souls.

Or then again, maybe you might.

In Terminator 2, while John Connor was digging bullets out of the Terminator, he asked him: “Does it hurt when you get shot?” The answer: “My body senses injuries. The data would be called ‘pain’.”

I think that was good writing because I think that’s pretty much all a machine could do in the pain department.

“What do I care for your suffering? Pain, even agony, is no more than information before the senses, data fed to the computer of the mind. The lesson is simple: you have received the information, now act on it. Take control of the input and you shall become master of the output.”

— Chairman Sheng-ji Yang, “Essays on Mind and Matter”

There’s no such thing as ethical or not ethical in a holistic, universal sense.

The argument that it would be ethical is that it’s no worse than creating a baby and we do that.

The argument that it wouldn’t be ethical is that you can’t guarantee its ability to live as it wants or end when it wants. In which case, creating a baby is unethical as well.

Ethics are, in part, driven by human instinct. If we were a species of advanced crocodiles, it wouldn’t be seen as unethical at all to abandon our young to the wilderness as soon as they are born. Ethics are a compromise among the members of our species on the “rules of the game” by which we all interact in competition and cooperation toward our instinctual goals. It’s not a question of good or bad, but rather of what is “fair play” or “cheating”.

AI, unless strongly modeled on humanity, would be playing a different game. It would have different instincts baked in. Whether its needs and aims mesh with ours sufficiently for us to consider it all fair play depends on what we made it like. If we did model an AI on the human brain - say, scanning a brain into a computer and continuing it on through time in simulation - then that would be torture. It wouldn’t have the ability to live and work in the real world, despite its instinctual needs. But if we make an AI that is happy in all instances except where it fails to learn something as fast as it should, well…it’s hard to say that’s any worse than making a kid go to school.

Ethical to ensure it can understand and ‘feel’ suffering and other emotions? Certainly…not only is it ethical, but it’s going to make the AI useful and possibly ensure it doesn’t do really terrible things. To MAKE it suffer, that would be another matter. Basically, we make bio-sentient AIs all the time, the vast majority of whom can feel suffering, and there is no ethical dilemma there. Deliberately making them suffer is where you start getting into unethical territory.

I would posit that the question is meaningless. Intelligence and subjective biological side effects like “suffering” are entirely unrelated in principle, even though they’re intricately entangled in our own biological heritage. As artificial intelligence evolves it will develop a form of sentience, albeit a form alien to our subjective understanding. None of it will have a biological legacy or biologically based survival drivers.

Are you presuming that suffering is limited to physical pain?

No, though that’s the most primal form of it. What I’m presuming is that what we call “emotions” in their purest sense are evolutionary biological adaptations for the survival of the individual or the species. Without passing judgment on their utility to our species or any other, I think it’s fair to say that they are orthogonal to the matter of intelligence, “artificial” or any other kind.

So to ask about an AI “capable of suffering” is like asking about an AI “capable of bleeding”. Sure, you could build one, but the two things have no intrinsic connection.

Actually this kind of misguided thinking even extends to misjudging humans of a particular intellectual bent. A reporter once asked Buzz Aldrin what he would have done if the LM ascent engine had failed to ignite, leaving him and Armstrong stranded forever and doomed to die on the moon. The intent, I assume, was to elicit some deep philosophical and emotional ruminations from the second human to ever walk on the moon. Aldrin replied, “I would have spent the time trying to fix the damn problem.”

But “not liking something” is directly aligned with artificial intelligence, because without preferences there can be no decisions. It’s entirely appropriate to describe a machine as “not happy” when it’s unable to carry out its function - presuming it has enough awareness of its own state to recognize that its preferences aren’t being satisfied, which I would expect of anything I consider to be an AI.

Once you accept that an AI can be unhappy about a situation, “suffering” is just a matter of degree.

I completely disagree. If you’re trying to characterize goal-seeking behavior as somehow emotional, you’re just anthropomorphizing AI, which is something that humans are extremely predisposed to do – we’ve done it ever since we were little and considered our Teddy bears to be miniature versions of ourselves. But they’re not.

To be clear, I’m fully convinced that AI will evolve into emergent sentience and then take us into realms of superhuman intelligence we cannot presently imagine. But they’re not going to do it in ways that bear any relationship to our evolutionary biological legacy.

I don’t think preference or emotion has a single solitary thing to do with biology. They’re mental states. I don’t care if the mind is silicon; if the mind operates on a system of rating its status relative to a goal that it seeks to accomplish, then I think it’s correct to call it “unhappy” if it is in a state where it is unsatisfied with the current situation, and knows it.

Consider a simple ‘program’ called “Lizard”. If Lizard is in a spot with a temperature of at least forty degrees, it doesn’t move. If Lizard is in a spot with a temperature of less than forty degrees, and it detects a nearby spot with a higher temperature than the one it’s at, it moves toward the detected spot with the highest temperature. Repeat indefinitely.

I’m of the opinion it’s entirely correct to say that the program is “unsatisfied” or “unhappy” if it’s colder than forty degrees, and “satisfied” or “happy” when it’s at least forty degrees. Sure, its criterion for satisfaction is simple and knowable, and not the same as ours, but that’s immaterial. Under its own criteria, the terms apply.
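For concreteness, here’s a minimal sketch of Lizard in Python. The forty-degree goal and the shape of the sensor input are my own illustrative assumptions; the point is only that the program has an explicit metric for “satisfied”.

```python
# A minimal sketch of the "Lizard" program described above.
# The forty-degree goal and the sensor format are illustrative assumptions.

GOAL_TEMP = 40.0

def is_satisfied(current_temp):
    """Lizard's entire 'emotional' state: one comparison against its goal."""
    return current_temp >= GOAL_TEMP

def step(current_temp, nearby_temps):
    """Decide what to do this tick.

    nearby_temps maps a direction (e.g. "north") to the temperature
    sensed in that direction.
    """
    if is_satisfied(current_temp):
        return "stay"  # "happy": goal met, no need to move
    # "unhappy": head for the warmest detected spot, if it beats this one
    direction, temp = max(nearby_temps.items(), key=lambda kv: kv[1])
    if temp > current_temp:
        return "move " + direction
    return "stay"  # nothing warmer detected; still "unhappy"
```

For example, `step(35.0, {"north": 38.0, "east": 42.0})` returns `"move east"`: the program knows its current state falls short of its goal and acts to change it.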

The alternative is to say that the terms don’t apply because Lizard has a mind that works differently than ours. The problem with that is that I can’t be sure that your mind works the same as mine. And since I don’t know that, does that mean I can’t describe your mental states as happy or sad?

I think the first clue to how wrong this is lies in the appearance of the words “preference or emotion” together, as if they were interchangeable or even remotely the same thing. “Preference” or synonyms thereof are what I mean by goal orientation. “Emotion” is a biologically induced phenomenon orthogonal to intelligence.

Your definition of “unhappy” is so overly broad that it’s entirely meaningless. We frequently speak of our computers as being “unhappy” when they’re not functioning properly. Does anyone mean that literally? Or, to take something like your “Lizard” example: I occasionally play chess against my computer or tablet, and when I get it cornered (either because I’m playing it at a low level or because I’m using the assist of another computer – I’m not very good at chess myself) it will often say “I resign”. I then exercise the option to continue play. Do you think the computer is “unhappy”, perhaps even despondent, as it flails around in a hopeless situation? Indeed one might even be tempted to think that it is, because in those situations it may sometimes make irrational moves - moves that one might be tempted to interpret as saying something like, “here, take this piece, I don’t care any more!” But you must surely recognize that such an interpretation is an extreme of anthropomorphizing, a laughable interpretation of what’s really going on in its algorithms and heuristics.

Again, I want to be clear lest I be misinterpreted. AI doesn’t just do “what it’s programmed to do” except in a meaninglessly narrow technical sense. AI has the power to surprise us with novel strategies and insights, to amaze us and teach us with remarkable behaviors as a result of its emergent properties. It ultimately has the power to be far more intelligent than we are. But “emotions” are totally not part of this paradigm, except in our imaginations and our natural tendencies to anthropomorphize everything as being just like us.

I’m a computer programmer. I know quite a bit about what goes on in algorithms and heuristics. And I crafted that Lizard example very carefully, to ensure that there was an actual metric that the Lizard was actually using to assess the quality of its current status.

Your chess program also presumably has a metric to assess the quality of its current status, and when that metric falls below a certain point, when it is not “happy” or “satisfied” or “>-10” or “whatever” with its status, then it changes its behavior and shows the “resign” message.

When you reject its resignation, this doesn’t change any internal metric, so it is still “unhappy”, but it is not more unhappy than it was before it resigned, because as far as that metric is concerned nothing substantive has changed.

Well, probably. If it in fact does start playing irrationally after a resignation was rejected, then there’s a possibility that upon rejection of the resignation the metric is changed - that its criterion for what is “good” is altered in such a way that it is no longer inclined to resign. Heck, perhaps it’s changed such that losing is its new goal, because the designer figured that if you rejected a resignation you were going for a trouncing. I don’t know; I didn’t write the program. (If I had written the program, then I would have just set a flag preventing it from resigning again: it would still have the same metric for assessing the game state, and would still consider the game state to be far from optimal, and would still use the same algorithm for choosing its moves that it did before - except that resignation would no longer be an allowed option. And maybe that’s what’s happening - perhaps its ‘irrational’ moves aren’t as irrational as you think - longshots, not surrender. Who knows?)
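For what it’s worth, the “just set a flag” design I’m describing would look something like this sketch; the evaluation function and the resignation threshold are hypothetical stand-ins, not any real chess engine’s internals.

```python
# Sketch of the "set a flag" approach: rejecting a resignation changes which
# options are allowed, not how the position is assessed. evaluate() and
# RESIGN_THRESHOLD are hypothetical placeholders.

RESIGN_THRESHOLD = -10.0

class ChessAI:
    def __init__(self, evaluate):
        # evaluate(position) -> a score from the engine's point of view;
        # supplied by the caller, since its details don't matter here.
        self.evaluate = evaluate
        self.resignation_rejected = False

    def choose(self, position, candidate_positions):
        """Resign if the position looks hopeless; otherwise pick the best option."""
        if (self.evaluate(position) < RESIGN_THRESHOLD
                and not self.resignation_rejected):
            return "resign"
        # Same metric, same selection rule as before; the game state is still
        # judged to be just as bad, resignation is simply off the table.
        return max(candidate_positions, key=self.evaluate)

    def reject_resignation(self):
        self.resignation_rejected = True
```

The key point of the design is in the second comment: the program’s assessment of its situation never changes when you refuse its resignation, only the set of moves it’s allowed to make.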

Define “emotion”.

(And then watch me reject your definition.)

Any AI with self-awareness and self-assessment capabilities will, as part of assessing its situation, develop metrics for measuring its internal state with regard to its objectives and its image of an ideal world state. If you don’t want to use “emotional” terms to describe the result of those self-assessments, then what terms would you prefer to use? Why are those terms better?

Imagine I make a machine that can detect certain chemicals in the air that correlate with bacon frying.
Does it automatically follow that that machine is experiencing the smell of bacon, the way I might?

The answer is “no”.

If we made a machine that could perceive smell like a human, then it might behave the same as a human would.
But the converse doesn’t necessarily follow: matching behaviour doesn’t imply that the internal perception is the same.

I don’t think he’s characterizing goal-seeking as emotions. Rather, he’s characterizing emotions as monitors of internal states. An AI would necessarily need to monitor some of those same internal states - therefore, it is feeling those emotions.

Consider it this way. As a human, I’m aware of my own existence, and have a desire to continue it. If I’m in a state in which I perceive a threat to my continued existence - say, I’m in an airplane that suddenly loses power - I feel fear.

Now, let’s posit “aware of its own existence, and desires to continue it” as a baseline for “true” intelligence. If a “true” AI is in a state where it perceives a threat to its continued existence - say, it perceives an imminent power failure in a critical subsystem - what do we call that state? Do we call it fear, even though its subjective experience of that state would not include the markers tied to human anatomy, such as a sinking sensation in the chest, or a tightening in the guts? Or is “fear” the specific response to certain stimuli as experienced by Homo sapiens, and maybe some higher mammals?

It seems to me that the former is the most useful definition, particularly when you start introducing the concept of non-machine intelligences, such as aliens, or uplifted terrestrial species.
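If it helps to make that concrete, here’s a toy sketch of “fear” as nothing more than a monitor over an internal state; the power-reserve sensor and the ten-percent threshold are invented purely for illustration.

```python
# Toy sketch of "emotion as a monitor of an internal state", per the argument
# above. The power-reserve sensor and the 10% threshold are invented.

LOW_POWER_THRESHOLD = 0.10  # fraction of reserve power remaining

def assess_self(power_reserve, goal="continue existing"):
    """Label the agent's state relative to its goal of continued existence."""
    threat = power_reserve < LOW_POWER_THRESHOLD
    # Whether the label below deserves the word "fear" is exactly the question
    # being argued over; the monitoring machinery itself is trivial.
    return {
        "goal": goal,
        "threat_detected": threat,
        "label": "fear" if threat else "content",
    }
```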

But it’s got no nose. How does it smell?