Would it be ethical to create an AI capable of feeling suffering?

I’ll readily concede the argument if you can provide a formal description of the heuristic methods by which one AI system will become sexually attracted to another, and wish to reproduce with it (one hopes quite properly only after a suitable period of courtship)! :smiley:

Or perhaps the attempt and the futility of trying to provide such a description will give you some insights into the points I’ve been trying to make.

As a bit of an aside, I still have a running version of Eliza, by virtue of having a cool emulator for the PDP-10 timesharing system that it originally ran on. It was written as the first serious attempt to build an AI that could pass the Turing test, but it’s really just a primitive randomized pattern-matching program in the guise of a non-directive psychotherapist. It’s no more an AI than one of the first programs I ever wrote as a kid (back in the days of dinosaurs and second-generation mainframes, which were about the size of dinosaurs), a program that generated computer poetry. Weizenbaum’s Eliza wasn’t a therapist and my program wasn’t a poet. Richard Greenblatt’s MacHack chess program and Terry Winograd’s block-stacking program were among the first real pioneering landmarks of the early years of AI.

There are more definitions for the word “love” than “sexual love”, obviously.

Try again.

So what? You clearly stated, right in that same paragraph in post #57 (emphasis mine): “I think that those emotions -or rather, mental states that correspond to them in every significant way- are inherent outcomes of being a self-aware intelligence. People don’t like things because biology forces us to, we like things because that’s an intrinsic part of us assessing our options and determining the best one.”

So I offered you a challenge on that basis. You can’t go around continually disclaiming exceptions to things you’ve said when you’re confronted with insurmountable obstacles. Since you can’t support your statement, clearly some significant parts of it must be wrong.

This has gone unnoticed so far, but let it be known that I understood that reference and appreciated it.

Please don’t go, Lemur. The drones need you. They look up to you.

I think it is pretty clear that while AIs might have some emotions and instincts analogous to the instincts and emotions of biological organisms, they are also going to be very different. And the reason is simply that AIs won’t be bags of meat flopping around on Planet Earth, eating other organisms, breathing oxygen, pissing, shitting, and fucking.

We can see this sort of thing with animals like dogs that in some ways react almost exactly like humans, and in other ways are completely different. There are plenty of species out there for instance that have absolutely no parental instinct. They lay a bunch of eggs in a hole and walk away. There are other organisms that have no mating instinct, because for them mating just means releasing a cloud of gametes into the water column. A fish that swims around might feel an emotion comparable to “fear” that causes it to avoid predators, but does a barnacle feel fear? And so on and so on.

So would an AI feel something like the emotion we hu-mons call “love”? It depends. What in the AI’s evolutionary history would lead it to have such an emotion or instinct or tropism? Do AIs that feel love survive and reproduce better than those that don’t? I mean, it’s clear that the concepts of “survive” and “reproduce” could apply analogously to AIs if we evolve them rather than design them. But their evolutionary history is going to be very different than the bags of meat we’re used to, and so their emotional states are going to be very different as well. In some cases we’ll be able to see and understand those states as analogous to human or mammal emotions, in others they’re going to be more alien than any creature from Alpha Centauri could ever be. If the creature from Alpha Centauri is a bag of meat that flops around on Alpha Centauri it’s going to have a lot in common with bags of meat that flop around on Earth, just due to the laws of physics. An AI won’t have the same design constraints.

As you note, I said that “those emotions” correspond. Love was one of the emotions I mentioned. You misinterpreted the word. I corrected the misinterpretation. Feel free to try challenging my position again.

I’d say it’s pretty clear by now that the only hook you have to hang your arguments on is the biological one - humans have biology, and AIs don’t. Thus any emotion that is solidly based in biology, like say the very very very specific type of satisfaction one gets from letting out a loud belch, will be unknown by AIs that can’t experience that particular specific sensation of the throat flesh vibrating while hearing the sound roar out.

You of course extrapolate wildly from this to presume that all emotions are completely inaccessible to AIs. It would be interesting to see you actually support this assumption - especially since I don’t think you can.

I see that the dead horse continues to be flogged mercilessly to no particular avail. I have supported that assumption – repeatedly – and you just choose to ignore it and counter it with some simplistic assertion that any kind of observed automaton behavior at all can be deemed to be emotional. You’ve even gone out of your way to offer ridiculously simplistic examples of such, like your “lizard” example.

I’ve supported that assumption by giving you many examples of emotions or instincts that are obviously rooted in biologically evolved physiology: pain-avoidance, fear, the fight-or-flight reflex that is an amalgam of anger, fear, instincts for survival and dominance, and many other primal instincts that helped us survive our evolution through savagery and are substantially the same today. The very definition of “emotion” disclaims any association with reason or knowledge and puts it squarely in the realm of unreasoning instinct. You ignored it.

I’ve tried to systematically develop the argument that this is not an emergent property of intelligence but merely the consequence of evolved neurophysiological traits that would be useless at best and more likely counterproductive for AIs in contemporary civilization, so not only would they not evolve such traits, but in most cases it would be stupid to try to artificially simulate them. You ignored that, too. I gave you examples, by way of analogy, of modern technologies that differ fundamentally from their biological counterparts in every imaginable way except the net core functionality. Yep, you ignored that, too. Your challenge now is to name even one human trait that can be characterized as an “emotion” yet is not a primitive, genetically evolved neurophysiological trait but instead a direct or emergent property of intelligence – because remember, that’s the domain of AI by definition.

Short version: I give up. I would just have to keep repeating myself. The dead horse is disintegrating.

Contentment. The state of not feeling that any change is necessary.

Literally all it would take for a mental state to qualify as ‘content’ would be for it to be aware that it is aware that, as far as it knows, there is nothing that needs to change. (And I’m not entirely certain that self-awareness is actually required here, but I’ll include it just to keep from freaking you out by allowing tons of things like toasters to qualify.)

Your horse didn’t have legs to start with, but in any case I’ll take this as a promise and for the moment won’t bother refuting the rest of your arguments, again.

I suppose it might be worth noting that this thread was, initially, explicitly about whether it was moral to create AIs that can feel negative things like “pain, suffering, fear, sadness, deprivation, frustration, anxiety, loneliness, sadness, grief, discomfort, hunger, sickness, fatigue, worry, despair”. These negative things can actually be divided into two categories: inputs, and reactions.

Inputs: pain, hunger, sickness, fatigue, discomfort
Reactions: suffering, fear, sadness, deprivation, frustration, anxiety, loneliness, sadness, grief, worry, despair

Inputs are things that wouldn’t really be part of the AI - they’d be things reported to the AI by its supporting hardware, the way they are reported to a human by the human’s body. Physical pain is a report about damage to the hardware. Hunger is a report about a shortage of a resource. Sickness (as experienced by the mind) is actually an ongoing series of signals about something that is harming the system. Fatigue is a report of a similar shortage or wearing out of the system (albeit one I don’t understand as well on the biological level). And discomfort is just really mild pain - a report of undesirable, but not damaging, circumstances.

There’s nothing inherently evil about the idea of being able to detect damage or a shortage of a resource, nor is it evil to suppose that the AI would find reports of increasingly bad damage or increasingly pressing shortages harder and harder to ignore, to the point where they make thinking about other subjects impossible. Arguably, for an AI to react appropriately to damage and shortages, it has to find them both pressing (to the point of being impossible to ignore) and extremely aversive to experience. Otherwise it might not prioritize avoiding, mitigating, or otherwise dealing with the problems in time to avoid damage or shutdown.
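
To make that escalation idea concrete, here’s a toy sketch in Python - the signal names, the squared-severity curve, and the 0.9 cutoff are all invented for illustration, not any real robot or AI interface:

    # Toy sketch only: hardware reports compete with ordinary tasks for attention,
    # and their urgency grows steeply with severity (all names and numbers are invented).

    def report_urgency(severity: float) -> float:
        """Map a 0..1 severity reading to an urgency weight."""
        return severity ** 2  # mild problems barely register; severe ones dominate

    def choose_focus(tasks: dict[str, float], reports: dict[str, float]) -> str:
        """Pick what to attend to next; past a threshold, a report preempts everything."""
        candidates = dict(tasks)
        for name, severity in reports.items():
            urgency = report_urgency(severity)
            if urgency >= 0.9:
                return name  # "impossible to ignore"
            candidates[name] = urgency
        return max(candidates, key=candidates.get)

    print(choose_focus({"chat": 0.4}, {"low_battery": 0.2}))      # mild shortage: 'chat' still wins
    print(choose_focus({"chat": 0.4}, {"chassis_damage": 0.95}))  # severe damage preempts everything

Below the cutoff a report is just one consideration among many; above it, it crowds everything else out - which is all the paragraph above is claiming.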

Of course an AI might not be set up to receive any reports of damage or shortage. Such an AI will probably not do as well in the real world as one that could realize that it had better go plug itself in, but it’s still possible they could exist. An AI with no such diagnostic inputs about its hardware (or simulated hardware, if the AI’s entire environment is simulated) would indeed feel no pain or hunger.

Obviously, of course, deliberately causing pain or hunger to a system with such self-analysis in place could be considered cruel. Or it could be considered training the AI - letting them learn not to put their hands on the stove by simulating pain when they touch a simulated stove. Raising a child is indeed a bitch.

As for the various emotional reactions, all of them are various kinds of internalized recognition of undesirable situations (a rough sketch in code follows the list):

suffering - the internalized recognition that one is undergoing pain, and that it’s a bad thing.
fear - the internalized recognition that a bad outcome is a possible and possibly likely outcome, with a wish to avoid that outcome coming to pass.
sadness - the internalized recognition that the situation is bad.
deprivation - (Is this related to hunger? Then like suffering, but with hunger.)
frustration - The internalized recognition that the optimal outcome is not happening despite good-seeming actions being taken, and that that’s a bad thing.
anxiety - A generalized fear, possibly without a clear idea of a single threat source.
loneliness - An internalized recognition that one is alone, and that that’s a bad thing. (Requires the AI to have first learned that being with others is a good thing.)
sadness - Same as it was the first time. :slight_smile:
grief - An internalized recognition that something bad has happened in the past with lasting negative effect - possibly a mix of loneliness and sadness, if it’s grief over losing someone.
worry - Mild fear.
despair - An internalized recognition that the situation is bad, and that no foreseeable outcomes will be good regardless of any actions taken.
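
Here is a minimal sketch of that “internalized recognition” framing - the Appraisal record and recognized_states function are made up purely to restate the list above as code, not to describe any real architecture:

    from dataclasses import dataclass

    # Toy sketch only: every field name here is invented for illustration.
    @dataclass
    class Appraisal:
        """Snapshot of what the system currently believes about its situation."""
        in_pain: bool = False
        situation_is_bad: bool = False
        bad_outcome_likely: bool = False
        threat_identified: bool = False
        alone: bool = False
        values_company: bool = False
        good_outcome_reachable: bool = True
        bad_event_in_past: bool = False

    def recognized_states(a: Appraisal) -> list[str]:
        """Map an appraisal onto the reaction labels defined in the list above."""
        states = []
        if a.in_pain and a.situation_is_bad:
            states.append("suffering")
        if a.bad_outcome_likely:
            states.append("fear" if a.threat_identified else "anxiety")
        if a.situation_is_bad:
            states.append("sadness")
        if a.alone and a.values_company:
            states.append("loneliness")
        if a.bad_event_in_past:
            states.append("grief")
        if a.situation_is_bad and not a.good_outcome_reachable:
            states.append("despair")
        return states

    # A likely but unlocated bad outcome registers as anxiety rather than fear.
    print(recognized_states(Appraisal(bad_outcome_likely=True)))  # ['anxiety']

The point isn’t that these one-liners capture the emotions themselves, only that each reaction in the list can be read as a recognition over the system’s own assessment of its situation.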

Now, one can’t help but notice I said “internalized recognition” a lot. Personally I don’t think that the average AI is aware of all their internal mental pathways. Okay, the average modern AI is probably aware of none of their internal pathways - they’re not self-aware yet. But even when one becomes self-aware, I’m not sure that the entirety of their mental state will be constantly up for review by their mental state. Heck, thinking about it, I’m not even sure that would be possible - each thought about its own thoughts would be another thought that would have to be thought about.

So if an AI isn’t consciously aware of the full line of reasoning behind each of its thoughts, to the AI those unreviewed lines of thought are just going to be there - they’ll have this pressing imperative telling them that things are bad, but until they trace down and review the line of thought that is supplying that imperative they won’t know why. You know, just like with humans. Now, humans can have imperatives and sources of ideas that they can never trace back to their sources - and perhaps a sufficiently complicated AI might have those too, if the overhead of tracing back and/or storing the sources of existing thoughts and opinions becomes too high. Or perhaps that won’t happen, and an AI will always be able to trace back and report exactly why they feel scared or why they think the moon landing was faked. About that, only time will tell.
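
As a toy illustration of that trade-off (the Thought structure and explain function here are invented purely for the example): a thought either carries a record of its sources or it doesn’t, and only in the first case can the system explain its own imperative.

    from dataclasses import dataclass
    from typing import Optional

    # Toy sketch only: an invented structure, not a claim about how real minds store thoughts.
    @dataclass
    class Thought:
        content: str
        sources: Optional[list["Thought"]] = None  # None = provenance discarded to save overhead

    def explain(t: Thought, depth: int = 0) -> None:
        """Trace a thought back through whatever sources were kept."""
        print("  " * depth + t.content)
        if t.sources is None:
            print("  " * (depth + 1) + "(no record of where this came from)")
            return
        for s in t.sources:
            explain(s, depth + 1)

    noise = Thought("loud noise detected", sources=[])
    threat = Thought("that noise pattern preceded damage before", sources=[noise])
    scared = Thought("something bad is about to happen", sources=[threat])  # traceable: it can say why
    hunch = Thought("something bad is about to happen", sources=None)       # untraceable: it just feels that way

    explain(scared)
    explain(hunch)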

Whoops!

I can’t resist one final response to this nonsense. I do appreciate your thoughtful engagement with this topic, but you don’t seem to be getting what the concept of “emotion” really is, and appear to be either redefining it in algorithmic terms or, as in the above example, redefining it circularly in terms of other emotions: OED defines “contentment” as “a state of happiness and satisfaction”. How does that help us objectively? It defines one emotion in terms of another.

You state outright that you’re not even sure that self-awareness is a prerequisite. That’s amazing.

Philosophers and psychologists have tried for centuries without much success to determine what engenders in us humans the feelings that we call happiness and contentment. Unbothered by such nuances, you give us the answer above. So let me enshrine your definition of happiness and contentment in some 1960s FORTRAN code for posterity, in case anyone wants to know how to be content and happy, here is the secret:



10              ... [some task for the program to perform] ...
                ...
                IF (FAIL) GO TO 10
                GOAL_COUNT = GOAL_COUNT+1
                IF (GOAL_COUNT .LT. CONTENTMENT_TARGET)  GO TO 10
                WRITE (6, 100) GOAL_COUNT
100             FORMAT ('I HAVE ACHIEVED ', I5, ' GOALS.  I AM VERY HAPPY AND CONTENT!')
                STOP
                END

You’re still not understanding the contra argument, here.

The definitions we have for emotions are not scientific or objective. They were built up over thousands of years of common usage, where that common usage applied almost exclusively to describing human and, to a much lesser degree, mammalian behavior. Which is fine, because up until now, that’s all we’ve ever known. As science and the borders of human knowledge expand, we are almost certainly going to come into contact with non-human intelligent life, and that encounter is going to necessitate a refactoring of our understanding of what emotion is. This is true whether that non-human intelligence evolved entirely on another planet, was a terrestrial species uplifted by humanity, or is a machine intelligence wholly created by human science.

That’s why your dictionary definitions are worse than useless: because you’re using them to rebut the argument that the definitions need to change.

Do you have any idea how many emotions are defined in terms of other emotions?

Here’s the definition of “happy” in Merriam-Webster:

3 a : enjoying or characterized by well-being and contentment

Also, sad:

1 a : affected with or expressive of grief or unhappiness

Hate:

1 a : intense hostility and aversion usually deriving from fear, anger, or sense of injury

Boredom:

the state of being weary and restless through lack of interest

I could go on. The point is, almost any emotion we try to discuss is going to be defined in dictionaries by reference to other emotions. Which, again, highlights the argument that you still haven’t grasped: current definitions of “emotion” are insufficient to the needs of this conversation. They’re murky, circular, and driven by popular consensus, not scientific research or objective observation.

This is an odd response, because when I mentioned it as part of my baseline definition of sentience, you claimed that I was injecting a new concept into the debate.

Centuries of philosophers and psychologists have had little success in determining where feelings come from - but you’re absolutely certain that they’re a biological function, and not inherent to the concept of intelligence.

Are you even trying to be consistent in this thread?

Humor is very much not your forte.

Going back to this - I don’t think this is a realistic concern. Putting a “true” AI into Castle Wolfenstein doesn’t mean each Nazi guard in the game is an individual AI. Rather, there’d be one AI, online somewhere, controlling all the Nazis in all the Wolfenstein games like pawns, simultaneously, and killing a Nazi would be no more painful to the AI than a human getting killed in multiplayer is painful to the human. Even if ethical treatment of AIs is not at all a concern, it’s probably still going to function that way purely from the standpoint of resource use and ease of software architecture.

It depends on how they implement it, obviously. Or, alternatively, how the AI chooses to implement it.

Each little Nazi stooge, to behave as something of an independent entity, is going to need to be, at some level, its own entity. There are a lot of different ways this can be done, which I’ll analogize by casting a human player as the AI:

A) The player can just sit behind his DM screen and say, “Yeah, um, there are three orcs there, and they have thirty health points. Hmm, you stabbed that one. It yells. Yawn.”

B) The player can methodically put on an orc costume, carefully reading the orc’s character sheet, and the orc clan, society, and species culture sheets, carefully checking what the orc does and doesn’t know, doing a bit of meditation on the orc’s feelings and motivation at the moment, doing a few vocal exercises to warm up, and then leaping into battle with a roar. (And then taking that costume off, switching to the next orc’s costume, and doing it all again.)

C) The human can split itself off into several copies (as humans are wont to do), presetting each of the copies with the specific limited history, knowledge, and character-specific personality of the specific orc in question, and then turn all the copies loose in a simulated reality that the copies are unable to tell isn’t real, after which they’re functionally left to fend for themselves until something kills them - either a player, another independent copy, or the overriding game AI manipulating the environment to kill them, whether via a mechanism within the universe or via a ‘rocks fall, everyone dies’ when the game is shut down.

Some games are already operating at a level closer to B than A - each simulated agent tracks its own preferences and knowledge independent of that of the game as a whole. Whether any game will have an AI forking off actual independent processes or not, well, I don’t know if that will ever happen, but if it ever does I bet the game-makers advertise it as a good thing. “More realistic creature AI than ever before!”

I believe the way it works (today, with infinite room for variations of course) is, if you have multiple characters, each one is a separate “object” with its own internal state, stats, list of goals, memory, “feelings”, available actions, knowledge, etc. The “AI” which forks these off is called a game engine; it’s not a character in the game itself.
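
A very stripped-down sketch of that arrangement - the Guard and GameEngine classes and their fields are invented here; real engines are vastly more elaborate - where each character is just a bundle of its own state and the engine (not itself a character) cycles through them every tick, essentially Scenario B from the list above:

    from dataclasses import dataclass, field

    # Toy sketch only: invented classes, not any real game engine's API.
    @dataclass
    class Guard:
        """One NPC: a bundle of per-character state, not a separate intelligence."""
        name: str
        health: int = 100
        alerted: bool = False
        known_facts: set = field(default_factory=set)
        goals: list = field(default_factory=list)

        def act(self, world: dict) -> str:
            if self.health <= 0:
                return f"{self.name} plays dead"
            if world.get("gunshot_heard") and not self.alerted:
                self.alerted = True
                self.known_facts.add("intruder nearby")
                self.goals.insert(0, "investigate noise")
            return f"{self.name}: {self.goals[0] if self.goals else 'patrol'}"

    class GameEngine:
        """Not a character itself: it just cycles through every actor once per tick."""
        def __init__(self, actors):
            self.actors = actors
            self.world = {"gunshot_heard": False}

        def tick(self):
            for actor in self.actors:  # one controller, many state bundles
                print(actor.act(self.world))

    engine = GameEngine([Guard("Hans"), Guard("Fritz", goals=["guard door"])])
    engine.tick()                         # Hans: patrol / Fritz: guard door
    engine.world["gunshot_heard"] = True
    engine.tick()                         # both now investigate the noise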

I kind of do think it’s unethical. Westworld is bothering me for just that reason. I think it really depends on what you’re making them for. If you’re making them to be playthings for murder fantasies, no I don’t think that’s ethical at all. I think I’m afraid that any attempt to do so will ultimately have evil ends (murder fantasy, slavery, etc).

If it was mostly for entertainment reasons, then I would agree. However, I do think that not all AIs will be made to have feelings or suffering. In fact, if they are involved in entertainment, that would make them less efficient at the tasks they will be made for.

I do think, though, that where the health and well-being of humans is concerned, the answer to the OP’s question is yes.

Just as there are machines dedicated to thousands of uses, there will be, IMHO, a niche where a lot of what many in this thread expect or fear will take place. I really do think that the most human AIs/androids we make - the ones that deal with feelings and emotions - will come from the medical and aerospace industries and their research.

This is because right now it is really unethical to use human beings as guinea pigs for many experiments or trials, and to me it’s clear that a lot of progress in medicine is being limited by ethical constraints that even I agree with; but those constraints will not be as problematic once we begin to use more active models that react faster than human beings to check the safety of new medicines or treatments.

Not that I’ve looked at the source code for any modern game (I actually program accounting software), but based on my awareness of programming in general, I doubt that actual separate processes are forked off at all nowadays. I suspect that the game engine cycles through all the actors on a regular basis, processing their momentary behavior based on the agent’s specific position, awareness, state, and behavioral algorithm, before setting the agent aside and moving on to the next one. Scenario B, basically.

The only game I can think of that I might suspect being closer to a ‘C’ situation is the “Creatures” game series, which made a particular point of its AI (and, perhaps amusingly, seems to have emulated biology somehow as well). It might treat the individual creatures as separate artificial intelligence subroutines. Or, alternatively, it might not. I dunno.

The only Westworld I’ve seen is the original movie, and in that one the robots seemed to consider “injury” to be part of the game - upon being shot, they were “happy” to go flying out the window and then play dead until being retrieved, cleaned up, and reset.

Until things went wonky and they forgot that part of the game, along with the “pretend to attack but don’t actually hurt them” parts. As breakdowns go it was a really specific inconvenient one. (Which meshes well with the idea that it was actually sabotage, as I recall was hinted at.)

I would dispute all of that.

If a doctor tells me that one of my kidneys has shut down, but I was oblivious because it was painless, is that the same thing as someone who is in agony because one of their kidneys has shut down?

Pain is not just sensation; it’s also a subjective experience.
Of course it happens in the brain, so in a sense we can say we “decide” to be in pain. But only in the same sense that we “decide” that EM radiation of 550 nm will look like “green” (or whatever colour 550 nm actually corresponds to).

Is it thought that primitive creatures like insects experience primitive emotions, but that AIs much more powerful than insect brains do not? If so, is it because the AI is not as directly concerned with survival?

And what about positive emotions like joy? Is it true, as the poet wrote, that joy and sorrow cannot exist without each other?