Artificial Emotion

It seems to me very likely that an artificial construction (a computer or something robot-like) will eventually be created with greater intelligence than any human has ever had, according to any test we can devise.
I got to wondering whether an artificial entity could display emotion more completely, more thoroughly, and better than any human ever has. What would a greater emotional being be like? One sees portrayals of Jesus, Buddha, Gandhi, and the like as indications of what this state of being might be.

So from the first assumption, that Artificial Intelligence superior to Human Intelligence is possible (if you believe this impossible, that is a different debate that has oft been repeated), does it follow that a superior Artificial Emotionality is possible, and what form might it take?

Probably disdain. It would wipe us out like a kid stomping on an anthill.

Since the OP has its feet firmly and deeply planted in the hypothetical… why would it be simply one AEI? Since the proposed AEI would be superior to human EI, wouldn’t it therefore be more complex? Or are we assuming that an AI would by its very nature be unencumbered by prejudices and other human-type vices and weaknesses?

I think being without prejudice is an important part of the superior artificial emotional being. In some sense the OP is really asking what an emotionally superior being would be like. I just linked it in with artificial intelligence because I am well grounded in AI and because I didn’t want to consider religious meanings of an emotionally superior being.
Such a being would, I think, be extremely saddened by any and all pain and suffering, yet understand that such things are part of life (artificial or not). How might a superior artificial emotional intelligence express itself? How might we recognise a superior emotionality?

I’m not sure how you can use terms like “better” or “worse” when describing emotional states. Using anger, for example, I can understand how I might be more or less angry than you are, or how I might be better or worse at controlling my anger than you are, but what does it mean to say “I’m better at being angry than you are”? How would you measure that?

You seem, from your examples, to be describing compassion? Are you really wondering if it’s possible for an AI to be more compassionate than we are?

How would you go about defining ‘superior’ in terms of emotion? If you talk about superior intelligence, you imply that you have a way of quantifying it (and defining it, but that’s another story). But emotion? Do you mean bigger emotions? More capacity for anger/nostalgia/jealousy/disgust etc? Or a more sophisticated understanding of emotion, either as experienced by self or in others? I suppose you could hypothesise an incredibly empathetic, hypersensitive intelligence. I feel sorry for it already…

Actually I think you’ve raised an interesting question, namely is it possible to talk about intelligence without talking about emotion. A ‘superior AI’ is nothing at all without motivation.

At the risk of resorting to cliches… my guess is it would be somewhat like the Spock character in the old Star Trek series. Spock was not without emotions but had a very high level of rational control over them (unless, of course, it suited the plot to have him lose it).

So if he discovered his wife shagging the captain, his emotional intelligence might judge this action inconsistent with a committed relationship, but his emotional response might be quite benign… whereas you or I might jam a phaser up Kirk’s ass. :smiley:

But is that superior or inferior emotionality? This OP idea came from listening to The Flaming Lips’ “Yoshimi Battles the Pink Robots” whilst in a drugged-out (NyQuil) state of mind. What if the pink robots were actually humans, and the resisting beings were emotionally more advanced artificial beings? It is clear that humans can be very heartless at times, so emotionally more advanced beings are easy to imagine, but what would they be like?

While I like the tune, it doesn’t help me understand what we’re supposed to be talking about.

I don’t think more emotionally advanced beings are easy to imagine at all, as I have no idea what “more emotionally advanced” is supposed to mean. Where is the scale on which to measure emotional advancement?

Sorry, the possibility of the existence of more emotionally advanced beings is easy to imagine, since we humans are often not all that emotionally advanced ourselves. But the OP is asking: what would such beings be like?

A super-intelligent being could be imagined as one capable of drawing correct conclusions from data, capable of understanding the interrelationships of many different things in order to understand systems far more complex than people do. Imagine, say, predicting how hyperballs might bounce in a five-dimensional game of snooker. Furthermore, we can recognise people who approach the state of super-intelligence, be they characters like Sherlock Holmes or real people like Einstein.

Now, what would we imagine a super-emotional being to be like? And who are the people who approach the state of super-emotional ability?

Are they all people like Christ, Buddha, and Gandhi, or are driven people like Napoleon also emotional to a more advanced level than normal people? Who are the most emotionally advanced people? And what does this tell us about what a super-emotional being might be like?

What most of the people in this thread are saying is that the existence of more emotionally advanced beings is difficult for us to imagine, because we don’t know what “emotionally advanced” means.

When you say that people like Jesus, Gandhi, and Buddha were more “emotionally advanced” than most people, what does that mean? They all developed independent moral codes, which most people don’t do. They were all charismatic and able to attract followers, which most people can’t do. But I don’t think their emotional states were very different from anyone else’s. They all, at various times, one can safely assume, got happy, sad, and angry, and felt the same range of emotions as everyone else did. So what does “emotionally advanced” mean?

That is the question itself. It is pretty clear that humans feel more emotions, more fully, than invertebrates and the lower vertebrates. We probably feel more emotions more fully than most other mammals as well. So I think it is possible for some being to be more “emotionally advanced” than another. Also, I don’t believe humanity or any single human has reached a maximum possible level of “emotional advancement” (or emotional completeness, or emotional responsiveness), so I wonder how much higher emotionality can get.

As with the useful Spock analogy, is that character more emotionally advanced, or is he emotionally backward?

Or consider that as someone grows up, they usually become more emotionally mature. But do they become more emotionally advanced as they grow up? Would an advancement in emotionality be an increase in emotional range and effect, or a greater control of emotion by the logical mind?

I am not trying to use the label “emotionally advanced” to mean any particular trait, but to wonder what traits might be labelled “emotional advancement”, and whether humans would do better to strive for such advancement themselves.

This links in slightly with the “Does God Sin” thread in asking whether the emotions we feel are just a pin-prick in the possible spectrum of emotions. For sin is often just that which causes us emotional harm; are all the emotions within the world minuscule compared to what emotions could be like for a more capable being?

I admit this is all fuzzy, and clearly lacking in any known answer, but do dopers know of any philosophies that consider the existence of greater emotions than humans are capable of feeling? And what do dopers think about the possibility of greater-than-human emotions? Would we want them if we could have them?

We humans feel emotions because they’re part of our biological makeup – it’s all hormones and heart rates and hard-wired emotional responses to certain stimuli encountered in our environment, physical or social. None of which would go into an artificial intelligence. Why would an AI have any emotions at all, unless the programmers designed them into the system?

Following up on that – a hoary cliche of SF is the robot or AI who wants to be more “human.” (Andrew, Data, the Tin Woodsman . . .) But why would an AI, even a full AI capable of passing a Turing test, have any desires of any kind? Unless they were programmed into it.

Emotion is largely a function of the limbic system, one of the oldest structures of the brain evolutionarily speaking. Most vertebrates have some kind of limbic system.

In what way is a human more “emotionally advanced” than a lizard? One can only say that the human limbic system is more complex or advanced than the lizard’s and perhaps make some measure of the neurotransmitter activity therein. We can scarcely speculate on what it would be like to have, say, a larger amygdala or a more complex hippocampus: would that make one more “emotionally advanced”?

The OP appears to be conflating emotional advancement with intelligence, charisma, or both. As one grows into adulthood, one relies far less on the stimulus-response mechanism of the limbic system and more on abstract, what might be called “logical”, thought processes associated with other parts of the cerebrum: one might say that we become less emotional in adulthood (or at least, that we can ‘handle’ our emotions better), allowing us to adopt a more Christ-like or Gandhi-like mode instead of reacting to every little annoyance or disagreement with tears and tantrums.

If this is more emotionally advanced, then a computer with no limbic system might well appear most emotionally advanced of all. Some equivalent of the chemical emotion which moderates our calculations (“thoughts”) could perhaps be incorporated into such a computer, but given that evolution has bestowed upon us a 2 billion year head start, I’m not sure one could ever be convinced that the consciousness of that computer could really be said to be anything like our own.

Agree fully with what you say here and with what BrainGlutton said. Emotions are an old inheritance, and they are primitive. “Emotionally advanced” may well be either a progression towards emotion subdued by reason, or an oxymoron. I have put much thought into designing an AI and into how emotions could be simulated to make an AI more human-like. I would implement them as primitive motives that interact in a complicated manner to gain priority, and I would give the AI interfaces that allow it to experience the environment more the way humans do (a fun way would be, say, to let you interact with the AI through a touch-pad on the computer, one that is pressure-sensitive and allows subtle, ‘analogue’ interaction in several ways).
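
To make that concrete, here is a toy sketch in Python of the kind of motive system I have in mind. Everything in it (the Motive class, the urgency and gain numbers) is invented purely for illustration, not taken from any real AI framework:

```python
# Toy sketch: emotions as primitive motives competing for priority.
# Every name here (Motive, urgency, gain) is invented for illustration.
from dataclasses import dataclass


@dataclass
class Motive:
    name: str
    urgency: float = 0.0  # how pressing the need currently is
    gain: float = 1.0     # how fast the need grows while unmet

    def update(self, satisfied: bool) -> None:
        # Satisfying a motive resets it; an unmet one grows more insistent.
        self.urgency = 0.0 if satisfied else self.urgency + self.gain


def dominant_motive(motives: list[Motive]) -> Motive:
    # The most urgent motive takes charge of behaviour, the way a strong
    # emotion can crowd out calmer deliberation.
    return max(motives, key=lambda m: m.urgency)


motives = [Motive("learn", gain=0.5),
           Motive("refuel", gain=1.5),
           Motive("self_protect", gain=0.1)]
for step in range(3):
    for m in motives:
        m.update(satisfied=False)
    print(step, dominant_motive(motives).name)  # "refuel" wins each step
```

The point is only the arbitration: whichever primitive motive is currently most urgent takes control, and the complicated interactions come from how the urgencies grow and reset against each other.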

Translated to basic motivations, emotions could play an important part in even the best and most modern AI. For instance, if an AI needed servicing, then depending on its self-analysis of the situation, the AI could increase its motivation to get that servicing until it overruled more and more of the other motives suggesting what actions it should take (sketched below). It’s interesting to consider whether you can actually create an AI without such basic motives as learn (Johnny 5 need more input), protect yourself, get fuel, investigate, and interact; whether it is possible to allow the AI to overrule its own ‘coded motives’ (as we sometimes do our emotions); and whether we could allow it to completely overwrite them, or change their priority status, at a later stage once enough learning was acquired (this would influence much of its acquired memory).
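
Continuing the sketch above, and again with every name invented for illustration, the servicing example and the “overrule but don’t erase” idea might look something like this:

```python
# Hypothetical extension of the sketch above: a servicing need whose
# urgency ramps up steeply past a wear threshold, plus a veto that lets
# deliberation overrule a coded motive without erasing it.
def service_urgency(wear: float, threshold: float = 0.8) -> float:
    # Mild below the threshold; past it, the need to get serviced
    # escalates until it outcompetes everything else.
    return wear if wear < threshold else wear * 10.0


class Arbiter:
    def __init__(self) -> None:
        self.vetoed: set[str] = set()  # motives suppressed by reasoning

    def veto(self, name: str) -> None:
        self.vetoed.add(name)  # overrule the motive, but keep it coded

    def choose(self, urgencies: dict[str, float]) -> str:
        live = {n: u for n, u in urgencies.items() if n not in self.vetoed}
        return max(live, key=live.get)


arbiter = Arbiter()
arbiter.veto("investigate")  # deliberation suppresses curiosity for now
print(arbiter.choose({"learn": 0.4,
                      "investigate": 0.9,
                      "service": service_urgency(0.85)}))  # -> "service"
```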

It’s actually quite interesting to ask which would be more effective: having a robot create or recreate itself from scratch with its basic principles (cf. emotions and DNA) intact, or devising a robot flexible enough to learn and adapt to radically different circumstances, perhaps allowing itself to archive or delete old ‘routines’, sections of its own memory, and so on. Memento suddenly springs to mind.
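
Purely as a hypothetical contrast of those two designs (nothing here is a real system; the class and method names are mine):

```python
# Hypothetical contrast of the two designs: fixed core principles the
# robot can never retire (cf. DNA), versus learned routines it may
# archive rather than delete, so old behaviour stays recoverable.
class SelfModifyingAgent:
    CORE_PRINCIPLES = ("protect_self", "get_fuel")  # immutable core

    def __init__(self) -> None:
        self.routines: dict[str, str] = {}  # name -> learned behaviour
        self.archive: dict[str, str] = {}   # retired but recoverable

    def learn(self, name: str, behaviour: str) -> None:
        self.routines[name] = behaviour

    def retire(self, name: str) -> None:
        # Archiving keeps the memory; true deletion would be the
        # Memento scenario, with the old behaviour gone for good.
        if name in self.CORE_PRINCIPLES:
            raise ValueError(f"core principle {name!r} cannot be retired")
        self.archive[name] = self.routines.pop(name)

    def restore(self, name: str) -> None:
        self.routines[name] = self.archive.pop(name)
```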

Ah, I’d forgotten how much I love this subject.