So the Japanese have made a robot. A girl robot. Of course its emotions, such as they are, are simulated. But science fiction is full of robots that do have emotions.
For example, in AI there’s David. Alien: Resurrection has Call. Even C3PO and R2D2 have apparently real emotions. So when do simulated emotions become real emotions? A sufficiently complex program would allow a robot to display ‘real’ emotions. But are they ‘real’? Then again, what is the difference between the complex program that runs in our brains vs. one that runs on circuitry or one that uses bio-enhanced circuitry? David loved his human mother. Was it real love, like a human feels? Or was it simply a program? Could Call fall in love with a human? R2D2 had a sense of loyalty at least, and seemed to ‘like’ or ‘dislike’ others.
When do artificial emotions become real emotions? And if a robot is self-aware and has these emotions, when are they allowed free choice?
In my opinion, a good starting place would be that the machine’s algorithms for deciding how to react must be generated based on stimuli rather than programmed directly.
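Just to make that concrete, here’s a toy Python sketch (everything in it is invented by me, not any real architecture) of the difference: the agent’s reaction to a stimulus isn’t hard-coded, it gets shaped by feedback from the environment.

```python
import random
from collections import defaultdict

class StimulusLearner:
    """Toy agent whose reactions are learned from feedback rather than hard-coded."""

    def __init__(self, reactions):
        self.reactions = reactions
        # Preference weights start flat; experience shapes them.
        self.weights = defaultdict(lambda: {r: 1.0 for r in reactions})

    def react(self, stimulus):
        # Pick a reaction with probability proportional to its learned weight.
        w = self.weights[stimulus]
        return random.choices(list(w), weights=list(w.values()))[0]

    def feedback(self, stimulus, reaction, reward):
        # Reinforce or weaken the association, never dropping below a small floor.
        w = self.weights[stimulus]
        w[reaction] = max(0.1, w[reaction] + reward)

agent = StimulusLearner(["approach", "withdraw", "ignore"])
for _ in range(200):
    choice = agent.react("loud noise")
    agent.feedback("loud noise", choice, reward=1.0 if choice == "withdraw" else -0.5)
print(agent.weights["loud noise"])  # "withdraw" ends up dominating
```

Nothing fancy, but the point stands: the rule “withdraw from loud noises” was never written anywhere; it emerged from the stimuli.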
Gonna take a while, I think, since we can’t even pinpoint the mechanism for emotion in humans yet. We know there’s brain activity, limbic activity, respiratory activity, etc., but we don’t know what relates to what. Anger and happiness, for example, can produce nearly identical readings on medical tests. The only way to accurately identify emotion in humans is to ask them, “What are you feeling now?” This is the same reason why emotion in animals is still a hotly debated topic. Without verbal confirmation, we don’t know whether a particular reading means a particular emotion.
But what we do know is that humans perceive emotion through body language, and we can replicate this through robotics, puppetry, animation, etc. Therefore, a robot acting happy will make us believe it is feeling happiness. Lindsay Lohan’s career appears to be based on this.
So, to answer your question: we have probably had inanimate objects that can transmit emotion for centuries, but as for replicating that in computers, it is still many years away.
I think the question is more like this: If artificial emotions are indistinguishable from human emotions – as observed by the human or the robot itself – are the emotions no longer artificial? It’s not so much a technical question – when will this happen, or if it can happen – but a philosophical one.
Emotions are (most likely) just our word for “instinct” in reference to humans. They’re the result of evolutionary forces producing normative behavior that brings reproductive success without the person having to rationally work out what sort of behavior is likely to work for him and/or the group. I’m not sure that this gut-instinct approach to living is actually better, though most people think that it is, and that you are amoral if you behave appropriately out of logic rather than emotion.
Personally, I think that the way that true AI will be invented is by simulating evolution in a simulated environment. As such, it would be fairly likely that something similar to instinct would appear. If, however, AI is artificially manufactured, it would be difficult to predict how different they would be from us.
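Something like this toy run of simulated selection, say (all the numbers are made up), where an instinct-ish bias toward fleeing real danger falls out of the simulation without anyone coding “fear” in directly:

```python
import random

# Toy "simulated evolution": each agent has one gene, a flee threshold.
# Agents that flee at the right moments score better and get to reproduce,
# so an instinct-like bias emerges without being programmed directly.

def fitness(flee_threshold):
    score = 0
    for _ in range(50):
        danger = random.random()
        fled = danger > flee_threshold
        if danger > 0.7 and not fled:
            score -= 5   # got eaten
        elif danger <= 0.7 and fled:
            score -= 1   # wasted energy fleeing from nothing
        else:
            score += 1   # sensible behavior
    return score

population = [random.random() for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Offspring are mutated copies of the fittest survivors.
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                  for p in survivors for _ in range(3)]

print(sum(population) / len(population))  # drifts toward ~0.7: an evolved "instinct"
```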
Ah, a great question. We often discuss this over at the Matrix boards at IMDB:
If it looks real, tastes real, feels real, smells real, and sounds real, isn’t that real? By definition, whatever we take into our senses is “reality.” If our senses were fooled, does that lessen the reality of the experience?
There’s no consensus answer there either, but I would refer to Descartes’s “I think, therefore I am” argument: Simply because my senses register it, does that mean whatever it is is real? This table, if I moved into another room, would it suddenly no longer exist? Does reality form around me as I move around? So, Descartes is saying: reality is more than what we take in with our senses.
Therefore, a robot who perfectly mimicked happiness would still not be happy, although observers could be tricked into “seeing” happiness.
Hell, you can’t even get people to agree on whether animals feel emotions. Maybe cats and dogs at most. But certainly no animal that we eat. And certainly nothing beyond rudimentary fear. Right? :rolleyes:
Maybe “emotion” is just one of those things we like to attribute – if not exclusively, then at least primarily – to humans to make ourselves feel special, not entirely unlike “soul” or “free will” or “sentience”, all topics that come up in regard to other lifeforms.
With ourselves and other humans, we observe certain behaviors and responses and attribute them to inner workings of the mind we do not yet fully understand; we assume we possess these special mental states because human culture has bred us to treat them as special classes unto themselves, distinct from simpler reflexes such as blinking or jerking away from pain. Then we assume other humans share these same internal states because they look and act like us.
With animals, our ignorance regarding their thought processes has sometimes allowed us to all-too-conveniently dismiss their potential for emotional complexity.
But we’ll have no such luxury with AI that we ourselves program and raise from infancy. One day, sometime long after we breach the uncanny valley, we may very well create artificial lifeforms that exhibit emotional responses indistinguishable from ours.
At that point, I think the question to ask would not be whether their emotions are as valid as ours, but whether ours were that special to begin with.
If a robot (and I’ve been using ‘robot’ as a catch-all term) is self-aware, then would it be aware of its ‘feelings’? Humans, and yes, animals, have physiological responses to stimuli. The heart of a person in love might beat a little faster when he or she sees his or her Significant Other. There’s a burst of adrenaline when someone is sliding down a cable with his jeans, and he hears the sound of ripping fabric. How would a robot register love or fear? (Unless, maybe, they have a simulated heart so that we humans can feel more comfortable.) In the case of fear, perhaps the processors can devote more time to assessing the situation and ‘trying anything’ to resolve it, at the expense of some other function(s). While that might not be identical to what happens in a human, it seems an appropriate ‘fear’ response. A sufficiently advanced robot might ‘grow accustomed to one’s face’, as it were, and devote idle time to thinking about that special human (or other robot). Is that ‘love’, assuming that this robot is at such a stage that it not only mimics life but is by any measure ‘alive’?
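To put the ‘devote more processing time’ idea in concrete terms, here’s a hypothetical sketch (the task names and numbers are mine, not any real robot’s API) of fear as a cycle-budget reallocation:

```python
# Hypothetical sketch of "fear" as resource reallocation: when a threat is
# flagged, the scheduler pulls cycles away from background tasks and pours
# them into assessing the situation.  Task names are purely illustrative.

def allocate_cycles(threat_level, total_cycles=1000):
    """Split a cycle budget between tasks, skewing toward the threat as fear rises."""
    threat_share = min(0.9, 0.1 + 0.8 * threat_level)  # 10%..90% of the budget
    remaining = 1.0 - threat_share
    return {
        "assess_threat": int(total_cycles * threat_share),
        "navigation":    int(total_cycles * remaining * 0.5),
        "housekeeping":  int(total_cycles * remaining * 0.3),
        "idle_daydream": int(total_cycles * remaining * 0.2),
    }

print(allocate_cycles(threat_level=0.0))  # calm: plenty of cycles left for daydreaming
print(allocate_cycles(threat_level=1.0))  # afraid: nearly everything goes to the threat
```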
Perhaps it comes down to Descartes after all. ‘I think, therefore I am.’ But even in humans, do we think? Or do we just think we think? What is self-awareness?
Oh, what, now you’re saying animals can’t feel in heaven either? I’d watch out for PETA-supporting humanoids displaying physiological responses indistinguishable from human anger if I were you.
Thanks for posting this, Johnny. These questions are exactly why I tried to convince a few of my close friends that “AI” was not a total crap movie. Very thought-provoking.
I think that if an AI’s emotions affect its decision-making process, then they should be considered real. Also, can these emotions override its original programming? If so, that would be kind of scary, as I don’t want to be a loyal servant to our Robot Overlords.
I don’t see why these things can’t be simulated. Fear you’ve done. Perhaps romantic love could be simulated with an increased state of pleasure (it would not be unreasonable to give the robot a preference for pleasure or at least an avoidance of displeasure, something that has presumably aided our own evolutionary survival), a heightened general state of arousal (similar to fear, but with more focus on one particular subject), a diversion of processing resources to the subject at hand, some parallel of lust if this android is capable of reproduction (perhaps readying a code transmission and/or disabling the firewall and increasing access to stored raw materials…shit, this is turning me on), etc. That’s a far-from-perfect simulation, of course, but that’s why we’re not “there” yet.
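If you wanted to rough that out in code, it might look something like the following. Every field and threshold here is invented for illustration; it isn’t taken from any real affective-computing library.

```python
from dataclasses import dataclass

# Rough sketch of the composite "simulated love" state described above:
# pleasure, heightened arousal, and processing focus locked on one subject.
# All values and thresholds are made up.

@dataclass
class AffectiveState:
    pleasure: float = 0.0   # general reward signal
    arousal: float = 0.0    # overall activation, as in fear
    focus: str = ""         # the one subject drawing processing resources

    def encounter(self, subject, affinity):
        # Repeated encounters with a high-affinity subject push the state
        # toward something we might be tempted to call "love".
        self.pleasure = min(1.0, self.pleasure + 0.2 * affinity)
        self.arousal = min(1.0, self.arousal + 0.1 * affinity)
        if affinity > 0.5:
            self.focus = subject

state = AffectiveState()
for _ in range(5):
    state.encounter("unit_7", affinity=0.9)
print(state)  # high pleasure and arousal, focus locked on unit_7
```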
In other words, we may one day be able to perfectly simulate the appearance of love in its myriad forms, but we still won’t know whether we’ve also discovered the cause of human love – it’s possible that our sensations of love are nothing more than the result of complex neurological “algorithms” processed using cellular communications instead of fancy electronics, but having an android that works this way doesn’t mean that we do too. It may be that we’ve simply created an equally valid, but different, form of life.
But if our ultimate goal is to know ourselves…
Yeah, exactly, except even if you take the “I am” part for granted, there still remains the question “Why and how do I think?”.
Even today’s programs can be made “self-aware” by simply having the program check for its own existence and well-being; isn’t this what file integrity checks – the kind an installer performs on itself after a long download – are? “I exist”, the program tells itself, “and my components seem ok”. But that kind of self-awareness is uninteresting to us because the program, in comparison to a human, is so simple and incapable of asking more intelligent questions about itself or the universe as a whole. Slightly more advanced than that is the AI in gaming today, creatures that respond to stimuli and know how to flee from danger, protect themselves from injury, and even aid their allies and assist with the continuation of the species as a whole. Every real-time strategy game does this.
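For what it’s worth, that crude kind of ‘self-awareness’ fits in a few lines. This is just a program checksumming its own source file, nothing deeper:

```python
import hashlib
import sys

# The crude "self-awareness" mentioned above: a program confirming its own
# existence and integrity, the way an installer checksums itself after a download.

def self_check(expected_digest=None):
    with open(sys.argv[0], "rb") as f:  # read my own source file
        digest = hashlib.sha256(f.read()).hexdigest()
    print("I exist; my digest is", digest)
    if expected_digest is not None:
        status = "ok" if digest == expected_digest else "corrupted"
        print("and my components seem", status)

if __name__ == "__main__":
    self_check()
```

It “knows” it exists, but that’s exactly the kind of self-awareness that’s uninteresting for the reasons above.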
Could sentience be a matter of degree, of level of algorithmic complexity, rather than a black/white, yes/no deal? Is a dog self-aware? An elephant? A human fetus? A baby? A cockroach? A cell? They all have different degrees of behavioral complexity. They all process and react to stimuli, and in that sense perhaps they could be said to “think” if we use the word as a synonym for “process” – and if we don’t, why not? Is there a fundamental difference between a human thought process and a computer algorithm aside from the raw materials behind each and the complexity of the logical flowchart each follows?
Except that the “Is AI sentient?” question is as old as sci-fi itself and that particular film brought nothing new to the table except pretty pictures and a creepy kid. It might come across as original if you’ve never seen other sci-fi, but otherwise…
Of course, the same criticism can be applied to most movies. I loved The Matrix myself, but I’m sure its philosophical ideas had been filmed to death long before Keanu.
Well, that’d depend on 1) how we program it, 2) how much leeway we give the robot to modify its programming, and 3) unpredictable environmental factors like software bugs or an electrical problem that affects only the harm-no-humans failsafe.
Speaking of trite sci-fi memes, “AI gone out of control” is almost always the answer to “Is AI sentient?”
Contrary to the Asimovian idea of AIs being possessed of emotionless mathematical machine logic, it may end up being easier to program behavior similar in effect to emotions, and then work up from there. For example, the algorithms that produce high priority avoidance of walking off cliffs or touching fire might be called “fear”, whatever their programming basis.
So start with the basic set of “reptilian” imperatives: fear (avoid danger), hunger (obtain nourishment), lust (seek compatible algorithms to recombine into new experimental traits), and aggression (counter-attack against danger, obstacles to one’s goals, etc.). Work up to more refined “mammalian” traits: friendship/loyalty (cooperation with like units), love (investment in progeny and special partnerships), etc. Once you have the equivalent of a very smart robot animal, then you can try for personhood: the self-modeling called the ego, and so on.
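A hypothetical sketch of that layering (all the names and thresholds are made up) might be a simple priority-arbitrated rule stack, where the “reptilian” layer gets checked first and can override everything above it:

```python
# Hypothetical layering of the imperatives above as a priority-arbitrated
# rule stack: lower ("reptilian") layers are checked first and so can
# override the higher ("mammalian") ones.  Names and thresholds are invented.

REPTILIAN = [
    ("fear",       lambda s: s["danger"] > 0.7,    "avoid"),
    ("hunger",     lambda s: s["energy"] < 0.2,    "seek_nourishment"),
    ("aggression", lambda s: s["blocked"],         "counter_attack"),
]
MAMMALIAN = [
    ("loyalty",    lambda s: s["ally_in_trouble"], "assist_ally"),
    ("love",       lambda s: s["progeny_nearby"],  "invest_in_progeny"),
]

def choose_behavior(sensors):
    # Reptilian imperatives get first say; only if none fire do the mammalian
    # ones run, and only then does idle curiosity take over.
    for layer in (REPTILIAN, MAMMALIAN):
        for name, triggered, action in layer:
            if triggered(sensors):
                return name, action
    return "curiosity", "explore"

print(choose_behavior({"danger": 0.9, "energy": 0.8, "blocked": False,
                       "ally_in_trouble": True, "progeny_nearby": False}))
# -> ('fear', 'avoid'): the reptilian layer wins even though an ally needs help
```

The only design choice here is the priority ordering; a real architecture would obviously need far more than lambdas and thresholds, but it shows how emotion-like imperatives could be stacked before anyone worries about personhood.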