OK, first let’s realize that we’re only talking about strong AI robots here, robots who are capable of asking the question “What is this thing you Hu-Mans call…emotion?”
It is my contention that a strong AI robot WILL have emotions. It isn’t a matter of “programming emotions”. It’s that a robot with no emotions wouldn’t DO anything. Let’s look at Commander Data. Why would Data be curious about emotions? Isn’t curiosity an emotion? If Data had no internal desires or goals or states, he wouldn’t be conscious or sentient, he’d be a zombie.
Data desires to learn. Data desires to understand human beings. Data desires to learn about the universe. Data wants to help people. Data wants to do a lot of things. Data doesn't want to be deactivated. Data wants to discover his origins. Data HAS emotions. How can Data be sad that he can't experience sadness, unless he can be sad? How can he be curious about curiosity unless he can be curious? How can he fear that he will never understand fear, unless he can feel fear?
A truly emotionless robot wouldn’t bother to get out of bed every morning. A truly emotionless robot wouldn’t be curious about anything, and wouldn’t care if you shut it off, and wouldn’t care about learning. A truly emotionless robot wouldn’t do anything.
Of course you can have non-sentient robots that just follow programmed behavior, they’ll do what you tell them to, and such programs can be very sophisticated, like the terrain-following software in a cruise missile. But robots like that aren’t conscious, they aren’t capable of learning, they can’t assess their own inner states.
A robot that was capable of learning, that learned on its own, would have to WANT to learn. That desire to learn would be an emotion. A robot that didn't want to be turned off or destroyed would have an emotion, the emotion of fear. Any internally generated goal that the robot had would be an emotion.
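Just to make "internally generated goal" concrete, here's a toy sketch in Python. Every name in it is made up for illustration, it's not any real robotics or AI library. The agent's only built-in drive is to shrink its own prediction error. Nobody "programs curiosity" into it line by line; the preference for poorly-understood situations just falls out of the error signal.

```python
import random

class CuriousAgent:
    # Toy sketch: the agent's one "drive" is curiosity, implemented as a
    # preference for the situation where its recent prediction error was
    # largest. No emotion is coded in explicitly; goal-directed behavior
    # emerges from the error signal. All names here are illustrative.

    def __init__(self, n_situations=5):
        self.world = [random.uniform(-1, 1) for _ in range(n_situations)]  # hidden truth
        self.predictions = [0.0] * n_situations                # the agent's model
        self.interest = [float("inf")] * n_situations          # unvisited = maximally interesting

    def choose(self):
        # The "desire to learn": go wherever you were most wrong last time.
        return max(range(len(self.world)), key=lambda i: self.interest[i])

    def visit(self, i, rate=0.5):
        observed = self.world[i]
        error = observed - self.predictions[i]
        self.interest[i] = abs(error)            # update how "interesting" i still is
        self.predictions[i] += rate * error      # learn from the observation

agent = CuriousAgent()
for step in range(12):
    i = agent.choose()
    agent.visit(i)
    print(f"step {step}: drawn to situation {i}, remaining surprise {agent.interest[i]:.3f}")
```

Run it and the agent explores everything it hasn't seen, then keeps returning to whatever it still predicts worst, which is about as simple a "want" as you can build.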
A conscious robot might not have emotions exactly identical to ours, but a robot that you could sit down and have a cup of coffee with, like Commander Data, would HAVE to have emotions of some kind, would HAVE to have internal states not too much different from human internal states, even if the hardware running his consciousness was vastly different from the soggy bag of cells in a human skull.
The cliched killer robot who wants to destroy all humans isn’t an emotionless killing machine, even though his emotions aren’t expressed on the simulated replicant face he has. Yes, the robot isn’t going to LOOK scared or angry on his plastic face as he destroys all humans, but how could he have a desire to destroy all humans without some emotion akin to fear or hate?
Our emotions don’t come from our conscious mind, they come from a simpler part of ourselves. We can recognize emotions in dogs and children, even if they can’t talk or add 2+2. So we imagine a calculating machine that doesn’t have that animal core, and imagine having a conversation with that calculating machine. But I don’t think a robot could walk around, or hold a real-time meaningful conversation in English, or clean up the dinner dishes unsupervised, if it didn’t have something akin to those earlier layers of the brain. A conscious robot isn’t going to be running top-down control over every process in its body and brain; much of that is going to be pushed off to automatic and reflexive processes, like in a human. And that’s where robot emotions will come from.
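If that layered picture sounds vague, here's roughly the shape of it as a minimal sketch (Python again, all class and method names invented for the example): a slow "conscious" planner that only issues intentions, sitting on top of a fast reflex layer it doesn't micromanage, and which can override it.

```python
class ReflexLayer:
    """Fast, automatic layer -- the 'animal core'. It runs every tick,
    no matter what the planner wants."""

    def react(self, sensors):
        if sensors.get("obstacle_cm", 999) < 20:
            return "step_back"      # collision reflex overrides any plan
        if abs(sensors.get("tilt_deg", 0)) > 15:
            return "brace"          # balance reflex, no deliberation involved
        return None                 # nothing urgent; defer to the layer above

class DeliberativeLayer:
    """Slow, goal-driven layer -- the part you could have coffee with.
    It never touches individual motors."""

    def __init__(self, goal):
        self.goal = goal

    def plan(self, sensors):
        return f"walk_toward:{self.goal}"

def control_tick(reflex, planner, sensors):
    # Reflexes get first claim on the body, exactly as in an animal:
    # you flinch before you decide to.
    return reflex.react(sensors) or planner.plan(sensors)

reflex, planner = ReflexLayer(), DeliberativeLayer("kitchen")
print(control_tick(reflex, planner, {"obstacle_cm": 12}))  # -> step_back
print(control_tick(reflex, planner, {"tilt_deg": 20}))     # -> brace
print(control_tick(reflex, planner, {}))                   # -> walk_toward:kitchen
```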
Imagine a humanoid robot that can walk around like a person. That robot isn’t going to monitor the position of every servo and every motor in its body and brute-force top-down control the position of its feet and arms and body. It will have lower-level processors doing that work, just like a human or an animal walks. A bug with no brain can walk across an irregular surface, and so can a lizard. They don’t do it by central control over their legs; things are decentralized. A robot that relied on top-down control wouldn’t be able to balance either. It would have to have decentralized control over its body position, like humans or lizards or insects. Push the robot, and it’ll balance, not because it calculates the precise motion needed to balance, but because of a network of sensors and motors that balance the body without control from the robot brain. The robot has a “desire” to balance. It has an animal self too, just like a lizard or an insect. And that animal self will be the source of the robot’s emotions.
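Here's the balance example as a toy program (Python once more, with made-up gains and names; a real robot would use actual control theory, this is just the idea's skeleton): several independent local feedback loops, each seeing only its own tilt sensor, with no central module computing "the" correction. Push the simulated body and the tilt dies out anyway.

```python
class LocalBalancer:
    # One little proportional-derivative loop, the robot equivalent of a
    # stretch reflex. It knows nothing about the rest of the body.

    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_tilt = 0.0

    def correct(self, tilt):
        # Push against the error it senses locally, nothing more.
        velocity = tilt - self.prev_tilt
        self.prev_tilt = tilt
        return -(self.kp * tilt + self.kd * velocity)

# Four independent units with different gains -- ankles, hips, whatever.
# No cross-talk between them, and no brain in the loop.
units = [LocalBalancer(kp, kd)
         for kp, kd in [(0.5, 0.2), (0.3, 0.1), (0.6, 0.25), (0.4, 0.15)]]

tilt = 10.0  # degrees: the "push"
for step in range(8):
    # Crude stand-in for physics: the body's net tilt changes by the
    # average of whatever each local loop independently does.
    tilt += sum(u.correct(tilt) for u in units) / len(units)
    print(f"step {step}: tilt = {tilt:5.2f} degrees")
```

No unit "knows" the body is balanced; the steadiness is a property of the network, which is exactly the sense in which I'm saying the robot has a "desire" to stay upright.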