Robots just don't get it

Humor. Robots in movies and books don’t get it. It seems to be a very common theme.

Of course, some robots do, such as Tom Servo or Crow T. Robot. And some robots like Data are simply devoid of all emotion, humor included. C-3PO didn’t appear to have a sense of humor, but as a protocol 'droid he should at least have been able to recognize it.

I suppose you could say humor and love are two characteristics that make us uniquely human, so writers tend to deprive their android/robot/machine characters of such features. But we’re also human because we make and appreciate art, we invent, we imagine, we create, we nurture, and we use tools. So what’s the big deal about humor in particular?

If anybody has some examples of robot/android characters who do understand humor, or robots known for some other kind of deficiency, I’d be interested to hear how the author handles them.

Well, Bender seems to know how to give and receive belly laughs.

Be realistic. (If the term has any relevance when discussing SF.) When we have the technology to build real strong-AI robots, and if we use it at all, we won’t do it because we want another way to reproduce. We already have a perfectly good way to reproduce, and several more in the works. We’ll build robots because we want slaves. Who cares if your slave knows how to laugh, or love, or paint pictures?

If you watch the DVD commentary for Futurama, they kind of comment on this. They mention that robots are built by humans, and most just see them as tools and machines. Yet there are hobo robots, mafia robots, and a Hedonism Bot. The creators mention that someone had to build these. Why would someone build a homeless robot? And for that matter… build it with a built-in bindle and hat? :confused: :stuck_out_tongue:

C-3PO was known for his witty repartee, wasn’t he?

ahem

I think part of it is based on the realization that humor is something that’s very difficult to explain to anybody else, especially in logical terms (which are the only terms you could use with a computer or robot). Of course, the same is true of art, of creativity, and of love… and I think that cultural examples of robots having a hard time understanding those, or striving to master them, are pretty common.

Of course, by the time we have robots that are capable of approaching the human potential in other ways (even a humanoid robot who can clean up the dinner dishes seems a fair way from where we are at the moment), there would probably be somebody who’d try to come up with a humor circuit, using a fair amount of brute-force computation and some simple pattern matching. There’d be a lot of early bugs in such a system, with the robot laughing at incredibly inappropriate times or trying to make jokes that completely fall flat.

I was thinking of simply referring to the moment in Generations where Data throws Crusher off the holographic ship as an example of a humor misfire, but then I realized that it was only the other characters in the movie who weren’t laughing there. Every time I’ve seen it with other people (and sometimes even when watching the movie on TV alone), everybody watching the movie laughs at that moment. :slight_smile: So maybe it’s more complicated than that.

But, of course, the point is that Futurama is a comedy cartoon. Those robots weren’t really built, they were drawn, and they were drawn because the notion is a funny one. And it’s a funny notion because it doesn’t make logical sense, which underscores the issue of trying to program a robot to understand humor.

Hmm… come to think of it, that might be a useful function in a humor circuit. If there’s something going on that doesn’t add up or compute in a logical sense, chances are it’s funny on some level :smiley:
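Just to make that concrete, here’s a toy sketch in Python of what such an incongruity-based humor circuit might look like. Everything in it (the expectation table, the threshold, the function names) is invented for illustration; it’s a guess at the idea, not a real design:

```python
def incongruity_score(expected: dict, observed: dict) -> float:
    """Fraction of observed facts that contradict what the world model expects."""
    mismatches = sum(1 for key, value in observed.items()
                     if expected.get(key, value) != value)
    return mismatches / max(len(observed), 1)


def maybe_laugh(expected: dict, observed: dict, threshold: float = 0.5) -> bool:
    # Crude heuristic: if enough of what just happened "doesn't compute",
    # guess that it's a joke.
    return incongruity_score(expected, observed) > threshold


# Example: the robot expects waiters to carry trays, not wear them.
expected = {"tray_location": "hands", "soup_location": "bowl"}
observed = {"tray_location": "head", "soup_location": "lap"}
print(maybe_laugh(expected, observed))  # True
```

And the early bugs described above are baked right in: anything sufficiently surprising, funeral included, would clear the threshold.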

And we know what you do, Robot Arm. You do that thing that has given you Robot Hair all over your Robot Palm, & you oughtta be ashamed, or at least you oughtta wash all that WD-40 off your Robot Hand.

Depends on what your slave is built to do. Modern-day assembly robots need neither love nor humor.

Who knows, in the future? In space, a human being can be manufactured at minimal cost, maintained at some expense, trained with a great deal of effort, and (in Space Is Really Big terms) can’t be drop-shipped to your doorstep intact. Robots are pretty much the opposite.

How do we know that humans just don’t get robot humor?

OK, first let’s realize that we’re only talking about strong AI robots here, robots who are capable of asking the question “What is this thing you Hu-Mans call…emotion?”

It is my contention that a strong AI robot WILL have emotions. It isn’t a matter of “programming emotions”. It’s that a robot with no emotions wouldn’t DO anything. Let’s look at Commander Data. Why would Data be curious about emotions? Isn’t curiosity an emotion? If Data had no internal desires or goals or states, he wouldn’t be conscious or sentient, he’d be a zombie.

Data desires to learn. Data desires to understand human beings. Data desires to learn about the universe. Data wants to help people. Data wants to do a lot of things. Data doesn’t want to be deactivated. Data wants to discover his origins. Data HAS emotions. How can Data be sad that he can’t experience sadness, unless he can be sad? How can he be curious about curiosity unless he can be curious? How can he fear that he will never understand fear, unless he can feel fear?

A truly emotionless robot wouldn’t bother to get out of bed every morning. A truly emotionless robot wouldn’t be curious about anything, and wouldn’t care if you shut it off, and wouldn’t care about learning. A truly emotionless robot wouldn’t do anything.

Of course you can have non-sentient robots that just follow programmed behavior; they’ll do what you tell them to, and such programs can be very sophisticated, like the terrain-following software in a cruise missile. But robots like that aren’t conscious, they aren’t capable of learning, and they can’t assess their own inner states.

A robot that was capable of learning, that learned on its own, would have to WANT to learn. That desire to learn would be an emotion. A robot that didn’t want to be turned off or destroyed would have an emotion, the emotion of fear. Any internally generated goal that the robot had would be an emotion.
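A minimal sketch of that idea in Python, assuming a toy drive-based agent (the drive names, the numbers, and the action table are all made up for illustration):

```python
# "Emotions" modeled as internally generated drives; with every drive at
# zero, the agent has no reason to act at all.
DRIVES = {"curiosity": 0.6, "self_preservation": 0.9, "helpfulness": 0.4}

ACTIONS = {
    "curiosity": "explore the unfamiliar object",
    "self_preservation": "move away from the power switch",
    "helpfulness": "clear the dinner dishes",
}


def choose_action(drives):
    if not any(drives.values()):
        # The truly emotionless robot: nothing motivates any behavior.
        return "do nothing"
    strongest = max(drives, key=drives.get)
    return ACTIONS[strongest]


print(choose_action(DRIVES))                    # move away from the power switch
print(choose_action({k: 0.0 for k in DRIVES}))  # do nothing
```

Zero out every drive and the agent never does anything, which is exactly the point.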

A conscious robot might not have emotions exactly identical to ours, but a robot you could sit down and have a cup of coffee with, like Commander Data, would HAVE to have emotions of some kind, would HAVE to have internal states not too much different from human internal states, even if the hardware running his consciousness was vastly different from the soggy bag of cells in a human skull.

The cliched killer robot who wants to destroy all humans isn’t an emotionless killing machine, even though his emotions aren’t expressed on the simulated replicant face he has. Yes, the robot isn’t going to LOOK scared or angry on his plastic face as he destroys all humans, but how could he have a desire to destroy all humans without some emotion akin to fear or hate?

Our emotions don’t come from our conscious mind; they come from a simpler part of ourselves. We can recognize emotions in dogs and children, even if they can’t talk or add 2+2. So we imagine a calculating machine that doesn’t have that animal core, and imagine having a conversation with that calculating machine. But I don’t think a robot could walk around, or hold a real-time meaningful conversation in English, or clean up the dinner dishes unsupervised, if it didn’t have something akin to those earlier layers of the brain. A conscious robot isn’t going to be running top-down control over all processes in its body and brain; much of that is going to be pushed off to automatic and reflexive processes, like in a human. And that’s where robot emotions will come from.

Imagine a humanoid robot that can walk around like a person. That robot isn’t going to monitor the position of every servo and every motor in its body and brute-force top-down control the position of its feet and arms and body. It will have lower-level processors doing that work, just like a human or an animal walks. A bug with no brain can walk across an irregular surface; so can a lizard. They don’t do it by central control over their legs; things are decentralized. A robot that relied on top-down control wouldn’t be able to balance either. It would have to have decentralized control over its body position, like humans or lizards or insects. Push the robot, and it’ll balance, not because it calculates the precise motion needed to balance, but because of a network of sensors and motors that balance the body without control from the robot brain. The robot has a “desire” to balance. It has an animal self too, just like a lizard or an insect. And that animal self will be the source of the robot’s emotions.
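Here’s a minimal sketch of that reflex loop in Python, assuming a toy one-dimensional robot; the gain and the update rule are invented for illustration:

```python
def balance_reflex(tilt, gain=0.5):
    """Low-level reflex: a corrective lean proportional to the current tilt.
    It runs every cycle with no input from the 'brain'."""
    return -gain * tilt


tilt = 0.0
for step in range(8):
    if step == 2:
        tilt += 1.0  # someone pushes the robot
    tilt += balance_reflex(tilt)  # reflex fires; the brain is never consulted
    print(f"step {step}: tilt = {tilt:+.3f}")
```

The point of the design is that the correction happens inside the loop itself; nothing upstream ever has to issue a “balance now” command.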

In Robert Sawyer’s Factoring Humanity there’s an AI that kind of gets humor, but not completely.

I didn’t really offer an opinion when I made the OP, but I sorta think that any robot complicated enough to understand human speech, complete with its many nuances, has to be capable of understanding humor.

Start with two fundamental principles and you end up with something that sounds like Asimov’s Laws of Robotics. First, the robot has to keep itself powered up and available (even if it loafs in bed all day). Second, it must avoid charging itself up if doing so harms its owner. Those are basically two of Asimov’s laws: Three and One.

Robots exist today that run on fuel and are programmed to find it from innocuous organic sources. One day a robot might need to unplug an appliance in order to free up a plug socket to recharge. It must decide which appliance to unplug based on whether unplugging any of them would do foreseeable harm. In order to make that decision it has to know how to imagine and predict possible future outcomes of its actions and weigh the likelihood of each. In fact, before it can even get to a plug socket it has to decide whether there’s a safe path to it that doesn’t involve pushing a human out of the way.
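As a hedged sketch (the appliances and the harm estimates are invented for illustration), that decision might reduce to something like this in Python:

```python
# Estimated probability that unplugging each appliance harms the owner.
APPLIANCES = {
    "lamp": 0.05,
    "refrigerator": 0.40,  # spoiled food, maybe
    "ventilator": 0.99,    # clearly off-limits
}


def pick_outlet(appliances, harm_threshold=0.2):
    """First Law outranks Third Law: refuse any option above the threshold."""
    safe = {name: harm for name, harm in appliances.items()
            if harm < harm_threshold}
    if not safe:
        return None  # go without charging rather than risk harm
    return min(safe, key=safe.get)  # least foreseeable harm


print(pick_outlet(APPLIANCES))  # lamp
```

Note that the robot prefers not charging at all over crossing the harm threshold, which is the First-Law-over-Third-Law ordering in miniature.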

Right off the bat, that’s complicated thinking behavior, before we’ve even got to the “obey orders” part. I don’t understand why such a robot couldn’t eventually learn to recognize patterns of humor on its own.

I guess I’m fascinated why SF writers so often check the “No Sense Of Humor” column when writing about robots.

Totally aside from humor and emotions, the robots of Futurama are often subtly imbecilic; that is, they’re intelligent but they can’t see outside of their programming. Of course you can say the same for many humans…

I got a brain fart here, but how about the computer from Heinlein’s books that ran the Moon (Mike, from The Moon Is a Harsh Mistress)? He had a sense of humor, and was trying to learn more about it. The other computers/ships in his books have the same trait. Dora threatens to do stuff to the rest of ’em (usually Lazarus Long) that is meant in humor.

If I ever have a robot it better have a sense of humor.

-Otanx

I contend that the Robot on the left understands comedy.

R. Dorothy Waynewright, from the anime series Big O, is expressionless, & has an atonal voice, but a very subtle & occasionally quite sarky sense of humor.

Were she real, she’d make an excellent Doper.


I always thought Robby the Robot had his moments.

Keith Laumer’s Bolos (AI-driven supertanks) have emotions; they’re all honor and determination and devotion to duty.

I think a lot of the reason is human egotism. We don’t want to admit that a robot might be better than us, or even equal, so we come up with reasons why we humans are better than them.