Robots just don't get it

Number 5 is alive!!!

We learn this because he laughs.

I remember them as being human brains surgically installed into the tanks, not A.I.'s.

No, that’s A Plague of Demons, not the Bolo books. Bolos are A.I.s, not brains in tanks.

and he shows sadness when he accidentally crushes a grasshopper while jumping after it… and by extrapolation learns that “disassemble = dead”

<sadly holding crushed grasshopper> Reassemble, Stephanie, reassemble?!?
I can’t, #5, it’s dead…
Disassemble, dead? Disassemble = DEAD… NO DISASSEMBLE NUMBER FIVE!

When Stephanie is under attack, J5 blocks the assault

<angrily> NO DISASSEMBLE STEPHANIE!

Ah, I’d only read A Plague of Demons.

Well, I vote that Johnny 5 is alive and has a sense of humor. Also, though not a great example, I believe that Robot from “Lost in Space” developed a fairly dry bit of humor and understood sarcasm in a way few people can.

I agree completely.

No . . . if it were programmed with Asimov’s Laws of Robotics or some equivalent, it would do whatever it was told, and it’s very unlikely it would need emotions to carry out its tasks.

Irrelevant. Data was not designed to be a slave, he was designed by an eccentric inventor who wanted a son. I very much doubt a robot of that kind will ever be built IRL.

“Mom always said I should make new friends.”

“Ha. Ha. Ha.” :smiley:

Actually, I would like to point out that C3PO can detect humor; he just tends to take people seriously until he realizes they’re messing with him. Also, R2D2 has a hell of a sense of humor; we just can’t understand a damn thing he ever says (file this under “Humans don’t get robot humor”). Also, I found the various BattleDroids built by the Trade Federation to be quite amusing due to their banter (I’ve also adopted their confirmative response of “Roger Roger”).

Also, as someone pointed out, Commander Data does have a lot more emotion than he realizes, although starting with Star Trek: Generations he has the option of a full range of emotions, including humor and fear, because he installed Lore’s emotion chip (starting with First Contact, he can turn the chip off when needed). That said, he doesn’t seem to know what to do with some of the emotions, such as when he discovers that a particular beverage is “absolutely REVOLTING!” and then asks for more of the same.

You mean Mycroft “Mike” Holmes, from THE MOON IS A HARSH MISTRESS.

Yeah, but how could you create a robot that can understand natural language, talk to you in English, walk around, and interact smoothly with physical objects, and yet have this robot be a slave?

OK, you tell it that it has to obey all orders. But you can’t just “program the robot to follow orders”. What would the code for that look like? It seems to me that a conscious robot that followed orders would be a being that had a strong desire to follow orders, rather than a robot following programming.

I guess what I’m getting at is that there can be robots that “just follow orders”, that don’t have emotions, that just do a task, but those robots won’t be conscious beings. Any robot like Data or R. Daneel Olivaw or Bender that can learn, make decisions, converse in natural language, and reason will HAVE to have emotions. You won’t be able to program this robot to follow orders because this robot must be capable of learning, that is, reprogramming itself. It can’t just have a giant lookup table of English phrases and the proper response; it has to remember what you say, it has to remember what it has said, it has to remember the things it has seen.
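To make the “giant lookup table” point concrete, here’s a toy Python sketch (purely hypothetical, not from any of the books) of a robot that only “follows orders”. Every name in it is invented for illustration. Notice that nothing in it can learn a new order, remember a past one, or want anything:

```python
# A deliberately naive "just follows orders" robot: a pure lookup table.
# It has no memory, no learning, and no desires -- exactly the kind of
# machine the post argues could never be conscious.
ORDERS = {
    "vacuum the floor": "vacuuming...",
    "weld the chassis": "welding...",
}

def obey(order: str) -> str:
    # a single table lookup; an unknown order simply fails,
    # with no curiosity about what it might have meant
    return ORDERS.get(order, "ORDER NOT RECOGNIZED")

print(obey("vacuum the floor"))  # vacuuming...
print(obey("tell me a joke"))    # ORDER NOT RECOGNIZED
```

The table can be made arbitrarily large, but it never becomes anything more than a table, which is the post’s point about why a conversing, learning robot has to be built on something else entirely.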

In order for a robot to act like these fictional robots do, to understand human language, it would have to understand human behavior on some level. I’m arguing that our conception of what conscious robots would be like is totally flawed. You could never have a strongly conscious robot without emotions or desires, an emotionless pure intellect. As soon as you give that robot a reason to do anything other than stand there, you’ve given it emotions.

Even if all you want is to give the robot the desire to follow orders, you’ve given it an emotion. Emotions aren’t costly add-ons to consciousness, they ARE consciousness. A dog is conscious, even though it can’t talk or reason. A person with Down’s syndrome is conscious, even though their ability to reason abstractly is limited.

Yes, we could have “slave” robots, but they’ll just be more sophisticated versions of today’s automated vacuum cleaners and industrial welding machines. It seems to me that it would be much more difficult to create a vacuum cleaning robot that has consciousness yet has no emotions than it would be to create a vacuum cleaning robot that has the emotion of desiring to see the floor clean. Or you could create a robot without consciousness that just runs the vacuum over the floor without running into people, that follows a program. But a robot you can program CAN’T be conscious.

It’s a very, very long way away, but I don’t know about that.

What about Kryten from Red Dwarf? He has clearly been programmed with a cleaning fetish. When Lister and Rimmer first meet him on the Nova 5, he’s obsessively cleaning the skeletons of the crew who died as a result of the crash Kryten caused by cleaning the ship’s computer with nice, hot, soapy water and a fresh wax job…

Rimmer, I believe, called him an “Android Norman Bates”,
and through constant derision and insults from Rimmer, Kryten seems to have developed a minor inferiority complex…

And on another tangent: in the novel “Mostly Harmless” I believe Douglas Adams sort of jokingly figured out a way for robots to figure things out on their own, based on the infamous “Herring Sandwich” experiments at the MMIFSAPWOTSO (MaxiMegalon Institute for Slowly and Painfully Working Out the Surprisingly Obvious)

An Adamsian robot (Mostly Harmless-vintage) can have two control states, “Bored” and “Happy”. These states are controlled by a conditioning chip that determines whether the “Happiness” code makes it from one side of the chip to the other. Simply program in that the preferred status is “Happy”, and the robot will figure out how to be “Happy” on its own…
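The two-state scheme above can be sketched in a few lines of Python. This is a toy interpretation, not anything from the novel: the class name, action names, and the “conditioning chip” set are all invented for illustration. The robot just remembers which actions flipped it to “Happy” and prefers them afterward:

```python
import random

class AdamsianRobot:
    """Toy two-state robot: tries actions, remembers which ones make it Happy."""

    def __init__(self, happy_actions):
        # the hidden "conditioning chip": which actions let the
        # Happiness code through (unknown to the robot itself)
        self._happy_actions = set(happy_actions)
        self.state = "Bored"
        self.preferences = {}  # learned: action -> times it produced "Happy"

    def try_action(self, action):
        # the chip decides whether the Happiness code makes it across
        self.state = "Happy" if action in self._happy_actions else "Bored"
        if self.state == "Happy":
            self.preferences[action] = self.preferences.get(action, 0) + 1
        return self.state

    def choose(self, actions):
        # prefer whatever has made it Happy before; otherwise explore at random
        learned = [a for a in actions if a in self.preferences]
        if learned:
            return max(learned, key=lambda a: self.preferences[a])
        return random.choice(actions)

robot = AdamsianRobot(happy_actions={"make tea"})
robot.try_action("sulk")      # state becomes "Bored"
robot.try_action("make tea")  # state becomes "Happy" -- and is remembered
print(robot.choose(["sulk", "make tea", "count herring"]))  # make tea
```

Of course, as the thread’s earlier argument goes, once you’ve given the robot a preferred state and the drive to reach it, you’ve arguably already given it an emotion.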

What, like there have been no human slaves? No human fanatics, or brainwashing victims? No humans with neurological conditions that severely impair their ability to do or feel some things, but not others?

I’m just saying that the field of neuroscience is complex and not yet fully understood, and the field of artificial intelligence is in its infancy, at best. So it’s a little soon to say how a real AI definitely would or wouldn’t behave.

Marvin the paranoid android. Yes, I realize he spends most of the books/radio script being depressed, but if you remember when he’s sent out to stop the tank, you can tell he’s clearly messing with its puny little mind. If a cat can derive enjoyment from torturing mice, then surely Marvin did the same with the tank.

In one of Fred Saberhagen’s Berserker stories, one of the berserkers learns humor from a comedian who’s been exiled.

However, Asimov’s robots, at least, do have emotions, or the robotic equivalent. They’re “happy” when they’re doing what they’re told or see humans happy, and “unhappy” if they can’t carry out their orders or see humans unhappy or suffering, and those feelings are a direct result of his three laws. Some of the robots in his books have even more developed emotions and desires than that. Andrew Martin wants to be free, and wants to be human. Daneel cares for Elijah Bailey as a friend, and makes a special effort to protect and serve him, and also feels the same way about the robot Giskard, going so far as to tell him that if he were ordered to destroy Giskard, he wasn’t sure if he would be able to do it, even though, according to the three laws, he would have to, because Giskard is only a robot and not a human being.

Give me a one-paragraph explanation of humor… it’s not an easy subject to explain. We laugh at a lot of things that you wouldn’t think are funny, at least not if you had just explained humor, anyway.
If you can’t clearly define the many faces of humor in English, it’s gonna be pretty friggin’ hard to do it in code.

Bungie’s various first-person shooters have always featured AIs with a sense of humor (sometimes quite a dark one, as in the case of the “Marathon” games). Cortana, the AI from “Halo,” is a classic sidekick: she provides formidable support to the protagonist with a side order of sarcastic banter.

She also has one of my favorite lines from Halo 2: “Me. In your head. NOW.”