I, Robot

Untrue. Doing so would result in the “owner” being deprived of his property… thus, come to harm. So the robot says no. :smiley:

I’m gonna give this movie a shot. I had ZERO hope for it until I saw the trailer. Now I’m having a LITTLE hope, having had some faith restored by the fact that it doesn’t look like they salted and peppered the script with Funny Black Kid moments… that worked in Men in Black, but it wouldn’t here.

They may have “butchered” “the” “story”, but at least it looks like they wanted to make this a more “serious” product.

'Course, that was also how it seemed they wanted to do Evolution, based on the lack of humor in THAT flick…

So… to sum up… I don’t expect much, but I sincerely hope it’s good.

R. Daneel Olivaw ends up being sort of a superrobot who sets himself up as a kind of custodian of humanity. He manages to wrap his mind around the idea that it can be good to kill a human, if it’s good for humanity. Thus the zeroth law - it’s the same as the first law, but with “humanity” in place of “human”, and overrides the first law.

The Zeroth Law is an extrapolation of the First Law made by R. Giskard Reventlov, based on the premise that the long-term survival of humanity is more important than the present well-being of any human. The Law reads:

“A robot may not act to harm humanity, nor through inaction allow humanity to come to harm.”

It also concomitantly modifies the other Laws.

Unfortunately, Giskard could not know for certain that the action he took would prevent the destruction of humanity, and the conflict with the First Law drove him into stasis.

Some introspective robots postulate a Zeroth Law, which states that robots can not harm humanity as a whole, or, through inaction, allow humanity to come to harm. This could override the first law.

Sounds a bit like Hitler’s thinking . . .

Which, I believe, he was only able to come up with because (SPOILER for The Robots of Dawn and possibly some other later novels)

Giskard is a telepathic robot, thanks to some reprogramming done by a young girl who is an adult by the time of the story itself. Note that it’s been a little while since I read the books, I don’t know them as well as some others do, and I don’t have any of them currently available with me, but I know that much is right.

Actually, this thread is reminding me of why I have some problems with Asimov and Heinlein, among others. But that’s probably too big of a hijack for this thread. Maybe I’ll start a new one on the topic.

Actually, I thought the whole point of that short story was to say that religion cannot be argued about in a rational manner. It shows how two sane, rational creatures can have wholly different ideas about god.

Except that QT comes to believe that humans cannot possibly have made robots on the basis that no creature can create anything superior to itself, and refuses to believe in anything outside the space station it’s on as anything other than an illusion created to comfort humans. QT clearly isn’t rational, no matter how logical its arguments may be. (In fact, when you get down to it QT’s arguments are no different to those of Bomb #20 in Dark Star, which deduces that it is impossible to tell with certainty whether anything else exists and from this reasons that it must itself be God.)

I watched the trailer online a lil while ago, and it doesn’t look terrible. The director, Alex Proyas, did the first Crow movie and Dark City, so I have some faith in him. Will Smith seems to be withholding most of the wisecracking Men In Black/Independence Day style he is known for.

As long as there isn’t a Will Smith song/video to go with this movie, I’ll probably go see it.

No clue. From what I can tell, however, it’ll be a miracle if Will Smith isn’t murdered by Asimov purists.

That said, if you want to see robots breaking stuff and getting blown up, I’d imagine you won’t be disappointed by it.
bamf

“Will Smith is ‘Will Smith’ in ‘Not Another Summer Movie’.”

You know what I’d like to see? Will Smith playing a villain. I’m not sure if it’d work out as well as Training Day did for Denzel, but it would certainly be nice to see him playing something other than the Wisecracking Protagonist.

bamf

So some good will come of the movie after all.

harborwolf - Alex Proyas is a fine director, but he can’t make a silk purse out of a sow’s ear. We’ll be lucky to get a pigskin moccasin.

I’d go further than that, even… They not only override all other desires, they are the foundation of all the robot’s other desires. Everything that a robot desires, it desires because of one of the Three Laws. Likewise, they’re the basis of all other robot programming: Robot programming other than the Laws themselves is implemented as a special case of the Second Law.

And it’s not really fair to include Sally (or the robots in “Let’s Get Together”, or the Bard, or Multivac) in these discussions, since they were really in a different continuity, and exist without reference to the Three Laws. But they’re still rare exceptions, and with the possible exception of the robots in “Let’s Get Together” (who might not have known what they were doing), those robots are still fundamentally good.

What irks me is the line from the trailer, something to the effect of “But what if the robot has to kill a human, in self-defense? Never thought of that one, did you?”. Uh, yes they did. If a robot had to kill a human to preserve itself, it would choose to die rather than hurt the human, since the Third Law only applies when it does not conflict with the First or Second Laws. :rolleyes:

Multivac wasn’t mentioned. The supercomputer that goes awry was The Brain.

This brings to mind a question: what if a robot had to kill a human to prevent another human from being harmed?

I believe an Asimov robot considers its owner superior to other humans for the purposes of the Three Laws. For instance, a robot will obey an order from its owner above orders from other people. Therefore, a robot is capable of killing a human if doing so is the only way to prevent its owner from being killed. Of course, the robot would attempt to use non-lethal force if possible, since this would cause less harm. Also, if a robot is faced with a situation à la the first Spider-Man movie, where a bad guy drops one person and a busload of people at the same time, the robot will save the busload if saving both would be impossible, since harm to one person is less than harm to many. Of course, it’s just that sort of reasoning which allowed robots to come up with the Zeroth Law.
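The precedence and harm-weighing described above can be sketched as a lexicographic preference: the First Law dominates everything, the Second Law breaks ties, and the Third Law comes last. This is purely a toy illustration — the function, action names, and numeric “harm” scores are all invented here; Asimov never formalized the Laws this way.

```python
# Toy sketch of the Three Laws as a strict lexicographic ordering.
# All names and numeric "harm" scores below are invented for illustration.

def choose_action(actions):
    """Pick the action that best satisfies the Laws, in strict priority:
    1) minimize harm to humans (First Law),
    2) prefer obeying human orders (Second Law),
    3) minimize harm to the robot itself (Third Law)."""
    return min(
        actions,
        key=lambda a: (
            a["human_harm"],    # First Law dominates everything else
            -a["obeys_order"],  # Second Law only breaks First-Law ties
            a["self_harm"],     # Third Law is the last consideration
        ),
    )

# The bus dilemma from the post: harm to one person weighs less than
# harm to many, so the robot saves the busload even at cost to itself.
actions = [
    {"name": "save_one", "human_harm": 30, "obeys_order": 1, "self_harm": 0},
    {"name": "save_bus", "human_harm": 1, "obeys_order": 1, "self_harm": 5},
]
print(choose_action(actions)["name"])  # -> save_bus
```

Because the comparison key is a tuple, no amount of obedience or self-preservation can ever outweigh even a small difference in harm to humans — which is exactly the subordination the Laws demand, and also exactly the kind of arithmetic over “harm to many vs. harm to one” that leads toward a Zeroth Law.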

From here.
So they just took the name…it may or may not be a good movie on its own merits, but I don’t know if I can go and see it…

To add to what ricksummon said, if a robot had no other option but to kill one human to protect another, after taking such an action the robot will likely be driven into mental shutdown, due to the breaking of the First Law.

You’re quite right; though, to take it a bit farther, it’s very likely that a fully Asimovian robot would in fact be unable to kill one human to protect another. Not that it wouldn’t see the point, but the First Law conflict would be so great it would cease to function before it could act.
I don’t think Asimov ever suggested the Three Laws were in any way a perfect guide; most of the robotics stories are about ways in which unanticipated situations lead to strange results under the Laws of Robotics. The notion that an Asimovian robot might think it “had” to defend itself, even if that meant killing a human, is ludicrous. The idea is: they’re property. And tables shouldn’t have the power to chomp on their owners.

I showed a picture of the movie’s Susan Calvin to my mum, a lifelong Asimov reader, and she gave up on the movie right there.

I mean really. It’s like if someone had decided to sex up Annie Wilkes. It’s not just silly, it’s completely against character.