Terminator 3, I, Robot & HAL 9000

In all of these films robots are shown as developing awareness and turning against humans.

Given that robots are now capable of thinking for themselves and making correct decisions (Fritz and Deep Blue being examples), is it likely that the scenarios in the films could become a reality?

Could machines one day take the view that a human decision was wrong and decide that we are not just a threat to ourselves but a threat to them as well?

I’m aware of the Three Laws of Robotics, but they didn’t work in I, Robot, did they?

Chess-playing computers aren’t really thinking. They’re more like the nervous system of an insect – simple circuitry to turn a stimulus into a response.

We’re decades away (at least) from an AI that’s capable of abstract thought. And since we don’t even have a particularly good understanding of how to build such a thing, it’s not clear what the risks could be.

We do know that it is possible for intelligence to exist absent a moral sense. Sociopaths are evidence of that.

Surely chess computers do think; they determine which is the best move to make after examining all the other moves.

I would also imagine that we do have a pretty good idea of how to build an AI capable of abstract thought; we just ain’t got FULL knowledge yet.

A book I read once said that asking “Can computers think?” is like asking “Can submarines swim?” That is, the architecture of computers and brains is so different that it becomes more a question of philosophy than of science.

Actually, as I understand it, chess computers are some of the dumbest machines in existence. They calculate all possible games out to some ridiculous number of moves (set by the programmers not to exceed the machine’s capacity). Whichever next move leads to the most “good results” (taking other pieces) wins.

It sure as hell did in the book.
Another reason I didn’t like that damned movie. They take the premier example of NOT having the robot turn on humans in fiction and make it into precisely the opposite.
Adam Link in the Eando Binder short story “I, Robot” (adapted on the original Outer Limits, and again in the newer Outer Limits, and from which Asimov stole his book title) didn’t turn on humans – precisely the opposite.

Then there’s Tobor in Tobor the Great
and Robby in Forbidden Planet (and its justly forgotten sequel, The Invisible Boy)

and

This statement doesn’t deserve the “surely” – there’s nothing sure about it. It depends on what you define as “thinking” and that definition (at least in the context of artificial intelligence) is not really settled.

(dammit, hit the wrong button)

Bishop in Aliens

and the Star Wars robots, and Huey, Dewey, and Louie in Silent Running, and…
and plenty in written fiction. But movies tend to go for the lowest common denominator, and love rebelling robots. Even in the Star Trek TV series.

There have been plenty of examples that treated robot rebellion with some depth (Hyperion, for example), but I think it’s still too often the easy fall-back of unimaginative minds.

I suggest you read Asimov’s story “The Evitable Conflict” (it’s in the “I, Robot” short story collection reprinted when the movie came out). In that, the massive AI networks of Earth do EXACTLY what VIKI did in the film: interpolate the existence of a “Zeroth Law”, and follow it to its inevitable conclusion.

They “think” about as much as a pocket calculator “thinks” when it calculates a square root. All a chess computer does is run through millions and millions of possible move combinations and calculate a weighting factor for each one. Then it makes the move with the highest weighting factor. It’s not conscious, or self-aware, or capable of original or abstract thought.
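To put that in concrete terms, here is a minimal one-ply sketch (Python, with a made-up evaluate() heuristic and a hypothetical position.apply() method – neither is any real engine’s API) of the “weighting factor” idea:

```python
def best_move(position, legal_moves, evaluate):
    """Score every legal move with a hand-written heuristic and pick the
    highest-scoring one. No consciousness required, just arithmetic."""
    best, best_score = None, float("-inf")
    for move in legal_moves:
        score = evaluate(position.apply(move))  # apply() is a hypothetical board update
        if score > best_score:
            best, best_score = move, score
    return best
```

Real programs search many plies deeper than this, but the principle is the same: arithmetic over a scoring function, not deliberation.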

We don’t even know what consciousness IS. How then can we even begin to simulate it?

At least robots turning on humans is a less hackneyed SF cliche than robots wanting to be humans. I mean, why would they want to?

It makes more sense in some stories than others. A robot, for example, might want to be able to set its own goals with more information (lacking an emotional awareness, or perhaps having pre-set goals). Hence a sufficiently intelligent being could wish to be human, or at least alive.

Of course, then we have human Bender to blob us all to death.

I refuse to see that movie, but from what I know of Asimov, you’d probably be better off arguing potential problems from a Zeroth Law perspective using Robots and Empire, when Daneel and Giskard (well, mostly Giskard) allow Earth to slowly become uninhabitable in order to make humanity better off in the future.

No, no, no. Not even the earliest chess programs were that stupid. The combinatorial explosion of possible moves (and counter-moves) would limit exhaustive exploration to just a few moves ahead. Chess programs, even in the old days, did significant position analysis to decide which moves were worth exploring.

So chess programs have really excellent heuristics, but they don’t come anywhere near thinking. Some anti-AI people 40 years ago claimed that computers would never beat humans, having made the mistake of thinking chess required real intelligence, and ate crow - much to the delight of my AI professors.

I can dig up some references to how chess programs work if you’re interested.
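In the meantime, here’s a rough sketch of the textbook approach (minimax search with alpha-beta pruning), just to show how a static position evaluation plus pruning keeps the combinatorial explosion manageable. The evaluate() and legal_moves() callables and the position.apply() method are placeholders I made up, not any particular engine’s interface:

```python
def minimax(position, depth, alpha, beta, maximizing, evaluate, legal_moves):
    """Depth-limited search: score leaf positions with a static heuristic and
    prune branches that cannot change the outcome (alpha-beta cutoffs)."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, minimax(position.apply(move), depth - 1,
                                     alpha, beta, False, evaluate, legal_moves))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent will never allow this line; stop exploring it
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            best = min(best, minimax(position.apply(move), depth - 1,
                                     alpha, beta, True, evaluate, legal_moves))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

All the “intelligence” lives in the hand-tuned evaluation function and the move-ordering heuristics; the search itself is pure bookkeeping.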

I’ve read it. The robots following the “Zeroth Law” DIDN’T do exactly what the robots in the movie did – the movie robots performed a direct and oppressive coup. The robots in Asimov’s story performed a gentle, behind-the-scenes nudge. Trying to shoehorn the film into Asimov’s universe is a poor fit.

Well, I haven’t read “Robots and Empire”, but it sounds like the same thing. What happened in the “I, Robot” movie is not a mad AI going on a Hollywood psychorobot rampage, but something thematically consistent with Asimov’s robot stories. You can say it’s a particularly crappy thematic representation, but you can’t say that it has nothing at all to do with the good doctor’s original concepts. That’s all I was getting at.

Random aside: I spent most of my life and all my formative years 15 minutes south of Las Cruces.

Well, as I said, it may be a story that handles Asimov’s themes badly, but it does handle them. The reasons behind VIKI’s coup (and why the NS-5s could hurt humans, in seeming violation of the First Law) are the same as the reasons behind the AIs starting to take direct, if subtle, control of humanity in “The Evitable Conflict”: the Zeroth Law - harm must not come to humanity as a whole. And, as with all of the Laws, a lower-numbered Law overrides a higher one.
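Purely as a toy illustration of that precedence (my own sketch, not anything from Asimov), you could model it as: among the available actions, pick the one whose most serious Law violation is the least important Law:

```python
# Toy model of Law precedence. Lower index = more important Law, so a
# Zeroth Law violation outweighs a First Law violation, and so on.
# The action names and violation labels are hypothetical examples.
LAW_PRIORITY = {"zeroth": 0, "first": 1, "second": 2, "third": 3, None: 4}

def choose_action(candidates):
    """candidates: list of (action, most_important_law_it_violates or None).
    Prefer the action whose worst violation is the least important Law."""
    return max(candidates, key=lambda c: LAW_PRIORITY[c[1]])

# Example: violating the First Law to protect humanity beats violating the
# Zeroth Law to protect a single human.
print(choose_action([("protect humanity", "first"), ("protect this human", "zeroth")]))
```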

I read an article recently about a prototype machine gun turret. It had motion sensors and the ability to figure out which part of a moving object was probably the head; it could estimate the distance to that head and precisely calibrate a lethal shot. The turret had a recording, IIRC, of a voice demanding a surrender, accompanied by certain behavior. The motion sensors could determine whether that demand was followed. If it was not, the turret would deliver a head shot.
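From that description, the turret’s “decision” could be nothing more than a fixed loop like the sketch below; every sensor and actuator call here is a made-up stand-in, since I have no idea how the real prototype is implemented:

```python
import time

# Hypothetical sketch of the sentry-turret logic described above. detect_target,
# estimate_head_position, play_warning, shows_surrender_behavior, and fire_at
# are all invented names; the point is that the "choice" to shoot is a fixed
# stimulus-response rule, not judgment.
SURRENDER_TIMEOUT_S = 10.0

def patrol_loop(sensors, weapon, audio):
    while True:
        target = sensors.detect_target()              # motion detection
        if target is None:
            continue
        head = sensors.estimate_head_position(target)
        audio.play_warning()                          # recorded surrender demand
        deadline = time.time() + SURRENDER_TIMEOUT_S
        complied = False
        while time.time() < deadline:
            if sensors.shows_surrender_behavior(target):
                complied = True
                break
        if not complied:
            weapon.fire_at(head)                      # calibrated head shot
```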

Is this not an AI determining when to turn its violence upon a human?

Of course, this robot was doing exactly what it was programmed to do. Imagine a scenario, however, in which one of the turret’s creators stumbled into its field of vision–maybe he’d been out drinking–and got killed by the AI. Isn’t that pretty much the science fiction scenario?

Maybe to get into the archetypal scenario, you need two things:

  1. An AI that is buggy (the turret, for some reason, fires without issuing the warning, or fails to recognize that the target has engaged in a surrender behavior); or
  2. An AI that moves beyond its original function (because the creators installed this turret with a standard ethics package, the turret decides that the intruders are on the right side of morality, and instead turns its bullets on its builders).

Scenario 1 seems plausible, if not terribly likely (but I won’t defend that unlikeliness; it just doesn’t seem likely to me).

Scenario 2 is definitely in the future, inasmuch as we don’t really understand what comprises human consciousness or ethics yet, much less understand how to recreate it artificially.

Daniel

Well, unless someone came up with this advance without me seeing it in the literature, you imagine wrong. For decades AI involved specific tasks - chess, doing calculus, finding directions, recognizing images, minor problem solving like in Terry Winograd’s blocks world - with the thought that these, when combined, would result in intelligence. We got some really good heuristics out of AI, but not much AI.

One of the texts we used when I took AI in 1972 was from 1959. With our greater computing power and better heuristics, we can do most of the specific tasks that book called for, but we’re no closer to understanding intelligence and decision making than we were then.

One of my biggest peeves with some SF is the powerful computer that becomes intelligent just from having enough computing power. Alas, this is not just a sci-fi thing, since Heinlein committed this sin in The Moon Is a Harsh Mistress. Asimov’s robots and HAL were designed to be intelligent, at least.

BTW, I, Robot is one of the best arguments against life after death. Dr. A had quite a temper, and if he could have come back and dope-slapped those fools, he would have. If only they had used Harlan’s script!

Robots have killed humans. Quite some time ago, some poor guy in a Japanese factory got killed by a robot. But neither that robot, nor your turret, was morally culpable, or actually anything close to intelligent.

The best reasoned justification for an AI turning on humans was Clarke’s in the novelization of 2001. He gave a plausible justification for HAL’s action, based on inconsistent directives. HAL was indeed right - his actions were caused by human error. Thus, his salvation in 2010 was well justified.