HAL 9000 Playing Chess in 2001

The article offers possible explanations that “HAL let the first spacewalk go as planned in order to find the best way to kill whoever was outside” and/or “HAL only decided to kill Bowman and Poole after he lip-read their conversation about terminating him.” I’m not sure either claim can be proven.

The 1965 version of the script has two uneventful replacements of the unit with spares before HAL kills Poole on the third spacewalk. He then releases all the air out of Discovery to kill Bowman, who climbs into an emergency airlock, while HAL shuts off the life support for the rest of the hibernating crew. It’s interesting to see how this version of the script differs from the finished 1968 film. This too-on-the-nose bit from Mission Control gives a nice summary of HAL’s malfunction, but I’m glad it didn’t make it to the final film. I think a similar explanation is given in 2010, though.
MISSION CONTROL
Hello, Dave. I think we may be on to an explanation of the trouble with the Hal 9000 computer. We believe it all started about two months ago when you and Frank interrogated the computer about the Mission (the script calls for a flashback of Bowman directly asking HAL if there’s more to the mission than he and Poole were told). The true purpose of the Mission was to have been explained to you by Mission Commander Kaminsky on his revival. Hal knew this and he knew the actual mission, but he couldn’t tell you the truth when you challenged him. Under orders from earth he was forced to lie. In everything except this he had the usual reinforced truth programming. We believe his truth programming and the instructions to lie, gradually resulted in an incompatible conflict, and faced with this dilemma, he developed, for want of a better description, neurotic symptoms.

It’s not difficult to suppose that these symptoms would centre on the communication link with Earth (the failing AE-35 unit is a critical piece of the comm antenna), for he may have blamed us for his incompatible programming.

Following this line of thought, we suspected that the last straw for him was the possibility of disconnection. Since he became operational, he had never known unconsciousness. It must have seemed the equivalent to death. At this point, he, presumably, took whatever actions he thought appropriate to protect himself from what must have seemed to him to be his human tormentors.
I now think you’re right that HAL was as surprised as the astronauts by the failure of the part, since in the 1965 script, they decide to leave the second replacement unit in place until it fails, which it does. Poole: “Hal was right all the time.” I thought HAL made up the failure, but now it looks to me like he just capitalized on it to kill off the crew.

I’m spending too much time on this, and apologize for the hijack. I think the overall importance of HAL’s malfunction is to spur Bowman on to the next evolutionary step into the Star Child / Superman by forcing him to abandon technology and encounter the orbiting monolith in nothing more than a space pod. Someone made the claim that HAL has more personality than the astronauts, who are basically low order drones. I like this idea that technology has in many ways become an obstacle to Man’s development by the time of the Discovery mission.

I think one common theory about HAL was that he was developing Multiple Personality Disorder. (No doubt there are threads here that have discussed this. Search if you like.)

The main “Good” HAL was the dominant personality until the pod/lip-reading incident. The “Bad” HAL* was formed around the secret orders and started causing problems (reporting the unit as bad, etc.), finally taking over completely. Hence Good HAL didn’t try to kill Bowman on the first EVA, but Bad HAL did on the second.

The chess playing ability of HAL is nothing. It’s the Natural Language ability plus:

“The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

A computer that never makes a mistake. Right.

*Kubrick really goofed in not adding a mustache to HAL once he turned evil.

HAL in fact did not make a mistake. He was given conflicting instructions, and told to lie. He was quite correct - the problem was indeed human error.
While Kubrick had nothing to do with this, HAL was deemed worthy of making the next leap into the “Starmind” at the end of 2010.

The linked website is very interesting. Beyond the chess problem noted here, the writer finds many places where symmetry is broken, which he considers very significant. Some of these are interesting; some might just be the result of production constraints.
One place I know the writer is wrong is a little section casting doubt on the scene in which Bowman returns to the ship without his helmet. This was a topic Clarke hit upon many times, both in fact and in fiction (Islands in the Sky, I believe). During the roadshow presentation, at least in NY, you were given a little flier that justified this. I still have mine. Sometimes there are things you can’t read much meaning into.

I find it comprehensible that a computer could play chess perfectly, even if there’s not enough memory in the universe to hold all possible games and outcomes.

But I find it incomprehensible that a computer could be thought to be “perfect,” even after decades of development. Our computers have become more complex, but has this led to more perfection or less? My personal experience suggests “less,” by a long shot. The more complex the system, the more possibilities for breakdowns or out-and-out programming mistakes. Somewhere in HAL’s code is an accidental divide-by-zero instruction, and then what will happen?

Blue screen of death.

While HAL was designed by humans, it was designed to learn on its own, so it basically programmed itself. That’s what the “Heuristic” in HAL’s name is about. Back in the 60s this idea was seen as a feasible future direction for computer science. Eleven years on from 2001, computer programs are still basically only as smart as their designers, though IBM’s Watson is getting closer to the idea of HAL. HAL in 2001 was a sentient being (a topic expanded further in the sequels), gifted with all the knowledge of humanity without the emotion, but as a being ultimately controlled by men, still fallible.

Wow, I had one of those, but totally forgot about it until now.

Something wonderful.

It just goes to show that back in those days, the ability to play chess and the ability to carry on natural-language conversations were considered roughly equal in difficulty. Playing chess required thinking, so the assumption was that if a computer could learn to play chess, it could be taught to do anything a human being could do.

We might have to define (or redefine) “thinking.” A large part of chess playing can be done by following a stored library of moves; the rest can be done by examining and rating potential moves based on rules of advantage. Unless those programs can also pass the Turing Test, none of them qualifies as “thinking.”

Although I do agree that may have been the mind-set once upon a time.
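For what it’s worth, here’s a minimal sketch in Python of what “rating potential moves based on rules of advantage” boils down to. The piece values are the conventional rough ones, and helpers like apply_move and legal_moves are hypothetical stand-ins, not anything from the thread or a real engine:

    # Toy illustration of "rules of advantage": score a position by material
    # balance, then pick the legal move whose resulting position scores best.
    # Real engines add deep search and positional terms, but the principle
    # is the same.
    PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

    def evaluate(position):
        """Material balance: positive favors White, negative favors Black."""
        score = 0
        for piece in position:  # position: iterable of piece letters, e.g. ['P', 'p', 'q']
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
        return score

    def best_move(position, legal_moves, apply_move, white_to_move=True):
        """Rate each legal move by evaluating the position it leads to."""
        rated = [(evaluate(apply_move(position, move)), move) for move in legal_moves]
        pick = max if white_to_move else min
        return pick(rated, key=lambda rm: rm[0])[1]

The “stored library of moves” half is even simpler: an opening book is basically just a dictionary mapping known positions straight to a stored reply.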

That’s creepy but fantastic. Love it!

:: golf clap ::

Not directly about HAL, but about meeting Keir Dullea, with some interesting background stuff on filming 2001: I met Keir Dullea tonight! - Cafe Society - Straight Dope Message Board

That’s the thing: they were wrong back in the 60s in predicting that being good at chess would coincide with an overall advance in AI in general. No one imagined the many orders of magnitude of increase that occurred in computer hardware (even Clarke had HAL using tubes and taking up an entire room!). The successes that machines now have in chess come from bazillion-iterations-deep brute-force searching rather than from any ‘learning algorithms’. Kinda disappointing, but IBM’s Deep Blue chess computer is still essentially just an immensely fast calculator…
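Just to put a rough number on “bazillion-iterations-deep” (my own back-of-the-envelope arithmetic, not from the post): a chess position averages around 35 legal moves, so a plain brute-force search grows as roughly 35 to the power of the number of plies:

    # Rough game-tree sizes for plain brute force (before alpha-beta pruning
    # cuts the work down): about 35 legal moves per position on average.
    for plies in (4, 8, 12):
        print(plies, "plies ->", f"{35 ** plies:.1e}", "positions")
    # 4 plies  -> 1.5e+06
    # 8 plies  -> 2.3e+12
    # 12 plies -> 3.4e+18

Which is why the hardware mattered so much: the wins come from grinding through positions fast, not from anything resembling learning.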

IIRC Deep Blue had special-purpose chess hardware as well, which is not that difficult to design. I know Belle did.

ETA: BTW, what did you think of his analysis? I think the guy is a bit wacked out, though he did see some interesting errors in the movie.

We had one around that time that also used blinking lights. Each row/column had a light and two would blink to indicate the piece to be moved. Each square on the board had a tiny hole in the center, and each piece had a pin that would go in the hole. You had to press the piece down slightly, which would be detected underneath the board somehow. Then another set of two lights would blink to indicate where the piece was to be moved. To move your own piece, you would also have to press it down before moving, and also after, so the computer would know where the piece was placed.
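If anyone’s curious, that press-to-register scheme amounts to a tiny two-step state machine. This Python sketch is just my guess at the logic from memory (class name, square labels, everything here is made up, not the board’s actual firmware):

    # Guess at the move-entry logic: the first press marks the origin square,
    # the second press marks the destination, then the machine resets.
    class BoardInput:
        def __init__(self):
            self.pending_from = None  # square pressed but not yet paired with a destination

        def press(self, square):
            """Called when a piece is pressed down into a square's hole."""
            if self.pending_from is None:
                self.pending_from = square      # pick-up press: remember the origin
                return None
            move = (self.pending_from, square)  # put-down press: complete the move
            self.pending_from = None
            return move

    board = BoardInput()
    board.press('e2')         # press the pawn down before moving it
    print(board.press('e4'))  # press again at the destination -> ('e2', 'e4')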