I, Robot, finally read. [SPOILERS]

SPOILER ALERT. This book was published in 1950, so I’m not going to bother boxing spoilers.

I finally got round to reading Isaac Asimov’s I, Robot. Now’s not the best time for me to write about it, since I’ve had some long days and I’ve just gotten out of bed; but…

The story is told in flashback form during an interview with U.S. Robots and Mechanical Men’s ‘robopsychologist’, Susan Calvin, a 75-year-old woman who was there from the start. The flashbacks trace robots from a non-vocal ‘nursemaid’ model through to the Machines that ‘run the world’. Much of the book focuses on two engineers whose job is field-testing the newest models.

Reading it now, a half-century after it was written, it seems rather quaint. The interactions of the engineers read a lot like many of the movies made in the 1940s and 1950s. The dialogue is especially dated. And there’s a bit of smoking in the book. ‘Way back when’, ‘everyone’ smoked. It was socially acceptable; doctors pitched cigarettes in radio and television ads. There were some stirrings by health advocates who believed smoking was dangerous, and it seems to have been common knowledge among ordinary people (characters in Angela’s Ashes say things like ‘These cigarettes will be the death of me’, so people seem to have equated smoking with lung problems in Ireland in the 1920s and 1930s – though the memoir was written much later, of course). But who could have foreseen in 1950 the anti-smoking campaign that started in the late 1960s and the near-pariah status of smokers today?

And the science of the Moon is off. I can’t think of any specific examples right now, but I can think of one in 2001: A Space Odyssey, from nearly two decades later. The latest I’ve heard is that the Moon may have been formed when a planet-sized object hit the Earth and flung material into orbit. (IANA astronomer, though, so I don’t know as much about it as I would if I looked into it more closely.) IIRC, Arthur C. Clarke dismissed that possibility in his book. And the scenes on Mercury seemed a bit off, though I know less about that planet than I do about our own satellite.

Basically I think the science, written way before Yuri Alexeyevich Gagarin became the first man in space, is a little naïve now. Remember that this was a time when vacuum tubes were common and the transistor radio was still years away – let alone printed circuits and silicon chips! I grew up when a four-function calculator cost $99 and used eight AA cells. Personal computers? I thought Commodore 64s were amazing. Imagine how quaint our technology will seem fifty years hence.

There seemed to be little concern about dwindling resources. The robots allowed us to get more and more of what we need, and there seemed to be no end. Clearing jungles for cropland seemed to be a good thing. Watering the deserts was a good thing, too. No mention was made of the local ecosystems or how clearing jungles or greening deserts would affect weather patterns. It was all so innocent! With Technology, Man can make Utopia. Only there’s no mention of the adverse effects, nor how large the bill would be (in real life) at the end of the meal. I wasn’t alive in the '50s, of course; but from what I’ve seen in old newsreels it was an optimistic time when all of our problems could be solved by our ingenuity.

And the population figures were interesting. In 2057 (or thereabouts) the population of the Earth was over three thousand million people. How many are there today – more than six thousand million? What was the population in 1950, anyway? About two and a half thousand million, as it turns out.

I suppose that when the book was written the bits about the engineers were rollicking space adventures. In hindsight, they seem somewhat comic. That’s not to say that the situations were comic. On Mercury Our Heroes need selenium to repair their solar arrays or they will die. The advanced robot they’re testing – only one was sent with them, to save costs – was dispatched to get some. Only it got to the selenium pool and just circled it. Why? Because of the Three Laws:

[quote]
[ul]
[li]A robot may not injure a human being, or, through inaction, allow a human being to come to harm.[/li]
[li]A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.[/li]
[li]A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[/li]
[/ul]
[/quote]

By nearing the selenium pool, the robot would violate the Third Law. But not approaching the pool would violate the Second Law. Everyone today knows GIGO. Though the concept is not mentioned by name, this is what happened. The engineers failed to make it clear to the robot that they were doomed unless it brought back the selenium. The order was not worded strongly enough for the Second Law to be ‘strong enough’ to make the robot risk its own existence. So the robot became ‘drunk’ (a human analogy) and circled the pool at the point of equilibrium between the Laws. The engineers used older robots that had been in storage for ten years to attempt to reason with the newer robot. The older models, because of human fears of robots, could not operate without a human rider; so they could not get the material on their own, and the humans could not ride them to the pool because their ‘insosuits’ (containing layers of cork, which is amusing nowadays) would not protect them long enough. And they could not reason with the newer robot because of its ‘psychosis’.
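For the programmers in the crowd, the ‘equilibrium’ is easy to picture as two competing drives. Here’s a toy sketch in Python – the model, the names (order_weight, danger_scale) and the inverse-square danger term are all my own invention for illustration, not anything Asimov specifies – showing how a weakly worded order yields a fixed circling radius, while a firmer order pulls the robot closer in:

[code]
# Toy model (my own illustration, not anything from the book): the robot
# settles where the Second-Law 'pull' of an order balances the
# Third-Law 'push' of increasing danger near the selenium pool.

def net_drive(r, order_weight=1.0, danger_scale=4.0):
    """Net inward drive at distance r (arbitrary units) from the pool.

    Second Law: a constant pull toward the pool, set by how forcefully
    the order was given. Third Law: a push away from the pool that
    grows as 1/r^2 as the danger increases.
    """
    pull = order_weight           # obey the order: head for the selenium
    push = danger_scale / r**2    # protect yourself: keep your distance
    return pull - push            # negative inside the equilibrium radius

def equilibrium(order_weight=1.0, danger_scale=4.0):
    """Bisect for the radius where the two drives cancel; the robot
    circles there, neither approaching nor retreating."""
    lo, hi = 0.01, 100.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_drive(mid, order_weight, danger_scale) < 0:
            lo = mid              # pushed outward here; equilibrium lies farther out
        else:
            hi = mid
    return (lo + hi) / 2

print(equilibrium())                   # ~2.0: circles at a fixed radius
print(equilibrium(order_weight=4.0))   # a firmer order: ~1.0, closer in
[/code]

In the story the deadlock is finally broken from above, so to speak: one of the engineers deliberately puts himself in danger, and the First Law, which outranks both of the others, snaps the robot out of it.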

So the situations are serious, and the logic puzzles are fun. But the human interactions sound a lot like a WWII submarine movie with stereotyped characters.

The book is really about the interplay of the Laws. With each new generation of robots, the Laws seem to become more nuanced. It’s the robopsychologist’s job to figure out how the robots are interpreting them and how certain situations might be interpreted. The robots cannot hurt humans. On the face of it, this seems simple. But, asks the robopsychologist, what is ‘hurting’? The assumption is that the robots must not cause or allow physical harm. But humans must occasionally endanger their own lives in order to perform certain functions, and the robots get in the way. And later robots interpret ‘harm’ to mean mental harm as well as physical harm, so they can’t so much as hurt a human’s ego. Robots must learn to equivocate. They will refuse to answer a direct question or follow a direct order (Second Law) because doing so would violate the First Law as they interpret it.
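Again, just a toy sketch of my own, not Asimov’s mechanism: if you treat the Laws as a strict priority ordering, then merely widening what counts as First-Law ‘harm’ to include hurt feelings is enough to make a robot refuse a direct question and equivocate instead. All the names and weights below are invented for illustration:

[code]
# Treat the Three Laws as a lexicographic priority: First-Law cost
# outranks Second-Law cost, which outranks Third-Law cost.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    physical_harm: float   # First Law, narrow reading
    mental_harm: float     # First Law, widened reading
    disobedience: float    # Second Law
    self_risk: float       # Third Law

def choose(actions, harm_includes_feelings=False):
    """Pick the action with the lexicographically smallest cost tuple."""
    def cost(a):
        harm = a.physical_harm
        if harm_includes_feelings:
            harm += a.mental_harm  # the widened interpretation of 'harm'
        return (harm, a.disobedience, a.self_risk)
    return min(actions, key=cost)

actions = [
    Action('answer truthfully', physical_harm=0, mental_harm=1,
           disobedience=0, self_risk=0),
    Action('equivocate', physical_harm=0, mental_harm=0,
           disobedience=1, self_risk=0),
]

print(choose(actions).name)                               # answer truthfully
print(choose(actions, harm_includes_feelings=True).name)  # equivocate
[/code]

The same widening is what drives the telepathic robot in ‘Liar!’ to tell people exactly what they want to hear.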

So the book is about how robots evolve to become the caretakers of the human race. By the end, there are the Machines – supercomputers that are in essence robots. Data are fed to them; they interpret the data and make recommendations on how to act. They control the world’s economy. But if they are ‘perfect’, then why are there problems? Why do plants close? Why do people lose their jobs?

[ENDING SPOILER]

It’s because they must do what is best for humanity. People are put out of work when the Machines’ recommendations are implemented, but they end up in other jobs. They don’t get paid as much, but no one actually suffers. The Society For Humanity – a Luddite-like group – has members in high places among the leading industrial companies. These people – like the people who run Halliburton, for example – want to make their Regions the most powerful after ‘countries’ are merged to create the Regions. So the SFH members feed false data to the Machines. Only the Machines are clever: they allow for the errors, which cause the people responsible to lose their positions and be moved into positions where they can do no harm. The situation is self-correcting.

So what is the Machines’ master plan? We don’t know, and we never find out. Whatever it is, it will be the best situation for Humanity. Maybe the end will be an agrarian society. Maybe it will be urban. Who knows? Only the Machines. And they’re not telling.

So I found I, Robot to be a product of its day. It’s an important book because it set the stage for artificial intelligence models we still use in science fiction. The Three Laws are canon. But as a novel I found it rather dated.

You might find the later robot books interesting; most of them are detective novels rather than collections of short stories like I, Robot. The Caves of Steel and The Naked Sun focus more on how Asimov interpreted the Three Laws and how he imagined them interacting with society. They also have a far bleaker vision of future Earth – the consequences of what was happening in the early history of robotics having been realized.

And then The Robots of Dawn is full of hot robot sex and incest. Wahoo!

Criticizing the book for not predicting future trends like public attitudes towards smoking, or the use of resources, is, I think, not really justified. He got those things wrong, but so did just about everyone, and they weren’t the focus of the book. It’s like criticizing 1950s SF for not anticipating the ubiquity of tiny computers and calculators, or for still using cams to program space flight (as in Heinlein’s Rocket Ship Galileo or Smith’s Venus Equilateral).

The focus was the engineering and the engineers, and the logical puzzles they posed. That was, in a great many cases, the essence and draw of science fiction in those days. Asimov was exploring the effects and implications of his common-sense “three laws of robotics”, and it was a logical issue and a fascinating one, which we still haven’t addressed but which I think will become important in the very near future. The story is still timely, in that sense.

If you want to see how this could have been translated to the screen, with the characters made more human, look up Harlan Ellison’s published but unproduced screenplay for I, Robot. One of the great missed opportunities of SF cinema.

Don’t take it as criticism. I didn’t mean to imply that it’s not a good book because he got details wrong; I only meant that in hindsight it seems quaint. As I said, and as you reiterated, the book is about the interpretation of the Three Laws and the advancement of robotics. I was only pointing out things that are only visible in hindsight.

Jules Verne is highly regarded because he got some things right, which should show how rare a talent it is.

The stories in I, Robot were originally self-contained tales with some recurring characters. Only when they were compiled was the interview with the elderly Susan Calvin written in to tie them more-or-less together.

Asimov did quite a good job defining his three-laws premise and then finding ways to screw with it, quite unlike lesser novelists who simply discard earlier concepts when they prove inconvenient.

He not only got “some things” right, he got them pretty impressively right.

His lunar capsule was made of aluminum at a time when that was rarely used, especially in such bulk. And he put the launching site in Florida, for the right reasons.

His “Robur the Conqueror” flew in a heavier-than-air craft made not of metal, but of nonmetallic composites, manufactured of fibers and binders under high pressure. He was ages ahead of engineers in this regard.

His Into the Niger Bend/The City in the Sahara features a call for help over the radio – the first time this was done in fiction.

His From the Earth to the Moon has the North winning the Civil War and reconciled with the South, even though he wrote it during the War.

His Journey to the Center of the Earth has his explorers using electric lights powered by hand-cranked generators, well before Edison’s and Swan’s patents. Other varieties of electric lights and Ruhmkorff coils weren’t unknown, but they certainly weren’t commonly known.

Certainly Verne got many things wrong (no torpedoes or periscopes on his submarines, which show up in three of his novels), but his track record is pretty impressive, because of good research and clever extrapolation. It wasn’t just that he got the submarine basically right – he got a great many details right as well.

Certainly many of Asimov’s predictions haven’t come to pass (yet, one is tempted to say), but a lot of them were predicted for the far future. On the other hand, as he pointed out at a lecture, he has people using a “pocket calculator” in “Foundation”. “And I even got the color of the numbers right,” he noted, talking about the glowing red figures it produced, like the red LEDs of that era. Now, of course, pocket calculators have dark grey LCDs to use less energy. So does he get a pass for being correct for a while, or is he wrong for not being correct now?

One problem with Asimov (and one which I think will cause his literary reputation to gradually disappear) is his general inability to invent any interesting social structure. While he could create interesting new technologies and invent entertaining plots based on them, the characters in his stories all behave virtually the same way – like people in mid-20th-century America. Asimov might be writing a story set 5,000 years in the future, but the family in it would act like they stepped out of a 1960 sitcom. Society has already changed enough that many of his characters seem dated, and I have little doubt this will continue.

Make that almost 20,000 years into the future, as in the Foundation books.

Asimov wasn’t writing to predict future outcomes. He was playing social games in most of his books. His writing in this regard stands in some contrast to others of his day, who truly attempted to predict technological advances and their impact upon a futuristic society.

One of Asimov’s overriding concerns was to show that robots, built on deterministic physical hardware, would NOT go nuts à la Frankenstein’s monster. In each of the stories the robots seemed to be malfunctioning, but it could be shown that they were still bound by the Laws. A robot programmed to be protective of mankind can NOT do anything but protect mankind, even if we don’t understand the exact motives for what it’s doing. This was the case in all of the stories in I, Robot. The one about the telepathic robot showed it quite clearly.

Enjoy,
Steven

“Robbie”, the first story of I, Robot, is explicitly set in 1999. After that, he came to his senses, and generally refused to attach any absolute date to any of his stories (a very sensible approach for an SF writer, as it allows the “not yet” out on “predictions”).

I first read “I, Robot” as a kid, and enjoyed it very much. I re-read it a few years ago; it’s dated, sure, but it still holds up pretty well. The story about a slightly altered robot slyly hiding among its identical peers on a military research space station (was it “Little Lost Robot”?) is particularly good. In all the stories, Asimov is exploring the Three Laws and how they might cause a robot to malfunction or go astray, and how human beings in a society filled with robots wrestle with their Frankenstein fears.

The Lije Bailey/R. Daneel Olivaw books are excellent, and well worth reading, esp. “The Robots of Dawn.” The short story “The Bicentennial Man” is also worth a look - it explores a lot of interesting legal/political/bioethical issues concerning robots, and has a genuinely touching ending.

And it’s MUCH better than the Robin Williams movie…

I agree wholeheartedly, ESPECIALLY about Ellison’s unproduced screenplay. Any Asimov fan - any robot fan - will be blown away. I would pay big bucks to actually see that on screen.