SPOILER ALERT. This book was published in 1950, so I’m not going to bother boxing spoilers.
I finally got round to reading Isaac Asimov’s I, Robot. Now’s not the best time for me to write about it, since I’ve had some long days and I’ve just gotten out of bed; but…
The story is told in flashbacks during an interview with U.S. Robots’ ‘robopsychologist’, a 75-year-old woman who was there from the start. The flashbacks trace the robots from a non-vocal ‘nursemaid’ model up to the Machines that ‘run the world’. Much of the book focuses on two engineers whose job is field-testing the newest models.
Reading it now, a half-century after it was written, it seems rather quaint. The interactions of the engineers read a lot like many of the movies made in the 1940s and 1950s. The dialog is especially dated. And there’s a bit of smoking in the book. ‘Way back when’, ‘everyone’ smoked. It was socially acceptable; doctors pitched cigarettes in radio and television ads. There do seem to have been stirrings among health advocates who believed smoking was dangerous, and ordinary people seem to have made the connection too (people say ‘These cigarettes will be the death of me’ in Angela’s Ashes, for example, so smoking was being linked to lung trouble in Ireland in the 1920s and 1930s – though the memoir was written much later, of course). Still, who in 1950 could have foreseen the anti-smoking campaign that started in the late 1960s and the near-pariah status of smokers today?
And the science of the Moon is off. I can’t think of any specific examples right now, but I can think of one in 2001: A Space Odyssey from nearly two decades later. The latest I’ve heard is that the Moon may have been formed when a planet-sized object hit the Earth and flung material into orbit. (IANA astronomer, though, so I don’t know as much about it as I would if I looked into it more closely.) IIRC, Arthur C. Clarke dismissed that possibility in his book. And the scenes on Mercury seem a bit off, though I know less about that planet than I do about our own satellite.
Basically I think the science, written way before Yuri Alexeyevich Gagarin became the first man in space, is a little naïve now. Remember that this was a time when vacuum tubes were common and the transistor had only just been invented – never mind printed circuits and silicon chips! I grew up when a four-function calculator cost $99 and used eight AA cells. Personal computers? I thought Commodore 64s were amazing. Imagine how quaint our technology will seem fifty years hence.
There seemed to be little concern about dwindling resources. The robots allowed us to get more and more of what we need, and there seemed to be no end to it. Clearing jungles for cropland seemed to be a good thing. Watering the deserts was a good thing, too. No mention was made of the local ecosystems or of how clearing jungles or greening deserts would affect weather patterns. It was all so innocent! With Technology, Man can make Utopia. Only there’s no mention of the adverse effects, nor of how large the bill would be (in real life) at the end of the meal. I wasn’t alive in the ’50s, of course; but from what I’ve seen in old newsreels it was an optimistic time when all of our problems could be solved by our ingenuity.
And the population figures were interesting. In 2057 (or thereabouts) the population of the Earth was over three thousand million people. How many are there today? More than six thousand million? What was the population in 1950, anyway?
I suppose that when the book was written the bits about the engineers were rollicking space adventures. In hindsight, they seem somewhat comic. That’s not to say that the situations were comic. On Mercury, Our Heroes need selenium to repair their solar arrays or they will die. The advanced robot they’re testing – only one was shipped with them, to save costs – is sent to get some. Only it gets to the selenium pool and just circles it. Why? Because of the Three Laws:
[quote]
[ul]
[li]A robot may not injure a human being, or, through inaction, allow a human being to come to harm.[/li]
[li]A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.[/li]
[li]A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[/li]
[/ul]
[/quote]
Approaching the selenium pool would violate the Third Law, but not approaching it would violate the Second Law. Everyone today knows GIGO. Though the concept is not mentioned by name, that is exactly what happened: the engineers failed to make it clear to the robot that they were doomed unless it brought back the selenium. The order to get the selenium was not worded strongly enough for the Second Law to outweigh the robot’s Third-Law urge to protect its own existence. So the robot became ‘drunk’ (a human analogy) and circled the pool at the point of equilibrium between the two Laws. The engineers use older robots that have been in storage for ten years to try to reason with the newer robot. The older models, because of human fears of robots, cannot operate without a human rider. So they cannot fetch the material on their own, and the humans cannot ride them to the pool because their ‘insosuits’ (containing layers of cork, which is amusing nowadays) would not protect them long enough. And they cannot reason with the newer robot because of its ‘psychosis’.
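To make that ‘point of equilibrium’ idea concrete, here’s a toy sketch – not anything from the book; the inverse-square falloff, the numbers, and the function name are all my own invention – of how a weakly worded order balances against self-preservation and strands the robot at a fixed distance from the danger:

[code]
# Toy model of the Second Law vs. Third Law standoff in "Runaround".
# Everything here is invented for illustration; the book gives no equations,
# just the idea that the two "potentials" balance and the robot circles.

def equilibrium_radius(order_strength: float, danger_at_pool: float) -> float:
    """Distance at which obedience (a constant pull toward the pool) balances
    self-preservation (a push that grows as 1/distance**2 near the danger)."""
    return (danger_at_pool / order_strength) ** 0.5

# A casually worded order against a serious hazard: the robot stalls far out.
print(equilibrium_radius(order_strength=1.0, danger_at_pool=25.0))    # 5.0

# The same order given as a matter of life and death: the radius collapses,
# i.e. a strong enough Second Law imperative overrides the Third Law.
print(equilibrium_radius(order_strength=100.0, danger_at_pool=25.0))  # 0.5
[/code]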
So the situations are serious, and the logic puzzles are fun. But the human interactions sound a lot like a WWII submarine movie with stereotyped characters.
The book is really about the interplay of the Laws. With each new generation of robots, the Laws seem to become more nuanced. It’s the robopsychologist’s job to help figure out how the robots are interpreting them and how certain situations might be handled. The robots cannot hurt humans. On the face of it, this seems simple. But, asks the robopsychologist, what is ‘hurting’? The assumption is that the robots must not cause or allow physical harm. But humans must occasionally endanger their own lives in order to do their jobs, and the robots get in the way. And later robots interpret ‘harm’ to mean mental harm as well as physical harm, so they can’t so much as hurt a human’s ego. Robots must learn to equivocate. They will refuse to answer a direct question or follow a direct order (Second Law) because following it would violate the First Law as they interpret it.
So the book is about how robots evolve to become the caretakers of the human race. By the end, there are the Machines – super-computers that are in essence robots. Data are fed to them; they interpret the data and make recommendations on how to act. They control the world’s economy. But if they are ‘perfect’, then why are there problems? Why do plants close? Why do people lose their jobs?
[ENDING SPOILER]
It’s because they must do what is best for humanity. People are put out of work when the Machines’ recommendations are implemented, but they end up in other jobs. They don’t get paid as much, but no one actually suffers. The Society for Humanity – a Luddite-like group – has members in high places among the leading industrial companies. These people – like the people who run Halliburton, for example – want their own Regions to be the most powerful once the ‘countries’ are merged into Regions. So the SFH members would feed false data to the Machines. Only the Machines are clever. They allow the errors, which cause the people responsible to lose their positions and be moved into positions where they can cause no harm. The situation is self-correcting.
So what is the Machines’ master plan? We don’t know, and we never find out. Whatever it is, it will be the best situation for Humanity. Maybe the end will be an agrarian society. Maybe it will be urban. Who knows? Only the Machines. And they’re not telling.
So I found I, Robot to be a product of its day. It’s an important book because it set the stage for artificial intelligence models we still use in science fiction. The Three Laws are canon. But as a novel I found it rather dated.