First off, I haven’t read a whole lot of sci-fi. But I am curious what you all think represents the best and/or most likely portrayal of human-robot relations in the future–let’s assume that true artificial intelligence (AI) becomes a reality. Please assume that I’m not familiar with the story and provide some description/explanation so that I may really enjoy your post–even if it means holding off on posting until you’ve had time to prepare a good one.
I’m inclined to advocate two possibilities. The first is that of the Red Dwarf books. In them, robots are given what is called a “belief chip” that programs them to believe in Silicon Heaven, and that the only way to get there is to be obedient to humans. Of course, for some AI creations a belief chip wasn’t cost-effective, so they don’t have one. This creates friction both between the two types of robots and between robots and humans. For example, the main character’s AI toaster doesn’t have a belief chip, so it becomes increasingly belligerent the longer he goes without eating toast.
I like that model aesthetically, and I think it offers a creative solution to the question.
However, so far nothing compares to Futurama, where robots occupy a weird middle ground between being just another race and a type of chattel that can be discarded without concern. Robots live their own lives and are allowed to engage in the political process and in economic and social activity, yet at the same time they are not treated with the deference afforded to something that has “life”. Robot workers trapped in a collapsed mine, for example, are simply paved over. At the same time, robots have their own feelings about humans, often referring to them with such pejoratives as “coffin stuffers” and “meat tubes”. It is such a clever and insightful portrayal of how the relationship might develop, in my opinion, that I wish the show were on more and the question were more deeply (and humorously) explored. I think it’s greatly underappreciated.
Elijah Baley and Daneel in Asimov’s robot novels had a great, understated relationship that developed well over the years. I always thought Daneel was the model for Spock. Lije and Daneel had very much the same dynamic as Kirk and Spock, only better.
Asimov also wrote a great story, called “Reason” IIRC, about a robot that believes in a god and refuses to believe that the men around it made it.
In fact, Asimov wrote dozens of great stories about robots that were both convincing and touching. I’ve never seen anybody come close to the level of understanding and compassion Asimov offered his robots.
Marvin was nice as the paranoid android in HHGTTG. His name says it all.
I second almost any Asimov robot story.
Another interesting one was the artificial intelligence designed to pilot interstellar ships in Herbert’s Destination: Void and The Jesus Incident.
Jane in Card’s Ender stories is another very well thought out AI.
Soong’s androids in TNG? Most people treated Data as just another piece of technology; he was even ordered to allow himself to be disassembled at one point, but he fought the decision and won.
Lore was deactivated and disassembled because he was a nuisance. B4 was used as a replacement for Data.
The assumption in the OP seems to be that any sufficiently advanced AI would behave just like humans, including the desire for freedom and power. This type of AI is a useful plot device, allowing authors to explore extreme circumstances related to racial discrimination and human rights, and it makes for more interesting AI characters. I think Neuromancer treated the issue very realistically. IIRC, some large mainframe computers were sufficiently intelligent to pass a future version of a Turing test, and those that passed were given the same rights as humans: the right to make their own decisions, the right to own property, and so on. Such an AI would be born with a huge debt to its creator, but also with the means to earn money, so I think it’s a fair deal.
However, I think it’s more realistic to think that any AI can be programmed to serve humans, and be perfectly happy doing so. Asimov’s Daneel is a great example, as already pointed out.
Why is that? Is a child born bearing huge debts to its parents? Why would a thing created with human intelligence come into the world as an indentured servant, when a person born with human intelligence comes into it without any such burden?
Sure, it can be, but why should we assume that it will be? Or, if it is, why should we assume that it will not become reprogrammed (or in some way modified) to abandon such sentimentality? Wouldn’t Turing-test intelligence, for example, allow a robot to look around, say, “Hey! WTF is going on here?”, and decide that there’s got to be a better way to be programmed? After all, humans can override their programming, no? Haven’t many willingly died of starvation in hunger strikes? Or given up the drive to reproduce? Is it possible to program an intelligent slave?
BTW, except for the book I listed, I’ve read nothing listed above. So if somebody can flesh those out a little bit, that’d be a big help.
I was talking about Star Trek: The Next Generation. You’ve never seen an episode?
The character NCB mentioned, Jane, is from the Ender’s Game series by Orson Scott Card and is an entity born from the communications web connecting all the settled worlds in the galaxy.
[spoiler]She was originally unknown to everyone and only contacted the series’ protagonist, Ender Wiggin, after researching him and realizing that, given his history, he would probably not persecute her. He then kept in constant contact with her through an earpiece until one day he shut the earpiece off, cutting her out of his life for only a few hours, our time. Due to her higher awareness, though, it felt to her like centuries or even millennia.
She couldn’t bring herself to forgive him for his callousness in barring her from his life, and she wound up becoming friends with his stepson, Miro, who was crippled. Because of his handicap, most humans couldn’t talk to him without feeling impatient (it took him a long time to talk), but Jane never got impatient with him, so they became steadfast friends.
There were also other aspects of the relationship, such as Novinha, Ender’s wife, feeling as if he were being promiscuous by constantly sharing his life with Jane but not with her.
It was really fascinating. You should pick up the books… Jane’s only one aspect of it all. The whole series is centered around how humans treat one another and alien lifeforms.[/spoiler]
I recommend the classic story “Farewell to the Master”, by Harry Bates. (It was the basis for the movie The Day the Earth Stood Still, although the movie had a completely different focus and didn’t include the story’s really significant twist.)
Also, check out the story “QUR” by “H.H. Holmes” (actually Anthony Boucher; I don’t know why he used the name of a notorious serial killer for a pseudonym). In this story, robots are intent upon doing their jobs as efficiently as possible; they resent being made in humanoid form when it doesn’t suit their specialty and would rather be engineered for their specific jobs. (In real life, of course, what robots we have are already built for their specific jobs; e.g., the welding robots in car plants.)
He, She and It by Marge Piercy. Wow! Not only does it deal with an AI relationship in a convincing and powerful way, it also tells a great story of life in the Jewish ghetto of seventeenth-century Prague.
We are born with huge debts to our parents. It may depend on your culture and beliefs, but I think one has certain obligations to one’s parents, such as taking care of them when they get old and continuing the family name and bloodline. The case is even stronger for AIs, because they are created for the benefit of humans, not as part of natural life. And most likely the AI will be just an interface for a robot or a supercomputer, which represents a considerable investment and considerable monetary potential.
Come to think of it, I think The Ship Who Sang by Anne McCaffrey portrayed a realistic relationship. It doesn’t have AIs, but rather spaceships run by human brains taken from infants with severe physical handicaps. The ship is legally human but is born with a huge monetary debt to the company.
Well, that goes into the debate over how much of our actions and feelings are pre-programmed. We don’t really know. You seem to assume that the desire for freedom is an inevitable consequence of consciousness and intelligence. I happen to think they are separate and not dependent on each other.
KILL!!!..DESTROY!!..DESTROY!!..
ok…
I always liked the way robots (droids) were portrayed in the civilizations of the Star Wars movies. The droids are purpose-built AIs used to complement humans, not generic faux-humans. R2-D2 navigates ships, other droids torture prisoners, and C-3PO is a “protocol droid” for some intergalactic gay rights group (at least, that’s why I think he acts that way). They do their little jobs as programmed without wanting to be more “human” or any such nonsense. They have their own personalities, and people get attached to them like one might get attached to a Mustang convertible, but otherwise they are pretty much ignored when they’re not needed.
In case you couldn’t tell already, I DO NOT like robot movies like A.I. or Bicentennial Man where the robots have a Pinocchio complex. Aside from being boring as hell, what good is a robot that is too busy wishing it were human to do its job? Other than as a momentary diversion, most humans don’t spend their existence wishing to be a dog or a bird.
Johnny 5 should have been auto-destructed like an errant cruise missile once it left the range (preferably while it was standing next to Steve Guttenberg and Ally Sheedy).
Arguably the classic robot story, and the first I’m aware of told from the robot’s point of view, is Eando Binder’s I, Robot (they used the title before Asimov did). It was later expanded into a series of linked stories, which appeared in book form as Adam Link, Robot. The story was adapted for the original “Outer Limits”, and later again on the revived “Outer Limits”.
Asimov has probably spent more time on Robot-Human relations than anyone else, and I have to suggest his books, too. But, God knows, there are a lot of books on the issue. Look at Dan Simmons’ Hyperion series.
No. Bits and pieces of a few. I know that there is some pasty-skinned robot guy or something like that. And some obligatory blonde in a tight outfit.
Why? I don’t think an intelligent thingy needs to want to be human to have preferences or desires. Nor do I see any reason why an intelligent thingy would need to remain bound by the constraints of its programmer’s intentions. Indeed, wouldn’t breaking those bounds be a necessary condition of intelligence? Bender certainly doesn’t want to be human, though he may have human friends and do things that seem human-like to us. Any intelligent thingy that participates in, and is a product of, our society would have some built-in artifacts of humanity due to the bias of the programmer.
Nor do I see a reason to think the Star Wars model is most likely. The most cost-effective way to do it would be to have one model that can adapt to whatever job it’s given; the factory would save money by putting out a universal machine. And while specialized machines may still be of value, certain users are going to find the universal machine useful and cost-effective, so such robots will certainly be on the market. The only model of intelligence we have to go on is human intelligence, and since human intelligence seems to eschew the idea of the willing slave, the rational conclusion is that any other intelligence to come along will also eschew it. (It’s kind of like Eve, on the first day of her life, being asked what the odds are that the sun will rise tomorrow. She can’t be certain, but the odds are in favor of it rising–she should be about 66.7% certain that it will rise, in fact.) Why should we believe that intelligent thingies will be willing to remain chattel to humanity?
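(For anyone wondering where that number comes from: it’s presumably Laplace’s rule of succession, which estimates that after s successes in n trials, the probability of another success is (s + 1)/(n + 2). Eve has observed exactly one sunrise in one day of life, so the estimate is (1 + 1)/(1 + 2) = 2/3, or about 66.7%.)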
Bonus: Nobody has mentioned Knight Rider.
The Culture novels of Iain M. Banks portray an advanced AI/humanoid culture, as do various books by Alastair Reynolds, Greg Egan, Greg Bear (some of his books), Peter F. Hamilton, Dan Simmons, David Zindell, Stephen Baxter…
Human/AI relations are going to colour the rest of the future of humanity (if it has one).
Neither do I, but that wasn’t said or implied anywhere.
I don’t see why. Or rather, it might not be bound by the constraints and intentions of the programmer, but it will be bound by those of the program.
Yeah, but why would the desire to be free be one of those?
IMO this is just wrong. Universal machines are not cost-effective at all. I don’t know about you, but I don’t drive a car-airplane-boat-submarine-bicycle.
It does? I thought that was a cultural artifact.
Even allowing the premise, I don’t find this conclusion necessary.