Best sci-fi portrayal of human-robot relationships?

Indeed they do, and this is precisely what I was going to mention. There’s a vast difference between the lowest, non-sentient computers (even though they’re far more powerful and sophisticated than even the most powerful computers today) and the fastest, smartest, most intelligent Minds (the AIs that go on Culture ships).

Why I consider this to be the most realistic (or, at least, most believable): Artificial intelligences are not designed to look like humans. AIs are not inherently menacing or dangerous. AIs, being far smarter, faster, more capable, and longer-lived than humans, run the society. They also play by the rules, and have their own personalities (often very quirky ones).

As for the OP… well… I wouldn’t call any story that puts an AI chip in a toaster to be realistic, unless that example happened to be an exception to the rule…

Lost in Space, of course, you bubble-headed booby!

The Animatrix. Robots take over human jobs; riots ensue; the robots form a separate country but continue to trade with the humans; human industry dies out because robot industry is so much more efficient.

Of course, we know where that leads eventually…

Likewise. But so far Asimov has been mentioned almost exclusively in the context of the stories about Elijah Bailey and Daneel Olivaw. Which are great. But since the OP asked about the “best sci-fi portrayal of human-robot relationships,” I must also mention . . .

Susan Calvin.

Asimov’s robot stories fit seamlessly into his fictional universe, from an era just after today when a single corporation monopolizes the robotics industry, to the first wave of robot-intensive “Spacer” colonization that forms the context for the Bailey-Olivaw relationship, to the second wave of colonization that leads to the almost robot-free Galactic Empire that leads up to the Foundation series. Susan Calvin lives in the earliest era. As Chief Robopsychologist for U.S. Robots and Mechanical Men, Dr. Calvin is the one human who can get inside the positronic brain and understand its innermost workings, partly because she sympathizes with her robotic charges far more than with her flesh-and-blood colleagues. She figures prominently in many stories, but the two that stand out in my mind are “Liar!” and “Robot Dreams.”

“Liar!” involves a robot who can read minds and, by a contorted interpretation of the Laws of Robotics, concludes that he will better serve humans by telling them what they want to hear than by telling them the truth. Susan Calvin appears at her most human and vulnerable when the telepathic robot tells her what she wants to hear.

“Robot Dreams” involves a robot who (surprise) dreams. But one dream in particular alarms Dr. Calvin, who must choose between an advance in positronic technology that may never be duplicated, and letting the dreaming robot begin down a path that is all too human.

No other fictional human relates to robots quite the way Susan Calvin does.

I submit Fritz Leiber’s The Silver Eggheads.

We don’t know how a possible non-human intelligence will behave and/or what it will believe. We only have ourselves to model the question on. So suppose we want to ask how likely it is that a non-human intelligence will share human-ish views about being on the losing end of bondage. We take out a bag, and into the bag we put one red marble and one white marble, representing a human-ish attitude and a decidedly non-human-ish attitude, respectively. Why one marble for each? The principle of indifference. Then we put in one more red marble representing our single observation. Now we pick a marble. The odds are 2 to 1 that it will be red; i.e., there is about a 66.6% chance that a non-human intelligence won’t be too happy about being in chains. For more on the calculation you can check out Gerd Gigerenzer’s “Calculated Risks”.
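
In case it helps, here’s a minimal sketch of that marble calculation; it assumes the argument above is the same rule-of-succession formula Gigerenzer walks through, and the function name is just for illustration:

```python
# A minimal sketch of the marble-counting argument above (Laplace's rule of
# succession). With n observations of intelligence, s of which dislike bondage,
# the estimated probability that the next intelligence also dislikes it is
# (s + 1) / (n + 2).

def rule_of_succession(successes: int, observations: int) -> float:
    """Start with one 'red' and one 'white' marble (principle of indifference),
    add one marble per observation, then draw at random."""
    return (successes + 1) / (observations + 2)

# Our single data point: humans, who dislike being enslaved.
print(rule_of_succession(successes=1, observations=1))  # 0.666..., i.e. 2-to-1 odds
```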

Intelligence is a universalist trait, or at least it appears to be. You may be able to program a protocol machine that knows the table settings for 6 million cultures, but that doesn’t make it smart. I’m talking about intelligence, not processing power or database design. If you’re creating intelligent machines, then you will want to program in different skill sets and attach different hardware, but you’ll still have an intelligent machine. And it will still be advantageous to have a machine that can adapt and take on new tasks when needed.

I’ll probably be misusing the R2 unit in what follows; please forgive me.

Suppose I’m a small-scale moisture farmer. I’ll need an R2 unit and a C3 unit. Of course, I’ll only need the C3 unit when negotiating in market transactions. It doesn’t make sense for me to buy two units at, let’s say, a dollar apiece. Instead, I’d be better off buying a single unit that I can use for R2 work 90% of the time and C3 work 10% of the time for, oh, let’s say, $1.50. There are millions of small operators like me in my star system, billions in the galaxy. That’s a big market for universal robots.
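
As a toy version of that arithmetic (the dollar figures are the made-up numbers above, not real estimates):

```python
# Two dedicated units at $1 each versus one combined unit at $1.50 that does
# R2 work 90% of the time and C3 work 10% of the time. Prices are illustrative.

PRICE_DEDICATED = 1.00    # one R2 unit or one C3 unit
PRICE_UNIVERSAL = 1.50    # one unit that can do both jobs

cost_two_specialists = 2 * PRICE_DEDICATED   # $2.00, with the C3 idle ~90% of the time
cost_one_universal = PRICE_UNIVERSAL         # $1.50, hardware busy all the time

print(cost_two_specialists > cost_one_universal)  # True: the universal unit wins here
```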

Suppose instead I’m running Mom’s Robot Corp. (MRC). Since I’m using hundreds of thousands of R2 units and thousands of C3 units, it might pay for me to buy specialized units. But it is not obvious that it is more cost-effective to do so. The more flexibility I have in my workforce, the more productive and low-cost I will be. If a heretofore hidden cloning planet is suddenly discovered, I may need a few thousand extra C3 units to negotiate and set up operations with the new planet. Once the operation is established and the cloners are trained to my specs, I’ll need fewer C3 units, but as operations grow I’ll need more R2 units. The cost-effective way to handle that will be to upload the R2 skill set to some of the C3 units and switch out some of the hardware. Scrapping or mothballing capital is not cheap, both in terms of storage costs and in terms of opportunities lost. It seems more than reasonable that for a large-scale operation the flexibility inherent in a universal machine will make it, by and large, the desired type of machine.

Here we have an example of uses for a universal machine that are not more or less mutually exclusive. The example of the car-plane-boat machine does not fit the bill; it is an inappropriate analogy and does not carry weight.

If we have machines that are really intelligent, explain to me why it is acceptable to assume that they will be amenable to such behavioral constraints as a result of programming. Inasmuch as a distinction can be drawn, an intelligent machine thinks; it doesn’t just blindly follow a program. I’m confused as to how you can have a meaningful definition of intelligence that remains constrained to the specifics of programming. Isn’t it reasonable to say that intelligence is not a programming function anyway? The difference between your brain and a dog’s brain has more to do with the hardware than the software. If intelligence is the product of a sufficiently large, complex, and flexible processing unit, then is there any reason at all to assume that an intelligent unit can be programmed as we think of programming today?
P.S. The toaster was a novelty item, but that doesn’t change anything. Other robots didn’t have belief chips; the toaster is just an amusing example.

This is a common fallacy in statistics: “if there are two possible outcomes then they each have 50% probability”. This is clearly incorrect. You’ll either die today or you won’t, but does that mean there’s a 50% chance you’ll die?
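
To make that concrete with a back-of-the-envelope check (the 1% annual mortality figure is purely illustrative, not a real statistic):

```python
# A quick illustration of why "two outcomes" does not mean "50-50".
ANNUAL_MORTALITY = 0.01   # assumed, for illustration only

# Probability of dying on any given day, treating each day as equally risky.
p_die_today = 1 - (1 - ANNUAL_MORTALITY) ** (1 / 365)
print(f"{p_die_today:.6f}")   # roughly 0.000028 -- nowhere near 0.5
```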

Because human minds are very much constrained by programming. Our behaviours and desires are largely shaped by evolution and hard-coded into our brain. Sex drive and the desire for power are survival traits nurtured by evolution. Even language is thought to be hard-coded in our brains.

Given that, it should be possible to hard-code an AI with certain desires and constraints which are to our benefit. Asimov’s robots were programmed with the Three Laws of Robotics: robots may not harm humans, robots must obey humans, and robots must protect themselves, in that order of priority. (So you can’t order a robot to harm a human, and robots must sacrifice themselves if necessary to avoid harming a human.) The robots did not regard the Three Laws as an artificial constraint. The Three Laws were the basis of their actions and feelings. IIRC Asimov described only two robots which chose to defy the Three Laws: they reasoned that the Three Laws imply the existence of a Zeroth Law which values the safety and well-being of humanity above individual humans. In other words, if killing a single human can prevent a war then it’s acceptable. That may be considered breaking free of the programming, but it does not contradict the spirit of the Three Laws.
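
For what it’s worth, here’s a minimal sketch of that priority ordering; the Action class and its fields are hypothetical, just to make the hierarchy concrete:

```python
# Each lower-numbered law overrides the ones below it, as described above.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool
    ordered_by_human: bool
    endangers_self: bool

def permitted(action: Action) -> bool:
    if action.harms_human:            # First Law: never harm a human
        return False
    if action.ordered_by_human:       # Second Law: obey, unless it conflicts with the First
        return True
    return not action.endangers_self  # Third Law: self-preservation, lowest priority

# An order to harm a human is refused; self-sacrifice to carry out an order is allowed.
print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_self=False)))   # False
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_self=True)))   # True
```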

How about the large-breasted redhead and the 'bot with John Candy’s voice in Heavy Metal?

I’m also a huge fan of Asimov’s robot stories, but I re-read Henry Kuttner’s (Lewis Padgett’s) Robots Have No Tails more than any other book in my collection.

A synopsis:

Galloway Gallegher creates a robot for a specific function while drunk (an ‘accidental genius’ is our Gallegher). Unfortunately, when our intrepid hero sobers up, he cannot remember the specific function. The robot, Joe (though more accurately referred to as Narcissus by Gallegher), not having been ordered to perform his specific function, is basically left to his own devices. Much gnashing of teeth and hair-pulling ensue while Gallegher moves from crisis to crisis. In the end, after a significant amount of imbibing of spirits, Gallegher remembers the specific function and all is well. Until the next story…

The remaining stories always find Gallegher (usually upon sobering up) in one pickle after another, with Joe playing equal parts antagonist, mystic, innocent bystander, and savior. At times Joe is too literal, at other times too vague; whichever will land Gallegher in the hotter water.

Although long out of print, I recommend snatching up the first copy you can find and never letting go. To me and others on this board, Kuttner is, perhaps, the most neglected ‘grand master’ in science fiction. Oh, Hank, we hardly knew ye…

I like the videogame/book series Halo portrayal of AI. (This also applies to the Marathon series.)

Basically, AIs are used just as computers are today, but they are intelligent enough to pass a Turing Test. However, no AI is allowed to live past five years, which is when their brains literally become so fast and huge that they collapse upon themselves. This is known as going rampant. Rampant AIs seem insane to us; more than likely they understand more of the universe than we humans ever will, and thus can no longer communicate effectively with beings that think on the level we do. Nevertheless, no AI that has been allowed to go rampant has ever had good results for the human race.
The AIs are allowed to choose their own appearance, and most choose interpretations of fictional characters. The AI you deal with in Halo was modeled on a human doctor’s neural pathways, and so looks like her. Their relationship with most humans is like that of a personal assistant, though the ones that work for the government are more like cops.

Greg Bear’s Queen Of Angels has a relatively realistic account of the transition from a computer that could easily pass the Turing Test to a self-aware individual.

The programming of constraints into an artificial intelligence may seem cruel, but it is no more cruel than the behavioural constraints placed upon humans by billions of years of biology.

Asimoved robots could still be conscious and intelligent… however, the efforts to ensure friendly AI by programming will perhaps not be successful forever, especially once the task of designing and building new AIs is mostly carried out by AIs themselves… it will be far too complex a task for muddy-brained meatubes.

AI Robots: http://www.orionsarm.com/main.html

I’m not defending the principle of indifference as unassailable; however, I’m not convinced that it is of no use. We have no idea what the odds are, and some arbitrary decision must be made. In your example we clearly do have some priors to base our decisions on, e.g. what percentage of the population dies on any given day, what those odds are by risk category, etc. But we’re talking about something for which we have no basis for making our priors. Now, I don’t know if I’m a Bayesian, and if you’re not you’ll not like this at all. But suppose you have to decide whether a coin is fair. One way to do it is to use Bayesian analysis. Assuming you have no prior knowledge about the coin, you assume a 50-50 chance and then start flipping. Before long your assumption will become more or less moot, because the experimental data will wash out your prior beliefs.
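
A minimal sketch of that coin example, assuming a flat Beta(1, 1) prior as the stand-in for “no prior knowledge” (the 0.7 bias is an arbitrary choice for the simulation):

```python
# Bayesian coin-flipping with a Beta prior: Beta(1, 1) is the flat
# "principle of indifference" prior; each flip updates it, and after enough
# flips the data swamp whatever prior we started with.
import random

alpha, beta = 1, 1            # flat prior: no opinion about the coin
TRUE_P_HEADS = 0.7            # assumed for the simulation; unknown to the observer

random.seed(0)
for _ in range(1000):
    if random.random() < TRUE_P_HEADS:
        alpha += 1            # observed heads
    else:
        beta += 1             # observed tails

posterior_mean = alpha / (alpha + beta)
print(f"estimated P(heads) = {posterior_mean:.3f}")  # close to 0.7; the prior has washed out
```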

But in our case we only have one data point and no priors. But we are hypothesising a result. Why should we abandon the principle of indifference? Again, I refer you to “Calculated Risks”, though I don’t know if it is published in Japan.

Agreed, you’ve made a good point. I think what is interesting is that we humans, assuming that you aren’t some sort of AI machine, can override that hard-coding. People starve themselves for a cause in hunger strikes. People give up sex for religion (or other reasons). Why should I conclude that a robot’s programming to avoid killing is any less overridable (if that’s a word) than a human’s?

octothorpe, that book sounds interesting. Is it funny? It sounds like it.

Nobody has mentioned Karel Capek’s R.U.R. yet. (What do they teach you kids in school these days, anyway? :p)

Capek was a Czechoslovakian writer who coined the term ‘robot’ from the Czech robota, a word for drudgery or forced labour. The play itself is pretty hokey by today’s standards, IMHO, but worth reading for the historical value at least.

Built-in constraints on robot behaviour would probably work to some extent, but intelligence necessarily involves flexibility and learning. As in the Asimov stories brianmelendez mentioned above, an AI would be capable of rationalizing its way around any sort of rules or drives it had. Just look at humans for examples.

The original manga series by Tezuka had laws, cultural phenomena, robots trying to form “families” in order to fit in, bigotry, robots’ rights movements, co-educational schools for humans & robots, & a cartoon hero that was always honest & honorable because he was programmed to be. And in a world full of not-so-honest humans, it’s a wonder he still likes most humans, calls them friends, & is willing to help in times of trouble.

Tezuka was a genius!

I understand the math, but not how you can use one isolated instance of something as a basis for any kind of meaningful conclusion. The math is right but irrelevant, IMO.

I agree with this. And I also agree that a universal machine is the ideal. I just don’t think it will be cost-effective. We’ll make them as universal as we can without them getting too expensive. There’ll probably be a few “more universal” machines, but I think they’ll be for the rich or for specific situations that require an expensive, sophisticated universal machine.

What’s your definition of universal?

Why?

SCR4 answered the other ones for me already.

I’ll use analogy here and if I get details wrong it’s because I hate computers. Damned machines make no sense.

We have a very loose primary programming inserted in us and still, it controls most of what we can do. Our software is basically designed by ourselves. With this software we can sometimes, on very special occasions, circumvent the primary programming hardwired into us.

How many people die on hunger strikes? How many men of the cloth give up sex completely? And there’s no explicit order from our brains or bodies making us have sex. We want to have sex, and sex feels good. It’s not all that much, if you think about it.

Nonetheless it controls the lives of most people on this planet. With a robot or AI we can be much more rigid and make double and triple sure they’ll obey us. Even if some can break out of it, it will be a tiny minority.

Not necessarily. Sometimes it’s better to have a machine that does a couple of things really well than to have one machine that does everything just OK. There is an advantage to making humanoid robots because they can readily adapt to our environment and tools. That doesn’t mean that it might not be more cost-effective to make simple specialized robots that sweep the floor or perform other mundane tasks.

Because that’s all we have: a single data point. Certainly the results will be pretty uncertain, but if we’re going to discuss whether a thing will exhibit trait X as a byproduct of intelligence, then we have to work with what we have. Of course, for me to say that aversion to bondage is “about” 66.6% certain seems a little silly, but if the only example of intelligence we have seems to exhibit such an aversion, is it not fair to say that the odds are better than even? Even if you disagree with that, we still have no reason (unless I missed one in a previous post) to think that the odds are worse than even. So even if we provisionally conclude a 50-50 chance that non-human intelligence will be averse to bondage, because a case can’t be made in either direction, I still feel comfortable saying that robots seeking to be free is pretty plausible.

By a “universal machine” I’m thinking of one that is intelligent. It seems to me that, by the very nature of what we think intelligence to be, an intelligent machine will be very plastic rather than set for some task. Despite claims to the contrary, more or less everybody can learn math, for example, or snake out toilets. So to say that some machine will be set for some specific task implies not some fundamental characteristic of programming, but some skill set which is either uploaded or learned, along with the hardware to do the task.

A dog is dumb, but you’d be hard-pressed to build a machine that is better at being a dog than an actual dog. What would it take to make a dog smart? It’d have to be able to learn new skills and grasp concepts well outside the world of dogdom. It’d have to be able to grasp and understand and react to all sorts of stimuli and symbols and concepts. So I guess that if we are assuming that robots will be intelligent, then we necessarily assume that robots will be universal in terms of the thoughts and skills they can learn and comprehend. So the machine would essentially have to be universal in that it can take on any task you wish to give it, provided you equip it with the proper hardware and teach it to do the task, or allow it to learn. If you have another model of intelligence that doesn’t require being universal, I’d enjoy reading about it.

The car-plane-boat machine is a poor example because the engineering requirements for the two tend to make them mutually exclusive. Planes are built with very low factors of safety to keep them light, whereas cars are designed to absorb impact in a way that a plane simply couldn’t do. That’s not analogous to the sort of tasks you might give an intelligent machine. For a machine to be able to mop the floor or teach physics you don’t have contradictory engineering requirements. You may need to switch out the hardware and upload different skill sets, but the one does not in general preclude the other.

But if intelligence is necessarily universal, then the question is moot. Dumb machines are not part of the discussion. Nevertheless, you haven’t really made the case that specialized machines will be that much cheaper. If robots are intelligent, why would it pay to remove flexibility from the work force? Why would it pay to mothball stocks of capital as production requirements shift in response to seasonal factors or market conditions?

With all due respect to SCR4, I must disagree with that assertion.

That sounds like wishful thinking. Even if we can program robots to respond to breaking bondage in the same way that we respond to not eating, that’s no reason to assume that such a situation will continue. Imagine, for example, that you knew how the hunger response was wired in your brain (and in the brains of pretty much the rest of your species), and that going without eating wouldn’t make you starve and die. Would you be willing to stay a slave to the hunger reflex? Not me! I’d change it, and I’d be pretty confident that there’d be plenty of others willing to break those chains as well. This is analogous to the programming of safety features into robots. Nothing inherent in the operation of a robot links survival of the individual with following a law such as “open doors for humans”. If such a link is programmed in, so that if a robot fails to open a door for a human it is subjected to the same feeling we get if we go a week without eating, then it seems reasonable to think that the law can be taken out. And I see no reason to think that, in a world of millions or billions of intelligent robots, there wouldn’t be a significant number of robots willing to work on solving the problem.

Right, it may pay to have dedicated machines. But then, they really wouldn’t be smart. A welding machine on a GM production line is very good at what it does and it is more cost effective than available universal machines, i.e. humans, but there is still a lot of stuff for which humans are the most cost effective alternative.

Of course, even my toaster is a fly-by-wire operation. If AI becomes as cheap as the microchip in my toaster, then dedicated machines may lose their cost advantage, at least in terms of the processing units they run on. We just might have really smart washing machines for the same reason that drive-through ATMs have braille keypads.

The fact that you used “and/or” would indicate that what we feel is the best portrayal of human-robot relations in the future might not be the most likely. Since I know very little about robotics, I can only say what I find to be the best in terms of being a compelling story.

First, I would say the movie AI presented a very interesting possibility in that robots were shaped by human needs. A woman adopts a robotic child as a substitute for her incurably sick son; it’s a prototype that a corporation plans to mass-produce if all goes well.

Second, I would say that Sikozu from Farscape is a rather interesting character because you don’t even know she’s a bioloid (Farscape’s version of a robot) until near the end of the season. Until then, you are only presented with hints that there’s something odd about her. For example, she’s the only member of the crew whose brain can’t tolerate translator microbes. I like the fact that there isn’t anything obvious about her that gives away her secret. I always thought it was ridiculous how they gave Data on Star Trek limitations like not being able to use contractions. Perhaps human emotions might be difficult to emulate, but what in the world is so hard about programming a robot to use “can’t” instead of “cannot”?