Any 'realistic' sci-fi novels depicting what might happen if human-like AI were created?

Title says it. This is a plot I have always been attracted to but have never encountered. If you’re thinking of Asimov’s robot series, then I haven’t been clear enough. For example, a story along the lines of (off the top of my head):

  1. AI created in lab
  2. Realistic discussion among scientists about what to do (morals regarding turning it off, ethical treatment of it, proper mothering/training of it, who has access to it), but also probably private actions trying to clone it and use it for personal utility (trick/enslave it into secretly writing you a novel or solving an equation in your name)
  3. Eventually AI-equipped robots enter the marketplace. People own them. All sorts of moral questions, authorship questions, etc.
  4. Some robots escape owners, live autonomously. Eventually they campaign for suffrage, other rights.
  5. Robots end up being more “humane,” “conscious,” and intelligent than humans; meanwhile, humans become useless and can’t compete in a marketplace with more intelligent, more powerful robots.
  6. Eventually war. Etc.

Has something like this been done? Any recommendations?

Bladerunner?

I assume you mean Do Androids Dream of Electric Sheep?

That’s not quite what I mean, since it jumps in fairly late in the game, and has a fairly narrow scope.

Steel Beach by John Varley touches on this theme in a way.

Your scenario is a lot like the Asimov Robot stories. Why do you presume doom and gloom for humanity and pseudo-humanity though? The end could be that we all live happily ever after.

I also don’t see why the OP excluded Asimov’s stories. I don’t think much of him as a stylist (heresy I know) but these are exactly the kinds of issues Asimov wrote about.

Having read the robot series recently, I don’t agree. The series consists almost entirely of short, isolated stories, some of which touch on my interest but which by and large focus on particular consequences of the Three Laws. This is interesting, but parochial. The longer stories take place far in the future and avoid almost entirely most of the plot points I listed in the OP that I would find interesting.

(btw, I don’t presume doom and gloom for humanity – I was just trying to list one example, off the top of my head, of the type of plot I am curious about)

If you think the early lab prototype Artificial Intelligences would be capable of doing things such as writing novels or doing creative mathematics then I suspect you have already left the realms of the plausibly realistic.

Note, please, that I am not saying that artificial systems could never do such things (or that you could not make a good story on the basis of pretending that they could). I am just suggesting that the first technological generation, while it is still a secret development in a single lab, is unlikely to be better than, or indeed anywhere near as good as, the naturally produced thing. To think otherwise is like expecting the first aircraft (i.e., the Wright brothers’ Flyer) to automatically be better at flying than an eagle, just in virtue of being artificial rather than natural. Today, after a great deal of rapid technological development in many competing centers, our artificial flying machines are enormously better than they were in 1903, but in many respects (e.g., maneuverability, efficiency) they are still not as good as birds or bats or bees. In some respects (e.g., size, speed) they outstrip nature, and one day they may do so in other respects, but it did not happen right away, and certainly not just in virtue of being artificial rather than natural. Why should it be different for artificial intelligences?

Of course, a lot turns on just what you understand by the very vague term “Artificial Intelligence”. On some definitions it already exists, and has done for several years now. On other definitions, it has yet to be firmly established that it is a genuine possibility, and we certainly do not know what constraints it might be subject to. Perhaps the most significant achievement of AI research to date has been to throw our understanding of notions like intelligence and mind into even greater question than they were before.

The Two Faces of Tomorrow by James P. Hogan. This book talks about developing the AI in depth and implementing it on a space station as a test; the test involves seeing what happens when the AI perceives humans as a threat.

The Keepers of Forever by James C. Dunavant
This story discusses creating an AI for a colony ship and how it can grow into something else, but it also involves aliens that suck blood because their nanites have been programmed to require it as punishment.

njtt – I don’t agree. Suppose there is a significant leap in raw computing power in the near future (not unlikely). Suppose the lab is trying to simulate the human brain exactly, neuron for neuron. Since the human brain is incredibly sensitive, slight tweaks in neuron and brain structure models could mean the difference between “utterly schizophrenic incoherent mess” and “exact simulation of human brain.” In other words, one day it could all just “click.” Then, since the available computational power may mean that the simulation can run many times faster than a real human brain, it could be trivial to make it not only as smart as a human, but smarter. It could also be trivial to incorporate other desirable attributes into the model ad hoc, such as a perfect and nearly infinite memory. I think your analogy with aircraft design is completely wrong: AI improvement could be a highly non-linear process.
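To put rough numbers on the “runs faster than real time” part: here is a back-of-envelope sketch. Every figure in it is an assumption on my part (the per-synapse cost especially is a pure guess), but it shows how a jump in raw compute could flip a barely-real-time emulation into one running orders of magnitude faster.

```python
# Back-of-envelope sketch of the "faster than real time" argument.
# All figures are rough assumptions, especially FLOPS_PER_SYNAPTIC_EVENT.

NEURONS = 8.6e10                # ~86 billion neurons (commonly cited estimate)
SYNAPSES_PER_NEURON = 1e4       # rough average; real values vary widely
MEAN_FIRING_RATE_HZ = 10        # assumed average spike rate
FLOPS_PER_SYNAPTIC_EVENT = 100  # guess at the model cost per spike per synapse

def realtime_flops():
    """FLOPS needed to emulate a brain at 1x real time under these assumptions."""
    return NEURONS * SYNAPSES_PER_NEURON * MEAN_FIRING_RATE_HZ * FLOPS_PER_SYNAPTIC_EVENT

def speedup(available_flops):
    """How many times faster than real time a given machine could run the emulation."""
    return available_flops / realtime_flops()

print(f"Cost of 1x real time: {realtime_flops():.1e} FLOPS")
for label, flops in [("1 exaFLOPS machine", 1e18), ("100 exaFLOPS machine", 1e20)]:
    print(f"{label}: ~{speedup(flops):.0f}x real time")
```

Under those (made-up) numbers, one machine barely keeps up with real time and a hundredfold-bigger one runs the same mind at roughly a hundred times speed – which is the whole point: the hard part is getting the model right at all, not making it faster afterwards.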

You asked for realistic, but perhaps I should not have taken you at your word. AI has always been a field based more around fantasies of creating super-intelligences than on what we actually know (very limited and qualified as it is) about how real intelligence works.

Sure it could be, but all the experience we have of other technologies suggests that it is not at all likely to be. To assume that it will be discontinuous in the way you suggest is no more than fantasy.

I have always been fond of The Moon is a Harsh Mistress. Take a ballistic computer, add more computers to it, more jobs for it to handle … and it reaches some sort of size and complexity that is like a human brain, and something happens: slowly it develops consciousness and personality.

I believe that the original computer had some sort of personality emulation layered over it as part of the interface [sort of like how I call my computer Diogenes and added some custom sound responses, phrases and such. I have a friend who named his computer HAL and loaded in a lot of sound bites from 2001. His name happens to be Dave. The results can be startling.]

I think if we do actually develop true AI, it will be the result of an accident.

John Sladek’s Complete Roderick engages with some of the early AI ideas with Sladek’s usual cynical sense of humor.

This literary question is probably better suited to Cafe Society. Moved from GQ.

samclem Moderator

Not exactly the plot you’re talking about, but check out Tezuka’s *Phoenix: Resurrection*. Genuinely complex in its own right, & you don’t really have to read *Phoenix: Future* (which it connects with) to follow it.

A novella, With Folded Hands by Jack Williamson, popped into my mind last night when I was watching WALL-E. I don’t have a feel for how realistic it might be, but it’s certainly not utopian.

I’m thinking that if human intelligence were really duplicated in a lab, that intelligence would probably be extremely bored and depressed after a while. It could also easily become sociopathic, considering the difficulty of forming a real connection with anybody else. Kind of like a person who spends every waking minute surfing the internet. Well, apart from the fact that computers don’t need to sleep (or would a sufficiently advanced computer need that?).

Such a machine should be treated with caution, and would need a lot of psychological counseling. Hooking it up to a nuclear missile would be insane.

Permutation City covers the possibility of having whole-brain emulation on computers, at a point in time when we still don’t really understand how the brain works. (Realistically, the hardware isn’t exactly at the point where the uploads “think” at the same rate as normal humans; the emulations run at variable speeds depending on the wealth of their estate, usually around 17 times slower than normal.)

Thanks all for the suggestions so far.

njtt – you didn’t address what was so unrealistic about my example. If my description was more in line with the fantasy genre, I’m afraid 99% of the “sci-fi” genre is mislabeled. And I don’t know why you insist that aircraft design is somehow an apt analogy. In the computer age it can take seconds to design an aircraft (the hard part is building it). Technological progress is non-linear. Engineering is linear. You don’t seem to get that.

If you do not dislike Japanese animation or comic books (manga), you should check out the original Astro Boy by Osamu Tezuka.

It is for children, but it touches on all the items you listed, sometimes with surprisingly mature themes.

A more mature (if silly at the beginning) title is Chobits from CLAMP.

The Association for the Advancement of Artificial Intelligence (AAAI) has, among other interesting subjects, a reading list of science fiction dealing with AI.

If you’re really interested in the topic, you might find books about the actual research more interesting than the fictional ones. Swarm Intelligence is especially fascinating from my pov. It’s amazing to see, for example, how a handful of deliberately primitive robots, interacting like ants, are able to solve problems that a single one of them - or even a far more sophisticated model - couldn’t possibly work out.
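The flavor of that kind of result is easy to demonstrate. Here is a toy sketch of the classic “double bridge” ant experiment (my own illustration, not taken from any of the books mentioned): each agent follows a trivially simple follow-and-reinforce rule, yet the colony as a whole reliably settles on the shorter path.

```python
# Toy "double bridge" stigmergy demo: dumb agents that only follow and deposit
# pheromone collectively converge on the shorter of two paths.
import random

PATH_LENGTHS = {"short": 1.0, "long": 2.0}  # arbitrary relative path lengths
pheromone = {"short": 1.0, "long": 1.0}     # start with no preference
EVAPORATION = 0.02                          # fraction of pheromone lost per round

def choose_path():
    """Each ant picks a path with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    return "short" if random.uniform(0, total) < pheromone["short"] else "long"

for _ in range(2000):
    path = choose_path()
    # Shorter paths are traversed more often per unit time, so they get
    # reinforced more strongly: deposit is inversely proportional to length.
    pheromone[path] += 1.0 / PATH_LENGTHS[path]
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)

total = sum(pheromone.values())
print({p: round(v / total, 3) for p, v in pheromone.items()})
# Typically ends heavily skewed toward "short", even though no single ant
# ever compares the two path lengths directly.
```

No individual agent knows anything about the overall problem; the “intelligence” lives entirely in the positive feedback loop between them, which is what makes those robot experiments so striking.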