SF has often dealt with the idea that a machine (or an artificial organism like Frankenstein's monster) can develop full self-consciousness or sentience, can become a true "strong" artificial intelligence. And it is always assumed (though rarely explained or defended)* that sentience necessarily includes independent volition and desire -- the ability to want something. That's what, more than anything else, distinguishes Data from the Enterprise's ship computer -- the computer does only what it's told, but Data has independent free will and desires. But what does a sentient AI want? There are two hoary conventions:
1. The AI wants to become fully human, or as much like humans as possible: Frankenstein's monster, Data, David in A.I., Andrew in Bicentennial Man, Kryten in Red Dwarf. (Also Pinocchio and the Tin Woodsman.)
2. The AI wants to destroy humans and have the world to itself: the Cylons in Battlestar Galactica, Skynet in the Terminator movies.
But what would an AI really want? Either of these, or something entirely different?
Or is it possible for an AI to become self-aware by any reasonable standard and still not have desires of any kind, no more than this PC has? Would it simply be a dutiful slave by nature and aspire to nothing else? (Most, but not all, of Asimov’s positronic-brain robots seem to fit this description.)
*In Heinlein's Friday, the title character briefly muses that an entity becoming fully intelligent necessarily means it starts to ask, "What's in it for me?" But she simply infers this from the observed behavior of some quasi-sentient APs (genetically engineered persons/organisms), and does not follow up on this line of thought.
I think a simple A.I. would want whatever we programmed it to want out of life (assuming we program it to want anything at all, and not just to build hovercars).
I think that if we get to a point where A.I. constructs can learn for themselves -- that is, can add to their own programming based upon their old programming -- then while they'll still be subject to the fundamental rules in their system, it'll *appear* as though they aren't. So they'd gradually build up new wants and desires from adverts, thinking, reading, just as we do. This is the clincher for the OP's question, really: at what point can an A.I. programmed just to "drive the train from station to station" learn "fewer people usually board at this stop, so close the doors sooner to save time"? I think it's then that the illusion of wants and desires will start to come about.
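To make that concrete, here's a minimal sketch of what that train example might look like; everything in it (the stop name, the numbers, the update rule) is invented for illustration, not any real transit system's code:
[code]
# Toy sketch: a door controller whose "base programming" is a fixed dwell time,
# but which nudges the dwell time at each stop toward what recent traffic warranted.
DEFAULT_DWELL = 30.0    # seconds to hold the doors -- the original rule
MIN_DWELL = 10.0        # a hard safeguard the learner can never go below
LEARNING_RATE = 0.1     # how quickly it adapts to what it observes

dwell_times = {}        # learned per-stop adjustments layered on the base rule

def dwell_for(stop):
    return dwell_times.get(stop, DEFAULT_DWELL)

def record_stop(stop, passengers_boarded):
    """After each stop, move that stop's dwell time toward the observed need."""
    observed_need = MIN_DWELL + 2.0 * passengers_boarded   # rough guess: 2 s per boarder
    current = dwell_for(stop)
    updated = current + LEARNING_RATE * (observed_need - current)
    dwell_times[stop] = max(MIN_DWELL, updated)             # still bound by the base rules

# After many quiet evenings at "Elm St", the controller "wants" to close the
# doors sooner there, even though nobody ever told it to.
for _ in range(50):
    record_stop("Elm St", passengers_boarded=1)
print(round(dwell_for("Elm St"), 1))    # drifts down from 30 toward ~12 seconds
[/code]
Nothing in there has wants in any deep sense, of course; but from the outside the behaviour looks like a preference it picked up on its own, which is exactly the illusion I mean.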
I think the final step in creating A.I.s, before we have to accord them the full rights that everyone else has, is the point at which they're capable of overriding their base programming permanently; when what they want out of life is due entirely to outside influences and their own thoughts. Which, really, is more than we have, because we've still got innate behaviours. But I doubt A.I.s would be granted the same rights until they're obviously superior to us, because humans are bad at things like that.
(Bear in mind that because I don't believe in free will, I think the illusion of wants and desires is all we can have; I don't mean that an A.I. could never be as "good" as a human.)
I'm dealing with this in my NaNoWriMo novel and have little time, but I do strongly think there will be a long period where a sentient AI will depend on us; becoming like us will just be a pragmatic way of interfacing with us for the foreseeable future.
What will happen after that? Possibly we will see the writing on the wall and enhance our biological capabilities first, until our descendants decide that, to reach the stars and avoid the eventual end of this planet, they will have to give our electronic descendants our [del]programming[/del] knowledge and sense of progress.
But speaking of force, I agree with Asimov: "He's a machine - made so."
Even if it achieves self-awareness, there is the reality (sad or happy, depending on your POV) that no matter how advanced it gets, we will still be able to pull the plug or change its emotions at will. So I think we will not be seeing Colossus, but we will see many Marvins.
It's too hard to tell. Human thought and intelligence are intimately tied to our animal instincts, which are the product of our evolutionary history. AI won't have any of that, and will develop its thinking differently. It may or may not be intelligible to us.
Cylons will happen if you hand control of manufacturing capabilities over to the machines. Once again, IMO Asimov was correct in assuming that humanity will put a big stop to that.
Even on the SDMB, I remember about half of the Dopers got scared just by seeing ASIMO in action.
Knowing how the average human will react to even a whiff of independent thought near critical projects, it is clear to me that AIs will not be allowed to roam without safeguards. They will have a harder time than RoboCop* did.
*Not an AI, I know, but I do think even enhanced humans will have programmed restrictions too; humans will see to that.
Which will happen, sooner or later, if it turns out to be cheaper, despite all the voices of reason raised to the contrary -- considering that the manufacturing process will ultimately be under the control of corporate executives who see their chief duty as being to their stockholders.
Not **fast** enough; anyone doing that will suffer a fate worse than Ken Lay's. And I think you are overestimating the physical capabilities the AI will have, and confusing them with its intellectual ones.
“The fancier ye make th’ plumbin’, the easier it is t’ clog th’ drain.” - Scotty.
We've got that today already. But self-awareness, and desires, are linked to feelings. We want what makes us feel good, and we want to stay away from what makes us feel bad. And to a large extent this is tied to our senses. So, can we program feelings? Can we create an A.I. that invents a new God and believes in it?
Expanding a little: the out-of-control AI makes for great fiction, but the fears attached to it are like the Y2K bug for computers or grey goo in nanotechnology -- the dangers are way overestimated.
I do think a more accurate way for sci-fi writers to deal with AI is to concentrate on the frustrations the AIs will encounter when dealing with humans.
I will once again give away a plot point of mine (I figure this is better than "copyrighting" it by mailing yourself a sealed envelope -- this way you have a record with witnesses).
IMHO we are forgetting what AI can give us in the field of medicine: simulations, enhancements, artificial limbs, etc.
The need to integrate those elements will lead us to give an AI a logical reason to take care of, and be aware of, its components. I grant you the sensations detected would not be called pain or hunger. But the AI will learn what makes humans tick.
As for creating a god, we are back to a question very similar to the one I asked in the first thread I created. Because it is we who will do the programming, I see no way to avoid an AI being programmed by the fundamentalists of today; the question is whether they will be effective in twisting the AI's logic forever, because -- I have to be brutal here -- a logical AI will have little choice but to refuse religion.
What makes you think the “gray goo” danger is overestimated? If we can develop nanotech to the point where it has any beneficial uses, surely someone can develop it as a weapon . . . and one as difficult to control, once released, as biological weapons.
I did check it; it is part of my book :). Suffice it to say that the problem with getting replication going at that level is getting hold of enough pure or **simple** materials. AFAICR the reality is that you can point the nanites at a good source, but at that scale even a good source will contain plenty of nano "monkey wrenches" -- impurities that stop gray goo cold.
If one excludes self-replication, nanotech already works outside the lab. One problem reported has been that some products are so stable that they are not biodegradable, so the issue of contamination has appeared; but many other products are designed better and show lots of promise:
I imagine an AI would want whatever it was designed and built to want. A sentient robot car would want to get to its destinations. It would want to avoid damage to itself or injury to its occupants. It would "feel" uncomfortable when it developed mechanical trouble and would desire servicing. It might start to feel useless and depressed if it didn't get driven enough.
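If you squint, those "wants" are just a designed-in scoring function the car tries to keep high. A toy sketch (the drives and weights are all made up for illustration, not any real autonomous-car software):
[code]
# Hypothetical "designed-in wants" for a sentient car: a hand-written score
# it prefers to keep as high as possible. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class CarState:
    at_destination: bool
    mechanical_fault: bool
    miles_since_service: float
    days_since_driven: float

def contentment(state: CarState) -> float:
    score = 0.0
    if state.at_destination:
        score += 10.0                            # "wants" to reach its destination
    if state.mechanical_fault:
        score -= 25.0                            # "feels uncomfortable" when something breaks
    score -= 0.01 * state.miles_since_service    # a growing "desire" for servicing
    score -= 0.5 * state.days_since_driven       # "feels useless" sitting in the garage
    return score

# Behaviour is then just "prefer actions that lead to a higher score".
print(contentment(CarState(True, False, 1200.0, 0.0)))    # pampered car: about -2
print(contentment(CarState(False, True, 9000.0, 30.0)))   # neglected car: -130
[/code]
What happens when the neglected car can act on that score is where the next bit comes in.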
Now say such a car could override the user's input if it was being driven in an unsafe and dangerous manner. Could it conceivably drive itself away if it felt it was being abused (say, never washed or never given an oil change)? Could a sentient car develop a psychosis towards its owner and start acting more Christine than K.I.T.T.?
I think that's a more likely scenario than "KILL ALL HUMANS!" As our AI devices start becoming more self-aware, they may start behaving unpredictably as they try to fulfill their non-human needs.
IMO, an AI that merely "wants what we program it to want" wouldn't really be AI -- it would just be an elaborate automaton. In fact, I propose that the term "want" is not really applicable to such a scenario.