AFAIK, computer scientists are currently at an impasse in all efforts to design or create “strong” artificial intelligence. http://en.wikipedia.org/wiki/Strong_AI
SF writers, however, have played around with the idea that strong AI might emerge, quite independently of the programmers' or system managers' intentions, from a computer system, or from the Internet itself, by a kind of spontaneous generation. See Synners by Pat Cadigan, Callahan's Legacy by Spider Robinson, and, of course, the Terminator movies. (Not to mention every single robot story where the robot, to everyone's apparent surprise, develops consciousness and/or free will and wants to be "human": R.U.R., Bicentennial Man, Short Circuit, and too many others to list.)
Is it possible such a thing could actually happen?
I'm also curious what the answer to the question is. But your list of SF stories about this idea left out Harlan Ellison's fiction, which inspired the Terminator movies. His short story "I Have No Mouth, and I Must Scream" was particularly disturbing.
It's possible, but the system needs to be able to alter itself in response to stimuli. It may be that complex systems such as large institutions and large computing facilities are self-aware, just in a way that only the artificial entity itself can perceive (i.e. completely unnoticed by us, because the process is too big for us to see). By way of analogy, consider a microscopic alien examining your brain cells. He might very well conclude that they are machines of some kind; he might even be interested to note the way they process signals mechanistically, doing exactly what they're expected to do, yet he might be completely unaware that the sum of all that parallel signal processing results in a conscious entity.
Computers operate completely differently from biological nervous systems; it's striking how few similarities they share despite overlapping on a small set of common tasks. Biological systems can adapt and change, and somewhere along the way they generated consciousness. We don't know exactly how that happened, but we do know that computer systems don't compare at all. That is really all there is to it. I work in IT, as do millions of other people, and it is all we can do to keep the glorified calculators running and talking to one another. It is a daily battle, because computers fail at any logical difficulty they have not been explicitly programmed to account for. They can't do much on their own, and what little they can do was built painstakingly by a human programmer working through every little detail. Computers have no mechanism for starting to do things on their own, even if such a thing were possible in a general way. It can't happen, any more than you will find yourself trapped inside your house one day by crazed lawnmowers.
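To give a concrete (and purely illustrative) sense of what I mean, here is a toy Python sketch. The command names and messages are made up, but the pattern is what everyday software actually looks like: every acceptable input has to be enumerated by a person in advance, and anything outside that list is simply an error, not an opening for the machine to improvise.

def handle_request(command: str) -> str:
    """Respond only to commands the programmer anticipated."""
    known_commands = {
        "status": "All systems nominal.",
        "restart": "Restarting service...",
        "shutdown": "Shutting down service...",
    }
    try:
        return known_commands[command]
    except KeyError:
        # The program cannot invent a sensible response here; it can only
        # report that it was never told what to do with this input.
        raise ValueError(f"unknown command: {command!r}")

if __name__ == "__main__":
    print(handle_request("status"))           # works: a case someone wrote by hand
    try:
        print(handle_request("improvise"))    # fails: nobody programmed this case
    except ValueError as err:
        print(err)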
Other stories are William Gibson's Neuromancer and its sequels, and the movie Ghost in the Shell. I don't have the technical knowledge to speak about this with any authority, but it seems unlikely. Consciousness and intelligence evolved slowly in animals over aeons; there was an evolutionary push towards animal consciousness that is lacking in the internet. The idea of consciousness just spontaneously coming into being, without a physical mechanism that had either evolved or been designed (by people) to facilitate it, seems too unlikely to countenance. The resemblances between brains and computers seem superficial at best, and there is much we still don't know about consciousness.
We don't understand human consciousness well enough to intentionally simulate it (I assume you mean "self-aware" in a sense similar to how humans are "self-aware" and not a more flexible definition like Mangetout speaks of).
But we do understand some aspects of it, and any program that emulated human consciousness would necessarily have to include those aspects (e.g., sensory perception similar to humans', some ability to adapt by modifying both its knowledge base and the algorithms that use that knowledge to make decisions, etc.). That set of things couldn't accidentally get included in a computer program; it would be a major undertaking requiring focused design and effort.
So a “strong AI” unexpectedly materializing is in the “about as probable as a roomful of lemurs on typewriters writing Hamlet” realm.
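Just to make the point about deliberate design concrete, here is a deliberately tiny Python sketch (all names hypothetical) of the kind of machinery described above: a program with a stored knowledge base and a decision policy it can rewrite for itself. Even this toy version of "adaptation" has to be built on purpose, piece by piece; none of it could slip into an unrelated program by accident.

from dataclasses import dataclass, field

@dataclass
class Agent:
    knowledge: dict = field(default_factory=dict)   # "facts" learned from stimuli
    policy: dict = field(default_factory=dict)      # stimulus -> chosen action

    def sense(self, stimulus: str, outcome: str) -> None:
        # Record what happened the last time this stimulus was encountered.
        self.knowledge[stimulus] = outcome

    def adapt(self) -> None:
        # Rewrite the decision policy using the accumulated knowledge base.
        for stimulus, outcome in self.knowledge.items():
            self.policy[stimulus] = "avoid" if outcome == "bad" else "approach"

    def decide(self, stimulus: str) -> str:
        # Fall back to a fixed default when nothing has been learned yet.
        return self.policy.get(stimulus, "ignore")

agent = Agent()
agent.sense("loud noise", "bad")
agent.adapt()
print(agent.decide("loud noise"))   # "avoid"  -- behaviour the agent learned
print(agent.decide("soft light"))   # "ignore" -- never seen, default applies

Every one of those pieces, however crude, exists only because someone decided to write it; that's the gap between "a lot of computing power" and anything resembling self-awareness.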
Seriously though, I do wonder if it might not be possible down the road that someone could unleash a virus programmed to almost perfectly simulate awareness, at which point one would be hard pressed to differentiate between a "self-aware" AI and a "strong, but not truly self-aware" AI. The program would have to have a way to keep itself from being "trapped" and shut down. I can easily imagine the scenario if and when quantum computing becomes the norm; from what I've read, a quantum network could handle the processing such an AI would need to run.
I admit I know almost nothing in regard to programming, though I try to stay in touch with the hardware side. Once petaflop- and exaflop-scale computers start coming online, who knows what sort of 'random' processes might be generated.
I find it more amusing to think what the SDMB would have to say for itself should it become self-aware. I’m fairly sure that it would be Pitted right away.
Imagine, if you will, the minds of thousands upon thousands of hamsters, linked together to become something more than the sum of its parts . . . :eek: