[quote]
Originally posted by spoke-:
**Based on the articles I’m seeing these days, it seems an inevitability that we will one day be able to create a sentient computer. (That is, a computer which is “self-aware” and whose interactions and responses are essentially human.)
Now I have several sub-topics for debate here:
[ul][li]If a computer thinks like a human, acts like a human, and reacts like a human, is it human?[/li][/ul]**[/quote]
No, it would be a sentient computer. To be human you have to be a mammal of genus Homo. Maybe we’d have to coin a new term for the concept: a “sapient”.
That’s an interesting one. Father Teilhard de Chardin, though he died before the onset of the computer boom as we know it, posits the divinely-mandated evolution of life through spheres of greater consciousness – so the arising of a new Sapient species would be part of God’s plan.
Come to think of it: can the cyber-Sentient experience “natural” death? Can the cyber-Sentient reproduce with the introduction of enough random recombination to validly say it has begot, not assembled, a new being? These are fundamental traits of “living” beings.
The cyber-“sapient” would need to be more than just an “artificial intelligence” or a self-preserving, self-reproducing program. The test could possibly be whether it arises as the result of a designed system evolving into something more than the designer bargained for.
The more likely situation, as it looks right now, is for the evolution of “Sentients” not as intelligent unitary machines (à la HAL) but as the result of the combination and evolution of networked systems (programs + machines) designed to exchange information and subroutines with minimal programmer intervention, where parts of the “consciousness” may reside on more than one server.

Teilhard (mentioned above) posits the evolution of the “noosphere”, a sort of ecosystem of consciousness that evolves just as the ecosystem of physical beings does. In that POV, consciousness has so far been mostly at the stage of unicellular or elementary colonial organisms, with maybe a primitive plant among the more advanced societies. Maybe the human-created sentient will be a necessary step in propelling this evolution, in which case humans-as-we-know-them, “intelligent” machines, and Sapient systems would all become parts of that greater existence (just as I have in my body some symbiotic bacteria, and in every cell some mitochondria that billions of years ago were independent cells themselves).

Humans would most likely continue to occupy a niche in the noosphere. (The Terminator scenario, to me, is just us Western humans projecting that maybe someday someone or something will do to us what we’ve done to the whales, the buffalo, the rain forest, the American Indians. The world hardly ever works that way.)
As to whether we can or cannot “create something greater/better than ourselves”, I believe that’s a bit misleading – we can surely create something more effective, more powerful than ourselves. But I feel that in order to qualify for the sentience/humanity/sapience “green card”, the system in question would have to go beyond task performance and demonstrate the ability to engage in MORAL (or ethical, if you will) thought.
As humans we face moral doubts, feel irrational fears and insecurities, concern ourselves with others’ opinions, and wonder about these kinds of things. If our successor species can continue to work out these issues – and if we’re lucky there will be a successor species; otherwise, when H. sapiens bites it (and we will someday), it’s all over – I do not mind whether that successor is Homo Cecilius or “a life form spawned in a sea of information”.