Since scientists are trying to create artificial intelligence, which would mean, in effect, creating a mind, would it be easier, as hard, or harder to take something that already has a mind, like a dog, a cat, a monkey, or whatever, and give it sentience?
How can you tell that a dog or cat or monkey doesn’t already have sentience?
How do you reprogram a mammal?
A very basic definition of sentience is simply knowing that you exist. Cogito ergo sum: I think, therefore I am. I have no way to prove it, but to me the behavior of dogs, cats, and other animals strongly suggests self-awareness. And supposing animals weren't sentient, how in the world would we "give them sentience"?
As for artificial intelligence, it already exists. The game Halo is an example of some very decent A.I. Enemies react quickly, form teams, use flanking tactics, and behave in a reasonably sensible manner (i.e., they aren't prone to walking off cliffs or charging blindly into battle).

But will a piece of software that is self-aware ever be designed? Inventor and futurist Ray Kurzweil seems to think so, but really no one knows for sure. Kurzweil doesn't like to admit it, but it's possible that no matter how much computing power we throw at the problem, true consciousness may never emerge. If that turns out to be the case, though, you can bet computer scientists will cook up something darn close to the real thing. One good idea along those lines is the Cyc Knowledge Server.
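To give a flavor of what game A.I. like Halo's amounts to under the hood, here's a toy finite-state-machine sketch in Python. All the states, thresholds, and behaviors are invented for illustration; Halo's actual engine code isn't public, and this is just the general rule-based idea, not its implementation:

```python
# A toy finite-state machine for an enemy NPC. States, thresholds,
# and behaviors are all made up for illustration.

from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    FLANK = auto()
    RETREAT = auto()

class Enemy:
    def __init__(self):
        self.state = State.PATROL
        self.health = 100

    def update(self, player_visible, player_distance, allies_nearby):
        # Don't charge blindly: fall back when badly hurt.
        if self.health < 25:
            self.state = State.RETREAT
        elif player_visible and allies_nearby >= 2:
            # With backup available, circle around instead of rushing head-on.
            self.state = State.FLANK
        elif player_visible and player_distance < 50:
            self.state = State.ATTACK
        else:
            self.state = State.PATROL
        return self.state

# Example: a wounded enemy chooses to retreat rather than attack.
e = Enemy()
e.health = 20
print(e.update(player_visible=True, player_distance=10, allies_nearby=0))
# State.RETREAT
```

A handful of rules like these is enough to produce behavior that looks tactical, which is exactly why game A.I. can seem smart without being anywhere near self-aware.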
Being sentient is one thing that separates us from the animals. And as for your second question, that's what I'm asking: if it someday becomes possible to create artificial intelligence, and I'm assuming that scientists will want their creation to be sentient as well, then if a way is found to create sentience in a computer, could it be introduced into animals as well?
Well, when I talk about artificial intelligence, I'm talking about the type where sentience is implied, because that's what scientists are ultimately shooting for. I mean, something like a gorilla who can do sign language would be a good start, but ultimately scientists want an artificial human. Of course, whether or not we'll really be able to achieve that is up in the air, but I'm just curious whether working on something that already has a mind, and just introducing a new attribute, would be any harder than creating a whole virtual mind from scratch.
Without getting into the debate about "what is sentience" (which has been hashed to death in Great Debates on various occasions), the primary obstacle to augmenting the intelligence of an existing creature is that we have very little understanding of how the brain processes and stores information, and therefore very little understanding of how we could augment this process. Even if we possessed this information, the structure of the brain being what it is (living cells), we would have trouble interfacing with it and developing mechanisms that code and process information in a compatible way.
What makes implementing AI on computers so attractive is that we have a very good model of how computers work (after all, we built them), we can modify and augment existing computer processors, and we can examine and change the contents of a computer’s memory at will.
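As a trivial illustration of that last point: a running program's entire "mental state" can be inspected and rewritten on the spot, which is exactly what we can't do with a living brain. (The Agent class below is invented for the example.)

```python
# A running program's internal state can be examined and changed at will.

class Agent:
    def __init__(self):
        self.mood = "calm"
        self.goal = "explore"

agent = Agent()
print(vars(agent))            # examine the full internal state:
                              # {'mood': 'calm', 'goal': 'explore'}

setattr(agent, "mood", "curious")   # rewrite part of it directly
print(agent.mood)                   # curious
```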
While scientists are working on trying to interface nervous tissue and electronics, their efforts are handicapped by the (understandable) restrictions on working on human subjects and the difficulty of interpreting the results of animal experiments. Are these difficulties insurmountable? Probably not. But we’re further along in developing purely computer-based intelligences.
I think y’all are confusing sentience with sapience.
Sentience simply means being capable of feeling. Sense, rather than sensibility. All animals have it by nature.
Wasn’t Alan Turing’s test the idea that if you can’t tell the responses you get from the animal/computer from those of a human, then it has true AI?
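For concreteness, here's a bare-bones sketch of the protocol Turing proposed (the "imitation game"): an interrogator exchanges text with two hidden respondents and has to guess which one is the machine. The respond functions here are placeholders, not a real chatbot:

```python
# Skeleton of Turing's imitation game. The respondents are placeholders.

import random

def human_respond(question):
    return input(f"(human) {question} > ")   # a real person types an answer

def machine_respond(question):
    return "I'm not sure, could you rephrase that?"  # stand-in chatbot

def imitation_game(questions):
    funcs = [human_respond, machine_respond]
    random.shuffle(funcs)                    # hide which respondent is which
    respondents = dict(zip("AB", funcs))
    for q in questions:
        for label, respond in respondents.items():
            print(f"{label}: {respond(q)}")
    guess = input("Which respondent is the machine, A or B? > ")
    actual = "A" if respondents["A"] is machine_respond else "B"
    return guess == actual   # True if the interrogator spotted the machine

# Usage: imitation_game(["What do you dream about?", "What's 7 x 8?"])
```

On Turing's criterion, if the interrogator can't do better than chance over many rounds, the machine passes; whether passing means "true AI" is, of course, the part everyone argues about.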
That reply actually answers my question. It would be harder than creating a complete virtual mind. Thanks.