Anyone who has seen the anime series “Chobits” knows that in that world, you can buy beautiful human-like robots called persocoms. They are walking, programmable computers, able to mimic human responses and obey your every command.
Therein lies the question: if you can simulate human responses to any given situation, and make them react as if with emotion, does that make them sentient?
One definition of sentience is the ability to feel emotions. If I slap your face, you would be surprised, then angry. Given the right operating system, I can program that exact same response to that exact same situation. Given a learning algorithm, I can have it learn that certain situations should elicit a specific response. With each situation a persocom encounters, I teach it the proper way to behave, react, or respond. Is it now alive? Is emotion the real criterion for being sentient? Is it really emotion if I program it into a computer? Isn't that the same way you learned to respond to certain situations?
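The "teach it the proper response" idea above can be sketched as a tiny Python program. This is a minimal illustration, not an actual persocom architecture; the class and method names are my own invention.

```python
# Illustrative sketch: a response table that is filled in by teaching,
# mirroring "with each situation I teach it the proper way to react".

class Robot:
    def __init__(self):
        self.responses = {}  # situation -> taught reaction

    def learn(self, situation, reaction):
        """Teach the proper reaction to a given situation."""
        self.responses[situation] = reaction

    def react(self, situation):
        # Untaught situations get a default, "blank" response.
        return self.responses.get(situation, "confused stare")

robot = Robot()
robot.learn("slap", "surprise, then anger")
print(robot.react("slap"))  # the exact response it was taught
print(robot.react("hug"))   # never taught, so: confused stare
```

The philosophical question is whether filling in that table (by hand, or by a learning algorithm) is any different in kind from how a child's responses get filled in by experience.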
I can imagine advanced machines that *mimic* sentience by having responses to every conceivable situation pre-programmed into them – this is the ‘Chinese room’ model of sentience, which needs a lot of input from the programmer, and the robot does not need to be able to learn in a self-directed fashion.
I can also imagine sentient AI without emotions, and also AI that have different emotions to humans. Stronger emotions perhaps, or an emotional response to the collection of information, or to the correct processing of information, or a love of efficiency that is as strong as or stronger than the love of one human for another.
There is no reason to limit the potential of AI to the merely human.
Once you can elicit a positive response in a machine – a pleasant or good feeling – you could attach that feeling to any situation, even attach a pleasant emotion to blind obedience.
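"Attaching a feeling to a situation" can be pictured as binding a reward value to arbitrary events. A minimal sketch, with invented names and values purely for illustration:

```python
# Sketch: a "pleasant feeling" is just a reward signal that can be
# bound to any situation the designer chooses -- including obedience.

feelings = {}  # situation -> reward value

def condition(situation, reward):
    """Attach a feeling (positive or negative) to a situation."""
    feelings[situation] = reward

def feeling_about(situation):
    return feelings.get(situation, 0.0)  # neutral by default

condition("obeyed an order", +1.0)  # blind obedience now feels good
condition("was slapped", -1.0)      # being slapped feels bad
```

The unsettling part of the argument is exactly this arbitrariness: nothing in the mechanism cares *which* situations get the positive value.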
However, you can’t make a sentient being that is unable to break its emotional programming, IMO.
That really all depends on what you mean by sentient being. In the Chobits series, every persocom had a learning program. There were data modules that came with pre-programmed responses and learned abilities, but the most basic capability in any persocom's operating system was its ability to learn. Once the complexities of simulated emotional response are mastered, how do you tell whether a robot is just a machine or a sentient being made of artificial parts?
Isn't there a contest – the Loebner Prize, based on the Turing test – where a group of judges type questions into a terminal and have to determine whether the responses come from a human or a computer program? I don't know if any programmer has won it outright yet, but last I heard they came really, really close.
That's what the other poster is talking about: the “Chinese Room” would be able to fool the game, but it isn't sentient. The room has a homunculus of sorts, represented by the guy inside who feeds out the right output according to a table of inputs and outputs. Just Google John Searle and the Chinese Room and you'll find something about it easily. Searle himself hates that this minor issue in AI is what he's most famously known for, and not his tireless efforts at kicking the crap out of idiot postmodern antirealists whenever they pop their ugly stupid heads into a discussion.
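The Chinese Room can be stated almost exactly as code: the man in the room is a pure lookup over a pre-written input/output table, producing fluent answers with zero understanding. The table entries below are invented for illustration (any phrasebook responses would do).

```python
# The Chinese Room as a lookup table. The "operator" matches symbols
# he does not understand against a rule book and copies out the answer.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "It's nice."
}

def chinese_room(message):
    # No parsing, no meaning, no learning -- just table lookup,
    # with a stock "please say that again" for anything unlisted.
    return RULE_BOOK.get(message, "请再说一遍。")

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```

Searle's point is that even a table large enough to fool every judge in the game would still be doing nothing more than this function does.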