No backstory. Why? Because I can’t be arsed. I will, however, set some parameters:
By meet, I mean interact with in person, not trade IMs with or whatever.
I will propose four types of possible machines. One is clearly a computer, à la Mycroft Holmes from The Moon is a Harsh Mistress. One is a somewhat humanoid but clearly metallic and neuter robot that always walks around unclothed, like Sonny from the I, Robot movie. One is an anatomically correct android who looks like a human in makeup, like Data from Star Trek. And the last is a humanoid android which, though inorganic in structure, is externally indistinguishable from humans, like Julianna (Data’s “mother” from that TNG episode I am too lazy to look up).
Persons feeling I should have written sapient rather than sentient in the subject line are invited to bite my shiny metal ass.
In all cases, you meet this entity and are quickly apprised of its true physical nature. In all cases, the entity claims that, despite having silicon innards, Wi-Fi connectivity, and a positronic matrix, it, he, or she is as much a person as any human.
I’m not labeling anything or anyone sentient unless they can feel pain or panic. If I can send the machine into a Serious Tizzy by threatening to dismantle it, then it might seem sentient.
As for whether it really is self-aware and capable of independent thought, I don’t much care. I can’t accept free will for humans, so I can’t expect it from a machine. But I’d like an equally convincing illusion of free will.
Essentially, the Turing Test. If it acts in more-or-less the same ways that a human would, demonstrates the ability to follow conversations at a normal level of complexity, and otherwise acts indistinguishably from other people whom I believe to be sentient, I’ll go ahead and assume this thing is sentient as well. Frankly, I have a hard time with the idea that sentience is anything more than a set of highly complex and flexible behaviors in any event.
I’m convinced already, now get back to work counting widgets, calculating trajectories or doing your love robot duties as the case may be. What do you want me to do about it, introduce you to my social circle?
Heh. If a human can commit suicide by cutting off small parts of their anatomy while maintaining mental equanimity, I might label that creature insentient.
As for the tizzy – what if the machine doesn’t panic, but nevertheless strongly protests and, if mobile, attempts to escape? Why is loss of control necessary for sentience?
Hi Skald, I’m just feeling my way here, and like I said above, don’t really believe in sentience/free-will.
But even single-celled life is organized around moving toward desired stimuli and retreating from the undesired stuff. If a creature displays strong and believable desires and aversions, it creates a powerful illusion of sentience.
A related phenomenon is watching a doggy/kitty/child wrestling with conflicting emotions. Save the candy for your friend or eat the candy NOW! Showing internal conflict is Sentience Gold.
As for humans who don’t experience pain, that’s the flip side of Mr. Excellent’s question about humans who don’t exhibit self-preservation. If they can commit suicide by calmly gouging out their eyes and digging toward the brain, I might call them insentient.
If a machine is lukewarm about self-preservation, it’s going to seem obviously programmed rather than free-willed.
One need not believe in free will to accept sentience. I mean, it’s obvious to me that I am self-aware. Hell, I’d call that incontrovertible. The same may not be true of you and other persons, but I’m willing to concede it as the most likely case, since y’all behave in ways not unlike me. I need not believe in the traditional model of free will (nor any model) to accept that, and frankly disbelieving in the existence of my own ego seems silly.
If I know the physical nature of this entity, I know it’s not human, and not as much a person as a human.
If such a machine had been manufactured, and operated in such a way as to simulate all human emotional and rational processes, and nothing else, it would seem to suffer all the ills of humans, possibly evoking an emotional reaction in me. It would seem cruel to inflict intentional harm on the machine, but it isn’t. It’s just a machine. I don’t believe a machine should ever be made in such a way, because of its effect on humans. A machine that can emulate a human brain should have a means of eliminating the negative effects of sentience. It could emulate those effects for the purpose of understanding humans to avoid hurting them, but it has no reason to appear to humans as if it is anything but a machine, and nobody should have to worry about hurting its feelings or otherwise causing it to suffer in its own perception. Humans don’t have a choice in their condition, but machines would not be so constrained.
I would regard a machine as a person, and therefore sentient, if through dialogue (spoken or unspoken) the machine perceived itself as “I” and me as “you.” Otherwise, if the machine had an agenda independent of programming, I would consider it a sentient individual (not a person).