I think if a machine appeared to get angry without a logical reason, I would start to wonder about sentience. Or if it exhibited vices or addictions unrelated to what it needed to survive, but somehow related to what it enjoyed.
I picture Captain Kirk meeting Data and trying to do the program-crashing logic twists that worked on so many evil computers. Data listens to him, muses for a moment and then calmly says “bullshit”.
This. My two main Turing test questions would be “why should I believe you’re sentient?” and “if you were to administer the Turing test to someone else, what questions might you ask them?”
I need more information in order to address the intent of your question.
Is one meant to be seeking evidence of sentience or justification for granting personhood? IMO these are two different questions. Also, what are the accepted definitions of both sentience and personhood for this purpose?
I believe that it is.
I’m not sure what you are saying. I’m saying that such a machine would have no need to be limited to operating just like a human, and that making a machine with such limitations would be wrong. In particular, actual humans should not have to worry about harming a machine, whether physically, mentally, emotionally, or in any other way. A machine can be turned off, reset, cloned, reprogrammed. Making a machine that tries to convince a human that it is equivalent to a human is a dangerous thing.
I’m saying that humans and bacteria are constrained to our physical limitations and there is no need for intelligent machines to be so constrained.
The thing is, our human restraints are what make us “better” than machines. That’s why they can’t do what we do.
Really, AI is approaching human evolution backwards. In us, logic and knowledge were layered on top of a preexisting framework of sentience, which came with its emotional foibles. The only way for a machine to reach human-level intelligence would be to actually downgrade.
The reason for developing a sentient machine would not be to pull on our heartstrings, but to have a way for machines to do a task that currently only humans can do. Sentience would be more of a side effect.
Anyways, to answer the OP: the main thing would be to establish that it wasn’t simply programmed to say those words. If it wasn’t, then it is already showing signs of sentience by spontaneously wanting to share its sentient status with me.
So my basic strategy would be to treat it as if it were sentient, and wait to see whether the illusion ever broke down. I would encourage other people to do the same. All of the tests above would have to be administered. And when the majority of the people interacting with it believed it was most likely sentient, that’s when I would know it actually was.
Sentience, or the lack thereof, is ultimately a matter of consensus. The machine itself can’t actually prove it; it can only convince enough people to treat it as such.
Andrew the Bicentennial Man would take issue with the word “unit”; he was a prosthetic human!