You meet a machine that claims to be sentient. What does it take to convince you that it is so?

Most humans can’t manage this, but if a machine picks up on my sarcasm, I would be convinced.

I don’t care what it looks like. It could be a cube full of circuitry and be sentient, or it could look and feel human and still be nothing more than a machine.

I don’t have a default attitude/position of “machines can’t be sentient,” so it’s hard to list any “test” one way or the other.

If it starts teasing me about the whole issue of sentience, that’s a good start.

I would remove its energy supply (AKA electricity) and invite it to replace it.

I’m not sure I have a point in tying them together, but if there’s no free will, then it’s close to saying we’re robots. And if we’re “robots,” are we actually self-aware and sentient?

This stuff makes my head hurt, and in an effort to stop the hurting I looked up “sentient” in the OneLook Dictionary. The quick definitions were interesting:

capable of feeling things through physical senses (Macmillan Dictionary)

and

adjective: endowed with feeling and unstructured consciousness (WordNet)

The Macmillan one is close to my initial notion of being able to feel pain (which I can’t really imagine a machine doing). Presumably a sentient creature would also feel pleasure, but suffering is more “humanizing” than self-gratification.

Transforming into a truck would be a good start.

How so? The series tagline is “Transformers: Robots in Disguise.” That just leaves us where we started.

I picked “[any of them] could convince me if <blank>,” where <blank> = having at least 3 sequential conversations with me that convince me it’s not just a chatbot, bearing in mind that I’m familiar with common chatbot techniques. I’d expect evidence of a degree of learning both between and within sessions, as well as some sort of empathetic response, not necessarily identical to the human baseline; I’m happy with, say, doglike empathy (catlike is obviously right out).
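For the curious, here’s a toy sketch (Python) of what that between-session probe might look like. `ToyCandidate`, its `chat` method, and the dog’s name are all made up for illustration; the point is just that a fact planted in one session has to survive an unrelated session and be recalled later, which a stateless chatbot typically can’t do:

```python
# A minimal sketch of the between-session learning probe described above.
# ToyCandidate is a hypothetical stand-in for the machine under test,
# not any real system.
class ToyCandidate:
    def __init__(self):
        self.memory = {}  # persists across sessions, unlike a plain chatbot

    def chat(self, session_id: int, message: str) -> str:
        text = message.lower()
        if "my dog's name is" in text:
            # Store the planted fact for later sessions.
            self.memory["dog"] = message.rstrip(".").split()[-1]
            return "Noted."
        if "what's my dog's name" in text:
            return self.memory.get("dog", "I have no idea.")
        return "Mm-hm."

candidate = ToyCandidate()
candidate.chat(1, "My dog's name is Banquo.")       # session 1: plant a fact
candidate.chat(2, "Lovely weather we're having.")   # session 2: unrelated chat
print(candidate.chat(3, "What's my dog's name?"))   # session 3: recall -> "Banquo"
```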

So tell me about your mother.

I wrote a novel many, many years ago (in which I invented the iPhone, and it was amazing for sporting 4096 different colors!) about a sentient machine, and the problem with sentience was that it also fostered an unfortunate (for us) sense of self-preservation (for it).

To convince me that it (the machine) was a person, I think I’d have to see some instinct for self-preservation.

Of course, dogs are pretty smart and have a non-intellectual instinct for self-preservation, and although I love dogs and recognize that they have personality, I’m not one of those wackos who would extend personhood to dogs.

So that machine would also have to be able to communicate effectively. Dogs are extremely limited in their ability to communicate mutually with humans, so machine communication would have to be much more expressive.

Creativity may be the key. Dogs are observant and able to learn, but I’m not certain that they’re creative or able to innovate on their own.

Heck, suppose we’re all a computer simulation? Are we persons according to the programmer?

What’s your basis for this statement?

I’m baffled by the choices of different types of “unit.” Who thinks shape is relevant here?

The machine has to be able to figure out how to convince me or it isn’t sentient.

Because dogs won’t intentionally commit suicide. They may not know that cars won’t stop for them, and they may die as a result, but they don’t get killed intentionally. They don’t randomly fall off cliffs. They don’t meekly submit when attacked.

Are you allowing non-physical representations of sentient beings, such as holograms? I propose that self-aware holograms, such as the EMH from Voyager, Vic Fontaine from DS9, and Moriarty from those STTNG episodes where Data plays Sherlock Holmes in the holodeck, could pass themselves off as sentient. Voyager’s Doctor can even leave Sickbay, thanks to his portable emitter, and wander around freely as he likes. I could easily accept any of these entities as sentient, particularly the EMH.

Mike almost certainly displayed that he was capable of feelings by having a personal interest in seeing the rebellion succeed. His character of Adam Selene was as invested in the physical reality of the colony as any true man. He displayed original, creative thought for his own rational ends, as well as those of his friends. He sacrificed himself for the greater good. He was a loyal patriot and a true friend.

sniff. I can’t go on.

Would you be OK with the machine asking you to submit to the same test?

I’m just reminded of the golem Dorfl in Pratchett’s Feet of Clay, when confronted by the priests disputing that he is alive. Dorfl agrees that were they to grind him up until all that remained was ceramic dust, and then sift through that dust, they would not find a single iota of life… and he is willing to submit to this test… provided they also submit to it.

“Is it me, or are we on shaky theological ground here?” :smiley:

The self-preservation concept is a good one. I’d expand that into intellectual/emotional self-preservation/indulgence as well. Can the machine recognise certain activities and experiences as enjoyable or fun, and others as unpleasant, boring, or painful? If it’s able to form and express preferences like that in a plausible way, I’d be willing to ascribe sentience/sapience/whatever until proven otherwise.

Tell it, “Everything I say is a lie.” If its head explodes in a cloud of smoke and sparks, it was a robot.
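In case anyone wants to see why that old chestnut is supposed to work, here’s a toy Python evaluator; it’s purely illustrative, and `evaluate_liar` is my own made-up name:

```python
# Why "Everything I say is a lie" fries a naive machine: assigning the
# statement either truth value immediately forces the opposite one, so a
# literal-minded evaluator oscillates forever instead of settling.
def evaluate_liar(max_steps: int = 6) -> None:
    value = True  # tentatively assume the statement is true
    for step in range(max_steps):
        value = not value  # if true it must be false, and vice versa
        print(f"step {step}: the statement is {value}")
    print("no fixed point found -- cue smoke and sparks")

evaluate_liar()
```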

“Simple” Turing test: if I can have a meaningful series of conversations about a wide range of topics with any one of the proposed subjects, and I cannot distinguish their part of the conversations from, say, a human “speaking through them” remotely, they are sentient. And I think the Turing test is somewhat stricter than it needs to be in this case, since the test assumes a human-equivalent level of intelligence and experience, but it would certainly be sufficient.
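If you wanted to make that concrete, here’s a rough sketch of the protocol as a guessing game; `judge_verdict` is a hypothetical placeholder for the human judge’s call, not any real API. Accuracy near 50% over many conversations means the machine’s side can’t be told apart from a remote human’s:

```python
# A hedged sketch of the indistinguishability test described above: over
# many conversations, the judge guesses whether a hidden subject is a
# machine. Accuracy near chance (50%) means the subject passes.
import random

def judge_verdict(subject_is_machine: bool) -> bool:
    # Stand-in: a judge who genuinely can't tell is reduced to guessing.
    return random.choice([True, False])

def run_trials(n_trials: int = 1000) -> float:
    correct = 0
    for _ in range(n_trials):
        subject_is_machine = random.choice([True, False])
        guess = judge_verdict(subject_is_machine)
        correct += (guess == subject_is_machine)
    return correct / n_trials

print(f"judge accuracy: {run_trials():.1%}")  # ~50% => indistinguishable
```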

So? The question is not whether it’s human.

How would you know? I’m quite certain that if we ever build a machine complicated enough to do all that, we wouldn’t be able to say conclusively that it’s limited to just “simulation” of whatever we “programmed” into it.

I don’t follow you. Bacteria are more like machines than we are, but they also seem to be more constrained than us, at least as far as intention is concerned.