The humanity of a Turing machine

Over in this train wreck of a thread, I said something that I think could break out into its own discussion.

Der Trihs states outright that free will is an illusion: “the concept is incoherent.”

If all behavior is deterministic, albeit very complicated, then I’ll argue we are all just complicated analog computers. Just complex state machines. I think Der Trihs would agree.

So…

Imagine now a Turing machine - a quantum computer designed in some nebulous future that passes every test. It reacts just like a human would: it gets offended, it insists it’s alive, it has an opinion about God. All that.

Given this humanist view that we are all just biological computers without a soul or true free will, I’d argue that humanity is then defined by acting human. There’s no soul or anything magical that elevates “human” above the level of behavior, the level of action and reaction.

If this Turing machine acts human, is it then, in fact, human? If not, how is being human different from being this proposed machine?

We can assume that this machine has a physical, moving component - it’s fully capable of replicating itself. Let’s not let a simple thing like reproduction or mere physicality be the defining difference.

As a veteran of one of those recurrent “free will” debate threads, I’m under the impression that no matter how new the thread, one enters the debate on page 73206, with the other participants already deeply mired in a network of terms and definitions that are not overtly identified.

(Your logic sounds reasonable, by the way)

To me, the statement “I am a person with free will” has more than one term worthy of intellectual dissection, and focusing on “free will” is probably a mistake when the real argument of the opponents of free will (in my opinion) is that there is no such thing as “I”.

Which means that they have a notion of what “I” would mean if there were, in fact, an “I” who could conceivably possess free will.
Can free will exist and yet not be possessed by anything quite resembling what we conventionally think of as the unitary individual self?

Are consciousness and volition possible even if “the individual self” is a local and illusory manifestation of long-wave phenomena like culture and even the mechanistic unfolding of inertial physics?

We’re assuming this machine has a physical body, with DNA, is a mammal, etc.? Yeah, I’d say it’s human then. You can only define things by their characteristics, and if they are identical to a human’s, then it is human.

Nope, I"m suggesting it’s silicon-based.

So, to you, humanity is about the machinery, not the behavior?

Would this machine be allowed to have “human rights”?

Ooooo - shades of ST:TNG’s “The Measure of a Man.”

Data is intrigued until he discovers that it is Maddox’s intention to download Data’s memories into another computer, deactivate him, and then disassemble him. Data points out that Maddox doesn’t have the necessary knowledge to carry out this procedure safely, and so he refuses to undergo it.

Maddox then issues an order backed up by Starfleet command for Data to submit himself to disassembly. Picard refuses to allow Data to go along with the order and Data concludes that only his resignation will allow him to circumvent the order. Maddox, however, contends that Data cannot resign as he is the property of Starfleet, not a sentient being with rights.

I’m suggesting that there are characteristics beyond how a thing acts that can be used to define it. I would probably say that behavior is the more important of the two and that such a being should be given the same rights as a natural human.

Belrix,
Is humanity really the word that describes what you’re after here? The definition requires more than behavior. Humanity is the human species, human nature (e.g. compassion, altruism) AND the human condition (the totality of experience of existing as a human).
[Emphasis mine]

The human species as a machine is part of that definition. A machine of another type makes the use of “humanity” unavailable.

Human beings, or humans, Homo sapiens sapiens (Homo sapiens — Latin: “wise human” or “knowing human”), are bipedal primates in the family Hominidae. mtDNA evidence indicates that modern humans originated in Africa about 200,000 years ago while nDNA indicates about 1 Mya. Humans have a highly developed brain, capable of abstract reasoning, language, introspection, and emotion. This mental capability, combined with an erect body carriage that frees the forelimbs (arms) for manipulating objects, has allowed humans to make far greater use of tools than any other species.

Maybe I don’t understand what it is you’re aiming for, but I suspect there’s a better measure for it than humanity.

PC

I don’t think humanity is defined as “acting human.” At least in the language I speak, there could be something which acts just like a human in every way, yet which has no consciousness. (Note: I’m not claiming this is a metaphysical possibility, just that it’s a linguistic possibility. My only point is that the definition of humanity, in my language and I suspect everyone else’s, goes beyond behavior and involves something a bit less effable involving first-personally lived experience.)

On the other hand, there’s the separate question of how to know whether a given object is human or not. If something acts in a way completely indiscernible from the way a human acts, then even if we think it’s (at least conceptually) possible for something to be like this yet not be human, how could we tell the difference? By the terms of the scenario, we couldn’t–it acts exactly like a human, so there’s no way to tell that it’s different than human if indeed it is different than human.

It’s a classic “will to believe” scenario. It’s an important decision, one which can’t be avoided (or anyway, will be unavoidable at some point in the future) and it appears that no reasoning, a priori or empirically based, can decide the thing for us. So, it seems, we just have to pick something to believe here. (Is it possible to “just pick something to believe” though?)

So I advocate (I “choose to believe” I guess) the “benefit of the doubt” approach. If it acts human, treat it as human, just to play it safe. (We don’t want to go around accidentally treating genuine humans as though they were non-humans, after all.)

Note: I’m using “human” in the way the OP seems to use it, to mean what I would normally call “sentient.”

-FrL-

It might be a person, but it’s not a human. Homo sapiens and all that.

An operating systems professor of mine once made the comment, “There are no problems in the field of computing that aren’t amenable to another level of indirection.” To avoid overexplaining, consider that to mean replacing programmatic structure with (changeable) data. A simple example to get the point across: static webpages vs. those generated from data stored in a database. Bear with me here; hopefully the relevance will become clear shortly…
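To make that “structure vs. data” point concrete, here’s a toy sketch (the names and example are made up, purely for illustration) of the same behavior expressed twice: once baked into the program’s structure, once driven by data that can be changed without touching the code:

```python
# "Static" version: the behavior is hard-coded into the program's structure.
def greet_static(language):
    if language == "en":
        return "Hello"
    elif language == "fr":
        return "Bonjour"
    return "?"

# Indirect, "data-driven" version: the same behavior lives in a table that can
# be inspected, edited, or swapped at runtime; structure replaced by data.
GREETINGS = {"en": "Hello", "fr": "Bonjour"}

def greet_data(language):
    return GREETINGS.get(language, "?")

GREETINGS["de"] = "Hallo"   # behavior changes by changing data, not code
print(greet_data("de"))     # prints: Hallo
```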

So, one difference between humans and the (theoretical) Turing machine of which you speak is, as Der Trihs puts it: humans have almost no awareness of the workings of [their] own brain, while a machine can (theoretically) be designed with that “awareness”. In other words, we can’t know why we make the decisions that we do, as that level of introspection is closed to us. Not so for a machine; the decision process itself might be an object of study, evaluation, and even possible replacement. In some sense, that would allow such an AI to be “super” human, as no part of its operation need be opaque to itself.
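A minimal sketch of what that kind of self-access might look like (hypothetical names, purely illustrative): an agent whose decision rule is itself just data it can examine and replace, rather than opaque wiring:

```python
# Toy agent whose "decision process" is an ordinary, inspectable object.
# Unlike us, it can read its own rule, report on it, and even swap it out.
class Agent:
    def __init__(self, decision_rule):
        self.decision_rule = decision_rule   # the rule is data, not hidden wiring

    def decide(self, situation):
        return self.decision_rule(situation)

    def introspect(self):
        # The agent can inspect its own decision machinery...
        return self.decision_rule.__name__

    def replace_rule(self, new_rule):
        # ...and replace it, a level of self-access our brains don't offer.
        self.decision_rule = new_rule

def cautious(situation):
    return "wait"

def bold(situation):
    return "act"

a = Agent(cautious)
print(a.decide("stranger approaches"))   # prints: wait
print(a.introspect())                    # prints: cautious
a.replace_rule(bold)
print(a.decide("stranger approaches"))   # prints: act
```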

I have difficulty wrapping my mind around this; even given the above, a state machine must be in operation, and so it’s “turtles all the way down”. Intriguing as it may be, I don’t really see the idea changing the “free will” debate. And I further expect that “free will” will consistently fail as a measure for determining “humanity”, even if one could define it adequately. The same goes for “self-awareness”, “consciousness”, “sentience”, and a host of other terms.

Instead, there are other aspects of being human implied by the term “humanity”, not the least of which include emotion, culture, and morphology. ISTM that one’s consideration of a machine as “human-level” depends greatly on how much credence one gives to those aspects.

At its root, (non-)acceptance comes down to one’s formulation of “other”; as history shows, people are all too ready to classify some individual (or even group) of “other” as “inhuman”. For the most part, a functionalist (or behavioralist) definition seems most appropriate to me, so yes, I think that such a machine should be considered “human” for most purposes implied by the term “humanity”.

Of course, the most obvious counter to that is to use a biologist’s definition of “human”: a machine cannot mate with a human, therefore they are different species, therefore the machine isn’t human. And round and round we go…establish a defining set of characteristics for “humanity” and perhaps it would be fruitful conversation. But I doubt there’ll ever be a consensus…

I also would consider it a person, but not human; the same would go for an uplifted animal or an extraterrestrial.

That’s certainly not the argument I was making. I was making the argument that the concept of free will makes no sense; that we can be deterministic, or random, or a mix, but not possessed of some “free will.”

[hijack]

Can a quantum computer be a Turing machine?

[/hijack]

FWIW, I used a quantum computer since it clearly puts this in the future. On the other hand, I think quantum computing involves multiple levels per “bit”, so to speak; more than just 0 and 1, there are more states.

This, I thought, seemed more analogous (literally) to our own brains, so it was a fun connection.
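Here’s roughly how I picture a single qubit, sketched in ordinary code (a simplification with made-up numbers, just to show why it feels more “analog” than a plain 0/1 bit):

```python
import random

# A qubit's state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1: a continuum of possible states, not just 0 or 1.
def measure(alpha, beta):
    p0 = abs(alpha) ** 2                 # probability of reading out a 0
    return 0 if random.random() < p0 else 1

alpha, beta = 0.8, 0.6                   # 0.8^2 + 0.6^2 = 1: mostly 0, partly 1
print([measure(alpha, beta) for _ in range(10)])   # a mix of 0s and 1s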

Yes, by the way, I mean “human” in the humanity/sentience meanings, not DNA/gooey fluids.

Hmm…

If you put this machine in a crusher, would you be guilty of murder?

An afterthought to my previous post: a similar issue crops up when using the term “humane”. In my mind, particularly considering history, “humane” is a grossly misdefined term. Is there a better example of the limits of inhumane treatment than the human species itself?

But we pick and choose what aspects of behavior we put under that umbrella, and as fluidly as we need at any given time.

If you want to ignore the difference between “human” and “person”, then yes, a computer that exhibited the behaviors and awareness we associate with a person would in fact be a pers- a ‘human’. Heck, a lot of people think their pets are people, due to anthropomorphising their behavior. (Personally I think these anthropomorphizations are misinterpretations of the animal’s motivations, but a computer-sentience could theoretically be able to speak and thus clarify such issues.)

Further, I don’t think that quantum computers would be necessary - normal ones would do, just really, really fast ones, which could emulate all the physical hardware complexity required. Even if that means emulating the chemistry and biology of a human body!

Under current legal definitions, I don’t think so - I think it has to be a human. I do think it would be immoral to kill such a sentience in a permanent manner, though, and in the unlikely event that we ever gave such entities rights, I imagine such rights would protect against unsolicited shutdowns.

I apologize for being such a noodge about definitions, Belrix. It’s inescapable when delving into philosophy of mind, as there are so many unspoken assumptions and differing conceptions that use the same words. If by sentient* you mean “responsive to or conscious of sense impressions”, then the problem really becomes what we mean by conscious and whether a machine can do that.

These questions turn out to be much harder than they might first appear. What does it mean to be conscious? Are other animal machines conscious? Is being “alive” a prerequisite for consciousness?

PC
*Wikipedia’s definition is even more noodgey.

Fallacy of the false dilemma.

Forgot to address this…

Definitions again, sorry. Guilty of murder as defined by current law? No. Might an average human feel guilty as a result of doing so? I don’t know. I suspect I would.

PC

Everybody has their favorite definition.

Let’s put it this way, to expand my murder question…

Should a machine capable of acting/reacting in a completely human manner be given the rights & responsibilities of a biological human? Is the ability to fully emulate a biological human sufficient to give this machine the same rights that any natural-born human should expect?

Would it be legal to crush it if you were its builder? I can’t kill my own child; he has the right to live by virtue of his biology. Would the machine have the same rights?

As far as sentience, the machine would answer the same way a human would - of course it’s sentient. Aren’t you? Of course it’s conscious…

I say “crush” because “turn off” might be argued as being asleep or comatose.

And to this end they built themselves a stupendous super computer which was so amazingly intelligent that even before the data banks had been connected up it had started from “I think therefore I am” and got as far as the existence of rice pudding and income tax before anyone managed to turn it off.

The clarification of the unclear.

You may not be satisfied with my answers here either, I’m afraid. Rights are just permissions (or lack of restrictions) that we grant to each other. In order to answer your “should” question, we can examine our reasons and motivations for granting some rights and not others, but we won’t find some ultimate, objective authority for anything we come up with. We create rights and write laws to promote what we value and reduce that which we dislike. Our values are products of our genes and environment. In other words, morality is an aesthetic judgement. But this is a topic for endless debate in its own right.

A machine that did not want to be destroyed would appeal to my values as something that should not be destroyed. If enough others around me agreed then we’d agree it has a right not to be destroyed. We might even write that down and call it a law.

PC