The humanity of a Turing machine

Well, I’d disclaim that it was appropriate in response to my post. And…

In terms of the OP’s question, it most certainly is relevant for some classes of argument – namely, those that assert that humans are more than just “biological machines” (most often, goddidit). If one is to assert that, then your example can be used in support of that assertion – even if they cannot do so for all possible “programs”, humans are able to identify (certain) programs as undecidable that machines cannot, ergo machines are not human and thus lack “humanity”.

Now, I believe that humans are biological machines, there’s no “extra” non-biological component, and so that reasoning carries no weight with me. And I agree that the halting problem is very much a digression from the discussion in this thread. However, I feel obligated to point out again that bringing it up in response to my post (“a machine can (theoretically) be designed with that ‘awareness’”, where “awareness” was used to indicate complete access to and modification of internal state and operation) was perfectly cogent.

I think you mean “non-halting” rather than “undecidable”? (What would it mean for a program to be undecidable?)

Anyway, there are no specific programs which machines cannot identify as non-halting; after all, for any finite set of inputs, there is some machine which gives the correct response to all of them. So the idea that humans are superior to machines because of specific programs humans can identify as non-halting which the latter cannot is ill-founded. Of course, if you pick a specific human, and a specific machine, either one may beat the other on a specific input.
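The observation above can be made concrete with a small sketch. The program names and halting facts below are hypothetical placeholders; the point is only that for any *finite* set of inputs, a trivial lookup table is a "machine" that answers the halting question correctly on all of them.

```python
# Hedged sketch: for any finite set of programs, a lookup-table "machine"
# decides halting correctly, provided the table was built with the right
# answers. The entries here are made-up examples, not real programs.

HALTING_FACTS = {
    "program_a": True,   # halts
    "program_b": False,  # runs forever
    "program_c": True,   # halts
}

def finite_halting_decider(program_name):
    """Correctly answers the halting question for every program in its finite table."""
    return HALTING_FACTS[program_name]

# This machine is correct on its entire (finite) domain -- which is why no
# specific, finite list of programs can separate human from machine ability.
```

Of course, such a machine is useless off its table; the interesting question, as the next paragraph notes, is about infinite classes of programs.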

Conceivably, there could be some particular infinite class of programs which were all properly identified as non-halting by some particular human, even while no particular machine could properly identify all the members of that infinite class. That would actually demonstrate a difference between human and machine capability. But I cannot think of any examples of this sort of thing; indeed, I say with confidence that no one has any examples of this sort of thing. It is just as plausible that tea leaves, when subjected to the appropriate procedure, will correctly identify the halting status of some infinite class of human-procedures, even while no particular human could do so. All these are mere possibilities with no evidence.

As for the relevance of the Halting Problem to self-awareness in the sense of “complete access to and modification of internal state and operation”, I hardly think it falls under that aegis. The Halting Problem isn’t a question about “What am I doing right now?”. One may be perfectly ‘aware’ of one’s current state, while still being unable to answer the question “Does the procedure I am currently stepping through eventually reach one of its termination steps?”. One thinks of the man who sits down and decides to manually scan through the digits of π till he possibly finds his social security number in there [though, of course, it may not actually be in there at all]. He knows exactly what he’s doing; he can be perfectly aware of all his “internal state and operation” as he does so. But, all the same, he doesn’t know from the get-go what the eventual outcome will be; he has the self-awareness but still doesn’t know if he’ll halt.

Unless there’s a terminological convention of which I’m not aware, for the purposes here, the terms are interchangeable. I suppose one might require more precision for expansive discussion; for instance, to differentiate between possible results of a natural language parser processing an ambiguous statement – being ambiguous, the semantics (and thus the parse) are undecidable, but the program may halt when such ambiguity is identified. I don’t think that level of precision is necessary here, but am willing to go along with it if you insist.

Yes, this was my point in mentioning “clever (or not so clever) hacks”. Since I believe that humans really are simply “biological machines” (with some very clever hacks) and do not have a so-called “non-biological component”, I have no problem agreeing that the argument is ill-founded, at least in practice. That doesn’t change its potential argumentative use in principle, though, as you admit:

Hopefully, to close out this digression (it’s not much fun to repeatedly have to justify why I was wrong originally ;)):

In this context, of course the halting problem concerns “right now”, although I’ll grant that the halting problem is not only about that.

Let’s step back for a moment – in my response to Chronos, I said, “Not that I expect it would make a difference for a machine of the caliber required for this thread,” which was an attempt to make explicit my thought that such a machine would require the ability to process arbitrary input. So, from this standpoint, I have to concede his point on in-principle grounds. (Perhaps you’d argue the need for such a requirement; I’ll ask you to assume it and not pursue that, as it was part of my original statement’s intent.)

But that doesn’t address your “right now” point, which I’ll get to. My clarification of “awareness” is, as you quote, “complete access to and modification of internal state and operation”. Assume that the “state” under examination is a decision process itself: there’s an evaluation program making a decision about selecting a decision process. In the abstract, this evaluation procedure may very well be structured as a recursive stack of decision programs. I see no reason to impose a theoretical limit on the depth of recursion (although perhaps an argument along the lines of Smith’s “infinite tower” 3-LISP implementation would apply, but I’ll admit my recall is sketchy (at best) and that I’d have to revisit it to work it out). At any given time, the program on the top level of the stack is being executed; we’re concerned with “right now” because the machine is potentially attempting to resolve an infinite number of programs, an attempt that will never complete.

Furthermore, I see no reason why one would assert that it is impossible for any one of the decision processes in the recursive stack to be individually undecidable. In which case, “right now”, the machine is attempting to modify/update its internal state (choosing a decision program) on which its continued operation depends (the actual decision). Until such output is generated, the machine is in a state of “undecidedness”; it cannot proceed, as it does not halt, due to its inability to complete one step of an attempt to modify its (current) state.

So, yes, I still think Chronos was correct, while also holding the opinion that we’re pretty far afield of the OP’s intent.

I think maybe it is just a terminological convention of which you’re not aware. Problems can be undecidable, meaning there’s no algorithm solving them; for example, the Halting Problem itself is undecidable, as is, say, the problem of determining whether an input polynomial equation has a solution in integers. But one doesn’t speak of the decidability of a program, any more than one speaks of problems halting.
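The undecidability of the Halting Problem itself comes down to the standard diagonal argument, which can be sketched in a few lines. The `halts` oracle below is hypothetical; the whole point of the construction is that no such correct, total oracle can exist.

```python
# Sketch of the classic diagonal argument: *if* a total function halts(f)
# could correctly report whether calling f() terminates, we could build a
# program that defeats it. `halts` is a hypothetical oracle passed in as
# an argument -- it does not and cannot actually exist.

def make_trouble(halts):
    def trouble():
        # Ask the supposed oracle about this very function...
        if halts(trouble):
            while True:  # ...and loop forever if it said "halts"
                pass
        # ...or halt immediately if it said "loops forever"
    return trouble

# Whatever answer halts(trouble) gives, trouble() does the opposite, so no
# correct total halts() exists. It is the *problem* that is undecidable,
# not any individual program -- matching the terminological point above.
```

For example, feeding in a fake oracle that always answers "loops forever" produces a `trouble` that promptly halts, contradicting that oracle.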

Heh, fair enough:

If a process being “individually undecidable” means it doesn’t halt, then, sure, this can happen. If a process being “individually undecidable” means no algorithm correctly determines whether or not that process halts, then, of course that can’t happen: some algorithms will say that process halts, and some will say it doesn’t, and at least one will get it right.

Huh? I guess you’re saying the machine freezes up because it was unable to come to an answer on a particular question (“Does this input describe a halting process?”). That this has anything to do with self-awareness strikes me as an odd position to take.

Solving the Halting Problem comprises knowing the answer to questions such as “Does 239857125112354 ever appear in the digits of π?”, “Are there any counterexamples to the Goldbach conjecture?”, “Do odd perfect numbers exist?”, etc. It seems odd to me that one’s self-awareness [in the sense you state of awareness of internal state, reasonably construed] should be contingent on being able to answer questions such as these; like I said, even if I sit down and start manually listing through numbers in an attempt to find an odd perfect one, I hardly see how an awareness of my internal state as I do so should compel me to know if such a number exists for me to eventually find. I am fully aware of what I am doing: I am searching for odd perfect numbers. I just happen to be unaware of whether the search will be successful.
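The odd-perfect-number search in the paragraph above can be written out directly, and the code makes the point vivid: the searcher's entire internal state is visible at every step, yet neither it nor we know from the outset whether the unbounded version ever halts, since no odd perfect number is known and none may exist. The function names here are my own illustrative choices.

```python
# A searcher that is fully "aware" of its state (n is always known), yet
# whose halting status, when run unbounded, is a genuinely open problem.

def is_perfect(n):
    """True if n equals the sum of its proper divisors (e.g. 6 = 1+2+3)."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def search_odd_perfect(limit=None):
    """Search odd numbers for a perfect one; may never terminate if limit is None."""
    n = 1
    while limit is None or n <= limit:
        if is_perfect(n):
            return n   # would halt here -- but does this line ever run?
        n += 2         # complete awareness of internal state at every step
    return None        # the bounded search gave up

# With a bound it halts; without one, nobody knows.
```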

Yes, I agree (indeed, that was my point at the beginning). Sorry for the hijack. :slight_smile:

So, does lack of free will in your mind include or exclude actions arising from random events which cannot be predicted? As I asked before, is it not free will only if one’s actions are determined in advance, or only if the reasons are determinable post hoc?

It’s not free will in any case. I don’t consider “free will” to be a concept that’s been defined in a logically possible way; everything is either random, or determined, or a mix, but not ‘free’ in some unimaginable fashion that transcends those. ‘Free will’, IMHO, is just the name we give to our own inability to perceive our own decision making processes.

I think he was referring to the Ender’s Game series’s hive queens and their workers that were (more or less) just separate bodily extensions of themselves, like our individual fingers, rather than Lanik Mueller’s regenerations in [A Planet Called] Treason.

If we can observe and interpret the state of the neurons at the instant before the decision is carried out, why couldn’t we similarly go back a few nanoseconds earlier and observe and interpret the state of the neurons then, too? And another few nanoseconds earlier as well, and another few nanoseconds before that…

The brain state is not something that magically springs into existence just before a decision; it’s a continuous thing, shifting from one form to another based on its prior state, and possibly also on random factors that emerge regardless of the prior state. Each state flows out of the prior state in a continuous stream. What you decide now is based on what you are now, statewise, and that’s based on what you were yesterday and the day before and the day before that, and how that state changed due to internal processing and external influences (picked up via your senses).

You seem to be assuming the existence of another factor that occasionally ‘dips in’ to the brain and ‘makes the decisions’, presumably by altering the brain electrochemical state or manipulating seemingly random factors. Presumably you’re hinting at a ‘soul’ or some other such poppycock - but it doesn’t matter for this discussion, because whatever is supposedly sticking its oar in, has its own decision-making processes, which themselves are either determined by their prior state, or random.

In discussions about free will, I’ve seen people who think that positing a ‘soul’ gives them a magical avenue by which some magical substance ‘freeness’ can be pumped into the brain. But it doesn’t work that way. In reality they’re just positing another layer that asks the exact same questions as the brain - how much is determined by the prior brain/soul state, and how much is purely random?

Myself, I save time and assume that in discussions like this, when we’re talking about the human mind, we’re talking about the whole human mind - and if there’s some little man with an ethereal steering wheel stuck in the back of your cranium doing your thinking for you, that includes his mind. The discussion isn’t really tied to brain chemistry or physicality in any way, so it seems best not to bother with such trivialities as where the thought is taking place, right?

Which means that, soul or no, if actions and state can be determined post-hoc, then there is no randomity. For some definitions of free will that would demonstrate that there is no free will - and for some it wouldn’t. It is not a definition of free will itself, at least not any that I ever heard of.

It does by every standard definition of free will I know of. What’s your definition?

I reject it because it’s clear that we are at least somewhat subject to the dictates of our mood, memories, knowledge, inclinations, and whims. Not because humans aren’t “totally free”. What is your definition of ‘totally free’, by the way? Or even just ‘free’, while we’re at it?

Firstly, I don’t give one hairy care whether the determining factor is non-physical - what does the physicality of something have to do with ‘freeness’? If you have a homunculus sitting in your brain, then his thought processes are either purely determined by his prior thought processes, or his thought processes include purely random elements. If they do include randomity, then randomity informs your “regular” brain’s processes. If they don’t, then your “regular” brain might be fully deterministic - or not. Either way, your behavior is driven by some mix of complete determinism, and complete randomity, and nothing else. If ‘freeness’ is neither of the above, then consider it proven that you have no free will.

At least, for any of the common definitions of free will that I’m aware of. I’d be willing to entertain another - that you only have free will if some of your cognition during any short period is not entirely dictated by outside forces. Of course, this would depend on the definition of ‘outside forces’. If we are purely physical, then the answer is yes, we have free will; our brains and bodies are our self, and our brains act on internal state quite a lot, as opposed to being entirely slaved to the momentary inputs of our senses. If we are being controlled by some non-physical ‘soul’, then the questions are 1) is that soul an ‘outside force’? and 2) if not, does it have free will, or is it controlled? (Note that this definition has nothing to do with either randomity or predictability - it’s a non-standard one, alright!)
(Oh, and bingo, Chorpler. That seemed like a direct analogue of a robot that made numerous bodies controlled by a single mind.)

Now that’s very reasonable. I for one have no idea whether a certain process of thought and decision making count as free will or not.

Has anyone noticed that it is only earthlings who are concerned with free will?

I agree that we might be able to model the brain as a set of states and state transitions, but some of the state transitions might not be determined by the current state and the inputs, as they would be in an FSM, but occasionally by some random factor. In that case you cannot go backwards so easily. Even in an FSM, you can’t if one state has two or more possible predecessor states. Ever try to figure out why a wrong value got into a flip-flop by going backwards through gates? Impossible, unless you’ve recorded the values in the cone and the previous time frame.
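The non-invertibility point can be shown with a toy state machine (the states and transitions below are made up for illustration): stepping forward is trivial, but a state with two predecessors cannot be stepped backward without a recorded trace.

```python
# Hedged sketch of running an FSM backwards: when two states transition
# to the same successor, the forward step is not invertible, so a later
# state underdetermines the earlier one. This tiny machine is invented
# purely as an example.

TRANSITIONS = {   # state -> next state (a deterministic FSM)
    "A": "C",
    "B": "C",     # A and B both lead to C...
    "C": "A",
}

def step(state):
    """Forward step: always unique."""
    return TRANSITIONS[state]

def predecessors(state):
    """Backward step: all states that could have preceded `state`."""
    return sorted(s for s, t in TRANSITIONS.items() if t == state)

# Going forward is easy; going backward from "C" is ambiguous --
# predecessors("C") gives both "A" and "B", so without a recorded trace
# you cannot say which path the machine actually took.
```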

Not me - but I’m not sure how true free will works without this kind of thing. It’s a good argument against, actually.

You might need a very little push to make the brain have free will. One problem with bodily intelligence is that you can trace back thoughts to physical processes, which, except for chaos and randomness, are determined. A “spirit” may not have this problem, but you’re right in that it by no means is free by definition. A bigger problem for souls is that we have no evidence that there is any such thing.

To some extent, it seems to me that proponents of free will assume the little man driving. Where thought happens is immaterial. And not only is the whole mind involved, but the whole body too, since our brain often obeys the needs of our bodies.

Like I said, I don’t have one. The concept is seeming more and more incoherent the more I read and the more I think about it. I think that neither I nor anyone else will be able to predict what I do next. I don’t really care if someone calls that free will or not.

Because non-physical things may not have their thoughts determined by brain chemistry or physics. If free will means non-determinable, post hoc, proponents of this might need spirits.

Right, it is homunculi all the way down. I sensed that those positing free will must have something with will, and I modeled that something as the homunculus. There are all sorts of games which are a mix of chance and determinism, but they don’t have free will.

So, if we are mostly controlled by outside forces, but every so often throw mental dice and do something crazy, is that free will? The soul is probably not considered outside but inside.

If I am both disagreeing and agreeing with everyone, it is because I find this subject incoherent. Which is probably why my free or determined will avoided it until now.

This whole free will thing seems like a semantic or sophistic thing to me. It might even rise to the level of a straw man. For the purposes of physics, yes, our decision making is ultimately deterministic. But on a personal level, we all have the ability to make choices. Free will is not the opposite of determinism. If your definition of free will contradicts determinism, then it isn’t the correct definition. It’s like saying, you can’t really “see” that tree over there. You can merely experience the brain imaging pattern developed by processing nerve signals triggered by photons bouncing off the tree and hitting your retina. This is hardly a metaphysical distinction.

You mean we think we have the ability to make choices. It has already been demonstrated that our body gets ready to raise a hand before the conscious mind makes the “decision” to do so. We can pretend to vote, but the invisible powers that be have decided the outcome already.

I’ve never been sure why those results are supposed to show we don’t have free will. Our awareness of our decision making is not the same thing as our decision making itself. Those experimental results show that our awareness of our own decision making is delayed. This does not in and of itself mean that the decision making itself can not be free (in whatever sense you like).

-FrL-

That’s true. However, feeling that we consciously make choices doesn’t mean that we actually are making choices consciously.

We clearly make choices at below the conscious level. My dog certainly does. When he was younger and we went for walks, he had definite preferences of where he wanted to go. If I didn’t want to take him there, he would try to head down every possible route to it. Clearly not conscious, but clearly a choice he would have made if he wasn’t on a leash.

Which brings up the question of whether animals have free will. I think if I do he does and vice versa.

Well, whenever this sort of discussion comes up, the examiner is pretty much consistently assumed to have inherent knowledge of all past events, since all such information is at least theoretically available. (And because usually the observer under discussion is God.)

So, using this presumption, even the random events are available for scrutiny, because they are ‘part of the record’ - the past is past and thus, knowable.

That in mind, you can see why I don’t find much to work with in this ‘post-hoc’ stuff. It’s a discussion of an examiner who happens to have incomplete past knowledge, which has little to do with whether the past events are knowable.

The thing to remember here is that the division of the mind into [factors determined by prior mental state and prior external influences]/[random and undetermined factors] is not a discussion about physical brain processes. It’s a purely abstract analytical argument, which would apply to any thinking or analyzing entity. Including any spiritual ones. Any ‘little man driving’ is itself driven by fixed and/or random pressures, and nothing else.

So, positing extra nonphysical entities gives you nothing, because physicality wasn’t part of the fixed+random argument to start with.

“Free will” in discussions like this is often quite incoherent, largely because there’s not a good single definition of the term that everyone intuitively understands. So sometimes it’s like the goalposts start out on wheels.

As noted, this ‘post-hoc’ business is inapplicable to the discussion; the examiner is presumed to have knowledge of any spirits’ prior mental states as well.

Well, it depends on how you define “free will”. :smiley: Clearly, the determined/nondetermined dichotomy doesn’t allow for an entity to have will, if will is defined as “something that is neither determined nor nondetermined”, which is what it usually comes down to in discussions like this, because people seem to tend not to like thinking of themselves as deterministic, but breaking that up with nothing but pure mindless randomity doesn’t make them feel very good either.

By this extremely nonstandard definition, the question is whether I’m a puppet or not. If the theorized “soul” is “inside”, then it need not even be mentioned. Similarly, if the determining moods and inclinations, and the random factors, are all internal to my ‘self’, then they don’t need to be specifically mentioned either. But if I’m controlled by some god or something with some other will that I would not define as “mine”, then I would not have free will. So, for an example, characters in computer games who are avatars of and controlled by the player would have no free will, because they are completely controlled by the player and they don’t have anything resembling a true mind of their own that has the thoughts they purport to have; whereas the bad guys in a computer game perhaps could be described as having free will, because whatever tiny little set of rules and variables controls them could somewhat accurately be described as being their tiny little mind itself.

Like I said, a very non-standard definition.

Well, it’s all about the definitions of the terms. Until you nail those down, it’s like trying to ice-skate on an unfrozen pond - it takes ‘slippery’ to a whole new level.

Eh, I put the whole brain into my concept of “I”, not just the final twitter of awareness. But anyway I think where people get caught up is in misconstruing “free will” as being the opposite of determinism. “Free will” doesn’t mean “a superpower to defy the laws of physics” it just means “the will of this body is not controlled by the will of an external body”. It says nothing about the nature of “will” itself.

First of all, I’m assuming that we can know previous states in a post hoc determination of the causes of an action. What is more interesting is whether we can even theoretically predict an action based on knowledge of a present state.

If we’re puppets, the answer is yes, since we can ask the puppeteer. But that’s not the only possibility. Consider an omniscient god, omniscient in knowing the future. This god may not control us directly. However we can ask this god what the future action of a person is. If God is truly omniscient, and is always right, does the person have free will? He may think he is making choices, but they are constrained by the hidden variable of God’s pre-knowledge of what they are.

The reason for the spiritual/physical dichotomy is that in a purely Newtonian universe we should be able to predict someone’s actions if his brain is purely physical. (And we know the inputs, in other words the environment.) Does this person have free will, given that his actions are constrained?

Now in a non-Newtonian universe, we cannot predict his actions because they will be affected by truly random, and unpredictable, events. In the first case the person thinks he is making a decision, but clearly isn’t, since his decision is predetermined. In the second his action is not predetermined, but how is his decision process any freer than in the first case?

Now, if there is a soul that can make decisions, the soul would be immune from these possibly deterministic physical processes. If we knew anything about the soul, like that it exists, we can consider this case further. I brought it up since it gives the only escape hatch for truly free will. If there is an omniscient god, the soul has the same problem as the human above.

So, we have four cases:
1. Puppet master. No free will.
2. Physically determinable actions. No free will.
3. Chosen actions known in advance by God. Free will or not?
4. Actions not determinable in advance, and not deterministic. This seems to me to be the actual case. Free will or not?

That’s my case 1 above. I think we’re all pretty clear that this isn’t the actual case. But look at the other three cases, none of which involve direct control.

In case 3, if God can predict actions and decisions in advance with omniscient accuracy and certainty, that merely means that all actions and decisions are determined in advance. All actions and decisions - physical, spiritual, those based in the thoughts of ketchup - all actions and decisions are predetermined. Functionally speaking, with regard to free will this is equivalent to case 2, which you have declared not to include free will. So the answer would be the same for case 3. (For that definition of free will, anyway.)

Case 4, as you have actually written it in your list, makes no mention of ‘spirits’ (or ketchup), which is good, because as I have stated in the section of my post you quoted none of that spirit stuff makes any difference anyway. (Nor would ketchup stuff.) Which means that case 4 is just the case where the universe includes random elements, making it impossible to determine its successor state from a prior state. (Without random elements, this is possible).

Whether the unpredictability introduced by adding random elements to the universe counts as free will or not would depend on your definition of free will. Der Trihs would say no, because randomity does not relate to free will as it is understood (he doesn’t think the concept is even coherent). I would say it doesn’t matter because free will as it is commonly understood is not incompatible with determinism, and so can theoretically exist in any of the cases you listed except the puppetmaster scenario. Many people would say yes, the randomity matters, because they’re just defining “free will” to equal “unpredictable”.

Which definition would you prefer we use?