Free will

Ok then, name an event that is not caused by prior events, yet can be controlled by the human mind.

Shakespeare’s plays were a summation of his experience and his ability to arrange themes and phrases that already existed into an aesthetically pleasing form. These themes and phrases developed over years of human evolution out of biological necessity and other predetermined events that shaped Elizabethan society. This brings us all the way to before the genesis of life on Earth, or anywhere else for that matter. Surely free will had no effect on actions before there was life, therefore all events from the Big Bang up until the beginning of life were inevitable. So have I not demonstrated that every event from the Big Bang until Shakespeare set pen to paper was inevitable and therefore predetermined?

I still don’t see the issue with: if you decide to be nice, you reap the rewards of that (including a self pat on the back), just like any other action. I don’t see why goodness needs to be a bolt out of the blue to count for something.

I should also say, I disagree with the concept of no-such-thing-as-a-selfless-act. It would be a hijack to elaborate, I’m just saying this because often on the subject of moral actions people just take it as a given that everyone knows there’s NSTAASA.

Not necessarily luck: perhaps one can view it rather in analogy to a certain skill. I’m with Mijin in that I believe that the idea of ‘could have done otherwise’ in the same situation flat-out makes no sense (and with Mangetout in believing that it’s nevertheless meaningful to talk about decisions and choices, just as it’s meaningful to talk about my arm even though it’s really just a collection of cells), but the philosopher Daniel Dennett, who says more or less the same thing, believes that’s just not the right notion of freedom to apply: rather, we should look at what kind of behaviour emerges in a collection of closely related situations. He uses the example of a golf player to illustrate this: in any given situation, the outcome of a swing is perfectly determined by the situation in which it occurs (modulo quantum effects, which we can neglect to excellent approximation).

Nevertheless, it makes sense to talk about good and bad golf players: while in any given situation, a swing will either reach its target or not, and that’s entirely determined by the relevant causal factors of the situation, across a class of similar situations, the good golf player will make the shot more often than the bad one: that’s his skill. Using a similar metric, it becomes meaningful to talk about morally good and bad persons: those that make the moral decision more often have a higher ‘moral skill’ than those killing the child in 99 out of 100 cases. This, of course, still doesn’t give you ‘free will’ in the naive sense of ‘could have done differently’, but, if one sort of blurs different, but closely related, situations together (as they always are in practice, since we’re never aware of truly all the relevant facts of a situation, and thus will fail to distinguish between a lot of situations that are ‘microscopically’ different), one ends up with a notion of ‘effective’ or ‘approximate’ freedom: there is no choice in any singular situation, but there is, in this sense, a choice across classes of them. And even if one wouldn’t count this as ‘genuine’ free will (few probably would), at least it provides an arguable way to rescue notions of morality, give grounds for moral blame, and provide a justification for punishment, or better yet, moral training to improve one’s moral skill.
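If it helps to see the ‘skill across a class of situations’ idea in concrete terms, here’s a toy sketch (every number and name in it is made up for illustration; it’s not Dennett’s formalism, just one way of reading it): each agent is perfectly deterministic in any one fully specified situation, yet across a blurred class of nearby situations the two agents measurably differ in how often they ‘make the shot’.

```python
import random

# Toy illustration: deterministic outcomes in each single situation,
# but a measurable success rate across a class of similar situations.

def deterministic_outcome(skill: float, situation: float) -> bool:
    """In any one fully specified situation the outcome is fixed:
    success if and only if the situation's difficulty is below the agent's skill."""
    return situation < skill

def success_rate(skill: float, trials: int = 10_000) -> float:
    """Blur many microscopically different situations together and count successes."""
    rng = random.Random(0)  # fixed seed: the whole 'class of situations' is itself deterministic
    return sum(deterministic_outcome(skill, rng.random()) for _ in range(trials)) / trials

print("skilled player:", success_rate(0.9))   # roughly 0.9
print("poor player:  ", success_rate(0.3))    # roughly 0.3
```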

Another way of looking at free will is that a free action is one that cannot be reduced to anything other than you performing this action—that is, each predictive model capable of exactly predicting your course of action given a certain situation will, in some sense, be equivalent to having you simply perform this action. This has its roots in a computational perspective: let’s assume a human being is, to all intents and purposes, a decision-making machine, i.e. something that takes a situation as input and outputs a certain action in response. Now, it’s an elementary theorem in computer science (Rice’s theorem) that the only way to perfectly predict what a given program is going to do given a certain input is to run the program and see what happens (this follows from the undecidability of the halting problem: if you could, say, always determine whether a given program produces the output ‘hello world’ or not, then you could interleave this program with one of which you want to know whether it halts or not; if your ‘hello world’-checker then decides that the program actually prints out ‘hello world’ eventually, then you know that the other program must halt, and vice versa). But this then means that the only way to predict your course of action is to either have you carry out the action, or produce an exact simulation of you that carries it out. In other words, any given action of yours is determined exclusively by you performing it; there is, in general, nothing ‘less’ than the full you that determines what you do.
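To put that reduction in slightly more concrete terms, here’s a minimal sketch in Python. The decider `prints_hello_world` is the assumed-impossible ingredient (no such total function can exist); the wrapper construction and helper names are mine, and for simplicity I assume the target’s own output gets suppressed so only the final print counts. The only point is that if such a checker did exist, deciding halting would be a few lines away.

```python
# Sketch of the reduction: a perfect 'prints hello world' decider would yield a
# halting decider. 'prints_hello_world' is the assumed (impossible) ingredient.

def prints_hello_world(program_source: str) -> bool:
    """Assumed decider: True iff running program_source ever prints 'hello world'.
    By Rice's theorem / the halting problem, no such total function can exist."""
    raise NotImplementedError("no such decider exists; this is the assumption")

def halts(program_source: str) -> bool:
    """Build a wrapper that runs the target to completion (with its own output
    suppressed) and only then prints 'hello world'. The wrapper prints
    'hello world' if and only if the target halts."""
    indented = "\n".join("    " + line for line in program_source.splitlines()) or "    pass"
    wrapper = (
        "import io, contextlib\n"
        "def _target():\n" + indented + "\n"
        "with contextlib.redirect_stdout(io.StringIO()):\n"
        "    _target()\n"
        "print('hello world')\n"
    )
    return prints_hello_world(wrapper)
```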

Nevertheless, this does not provide you with ‘could have done otherwise’-style free will, and furthermore, it has the rather strange consequence that a great many inanimate systems—anything capable of universal computation, which is a lot more things than one would generally think—possess the same kind of ‘freedom’ (however, maybe they may be said to not possess anything identifiable as ‘will’).

For another take on free will from a (quantum) computational perspective, physicist and complexity theorist (and occasional philosopher) Scott Aaronson has recently offered up his views in an interesting article I won’t presume to summarise here.

I think these kinds of things are as close as one can get within a materialist view of the world. Another possibility, of course, would be to turn rather to idealist ideas, i.e. the view that the world is ultimately spirit, or mental, not matter. Then, one could draw up an argument that all that ever happens is ‘free will’ of a sort, i.e. mind as determined by mind, since there’s nothing but mind. But I don’t think this is a very attractive perspective myself.

Agreed, and I don’t think that’s overstating things at all. Only the god hypothesis has done more harm in my view.

Yeah, that’s a good way of putting it.

I think if people saw “could have chosen differently” in those terms, they’d be less likely to confuse fatalism with determinism.

First, your question is a good one, and one that’s been pondered by many of the world’s most brilliant people, and they haven’t come to agreement on the subject. So, anyone who seems to have a concrete definitive answer is really overestimating the value of their opinion.

Before going on, though, I want to nit-pick at your argument above, but in a way that I hope will add to your confusion. It’s a confusing subject, and if you’re not confused, you’re not understanding it very well. :wink:

Your question raises the question of “what is me”. The you that is you can only be you. The you that is you is IMHO something created by your brain, your experiences, your understanding, your sensations, etc. Your body and brain create a subjective experience; that’s the “you” that thinks and feels. You can’t move that experience to some other body & brain.

Furthermore, (and perhaps this is off the track), we all feel a sense of continuity, that the “me” that is “me” is the same “me” as I was yesterday, but with additional memories and lessons learned. IMHO, this is an illusion created by memory. If we were to duplicate you, there would be two “yous”, and both would feel like the “real you”, and both would be. (Our bodies change 90% of our atoms every 10 years, so it’s not our atoms that make us “us”.)

OK, back to free will. One very good theory is that our minds are basically thinking machines, which operate according to the rules of biochemistry (basically, physics and the chemistry of biological systems). Given the same conditions, all identical copies of a machine will behave the same (with some random variations due to physics/chemistry/whatever). Ergo, no free will. Determinism.

That’s a theory. It’s very hard to prove, but I’m pretty confident that it’s essentially correct. However, I think there’s a different point of view that’s important, which sheds a (confusing) light on the free will issue.

Back to the “what is you” thing. The “me” that is “me” is subjectivity. That is, I experience stuff. Blue looks blue, even though there is no blue substance in my brain making it so. The personal point of view is absolutely distinct from the objective point of view.

Let me home in on that a bit. We could study the bejeezus out of the brain, and never know what it looks like to see blue, if we hadn’t seen blue ourselves. Ditto for how pain feels, how pleasure feels, etc. Even if everything about the “mind” (our subjective experience of being a person with a brain) is totally caused by a biological computer (“brain”), there are aspects of the mind that are subjective and not observable by studying it from outside.

The “blue” the scientist sees on his brain imaging and interpretation device is totally different (in perspective) from the “blue” that you the experiment subject sees on the screen in front of you. One causes the other, but the other is something transcendental: something that emerges from the physical reality. (Nothing supernatural going on here, just that “shit happens” and “shit” can be truly amazing, like feeling love, despite it all being made up of boring unfeeling atoms.)

Back to free will. From the objective point of view (assuming the brain-as-computer hypothesis), there is no free will.

But from YOUR point of view, the subjective you, you totally have free will. Want to test that theory? Go ahead, make a choice. Do it for any reason you want. Your sense of free will is as real as the blue you see in the sky. There really is no blue in the sky, there’s really a set of frequencies of light; it’s only blue in your mind and my mind.

That’s my theory.

BTW, there are two sides to the free will question. One is “what is reality” – basically, metaphysics. The other is ethics, or “what should we do”. In any ethical discussion, it’s important to assume that one has free will (for oneself, that is). To do otherwise is fatalism, which has pretty dismal consequences.

In other words, there may be no free will, but even if not, you’re better off being the one who is determined by fate to be successful (in whatever terms you choose), than to be the one who fails. So do whatever you can to be the former. Good luck!

BTW, for people who don’t buy the “brain is a biological computer, and our minds are created by it” hypothesis, the free will issue isn’t a problem. If you were someone else, then well, you’d have that other person’s memories and everything, but there might be some essential “you” – perhaps a supernatural essence of some kind – that adds to the equation, and now that you and your neighbor have switched places, you might behave differently.

I think that’s hogwash (rejecting the brain is a computer hypothesis), but a lot of really smart people disagree with me.

I’m running roughshod over a lot of technical distinctions but I hope you get the idea.

Well, yeah. Not infinitely, but a large number of monkeys. They evolved, by a mixture of randomness and the pattern-formation that follows any energy flow. (I’ve mentioned sand-dunes forming orderly patterns of ripples, when wind passes over fields of sand. The order arises out of chaos because of the flow of energy.)

In due course, a subspecies of monkey got smart enough to use words in a highly sophisticated manner, and one of them took it to the pinnacle of the art-form.

The information did not exist at the time of the Big Bang. The information arose, mostly due to the large amount of energy flowing through earth’s biosphere.

I believe it was inevitable but not predetermined. Normally those are used as essentially synonymous; but I am making a distinction here. (I am reminded of the Demetri Martin joke about how normally “I’m sorry” and “I apologize” mean the same thing, but not at a funeral.) That is, none of the events were avoidable, but due to the random element, they were not all predetermined either. If you had an infinitely powerful computer that had all possible information available to it, and somehow got around the issues with Gödel’s theorem or whatever, you still could not predict what would happen due to yet-to-occur random fluctuations. But what would happen would happen, with no avoiding it.

Agreed. This is what I meant by calling free will an illusion. (Related to this is what I consider the illusion of consciousness, even thornier.)

Jeff, I think you have grappled with this question most admirably; but as this is a debate board, I am going to pick on the parts I disagree with. If you and I are correct about the metaphysical question, which I think we almost certainly are, then it is irrelevant and pointless to make statements like “it is important to assume that one has free will” or to talk about fatalism having “dismal consequences”. What is going to happen is going to happen. Some percentage of people will act as though they have free will, almost surely the majority, and some other percentage will be fatalistic. And some people, like me, will be fatalistic part of the time and then push that out of their minds and act with the illusion of free will the rest of the time.

This strikes me as a secular version of Calvinism, which I have always found strange and paradoxical. One will always, inevitably, “do what they can”, since at any moment there is only one thing (or collection of simultaneous things) they can do.

Not so fast. :slight_smile:

I originally understood your comment to clearly state that the brain cannot be a Turing Machine :trade_mark: because a “Turing machine T cannot express some theorem t, while I can.”

However, more careful reading made it clear to me that you mean that TM1 can make clear statements about TM2 without any fear of paradoxes because nothing self-referential is happening there. Ok, but this is not the point.

The point is that I can clearly understand your logical construction: “you can’t consistently assert the sentence ‘Bremidon cannot consistently assert the truth of this sentence’.” Even though this is self-referential and leads to paradoxes, I can still make truth judgements on it. I can guess. I can understand your point of view. I can creatively take a meta-view on the construct without any prior “programming” from anyone else. A TM would have to be pre-programmed to be able to do any of these things, and as we know, this only moves the goalposts: every time we close any gaps this way, we introduce new gaps at our newly constructed metalevel.

Let me offer a counter-example. If the mind is truly a TM, then it must be possible to introduce information or thought patterns that would cause a “never stopping” state or a “crashed” state. In fact, in most TMs, avoiding these states is pretty darn hard. And somehow, the number of people who suddenly just stop (leaving out biological situations like strokes and death) is remarkably low. I see this as strong evidence – certainly not proof though – that there is something else at work here.

Finally, Penrose made a series of strong cases for why there may be more than TMs out there. Quite a bit of the “consensus” comes from people with a vested interest in believing that the mind is a TM. If I’m drawing research money based on the idea that I can make a human-like AI using current computer technologies that are, in fact, TMs, then of course I am going to be very resistant to any ideas that would doom my project from the start. As for me, I cannot fathom why anyone would want to believe that the mind is nothing more than a TM, as that would negate free will, and relegate our right to existence to the same level as the computer I’m writing this on.

All true, but you can’t assert it. And the argument is, since a Turing machine T can’t assert a theorem t, while I can, my reasoning powers are superior to the Turing machine’s computation. But that’s simply wrong; if that were the case, my reasoning capacities would be superior to yours, because I am able to assert the sentence (and likewise, yours would be superior to mine, because obviously, there is an analogous sentence referring to me). Furthermore, a Turing machine could express a sentence you can’t, and therefore, would be superior again. (A good book on this subject is Torkel Franzén’s ‘Gödel’s Theorem: An Incomplete Guide to its Use and Abuse’.)

Perhaps a simpler way to see this is to note that of course a Turing machine can prove Gödel-type results just as well as a human mathematician can—if all else fails, by exhaustive search, since the proofs are finite in length. This is not a human-unique capacity.
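For what it’s worth, the ‘exhaustive search’ point can be written down almost directly. This is only a schematic sketch: `is_valid_proof` is a stand-in for the (purely mechanical, decidable) proof checker of whatever formal system you fix, not a real theorem prover, and the search may of course run forever when no proof exists.

```python
from itertools import count, product

ALPHABET = "01"  # proofs can be encoded as finite strings over any fixed alphabet

def is_valid_proof(candidate: str, theorem: str) -> bool:
    """Stand-in for the formal system's proof checker: checking a given proof is
    mechanical and decidable, even though finding one may take arbitrarily long."""
    raise NotImplementedError("plug in the proof checker of your chosen formal system")

def search_proof(theorem: str) -> str:
    """Enumerate all finite strings in order of length and return the first one
    the checker accepts; this halts exactly when a proof exists."""
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            candidate = "".join(symbols)
            if is_valid_proof(candidate, theorem):
                return candidate
```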

Now, when you’re saying that the Turing machine could have no ‘understanding’ of the sentence, could not contemplate it or appreciate it the way a human could, then you’re making a different argument—one more in line with the arguments of John Searle (Chinese Room, etc.), for instance (and note that while Searle does not consider it plausible that the human mind is computable, he also considers the Lucas-Penrose argument to be false, so it’s not the case that only people having a vested interest in computational theories of mind reject the argument).

Human minds crash all the time—we brainfart, forget what we were going to get out of the kitchen, can’t recover salient information from the proverbial ‘tip of the tongue’, and hundreds of other similar errors. It’s just that we (usually) have very good recovery mechanisms; but then again, evolution’s had some hundred million years to perfect them.

Well, the existence of free will is completely independent from the question of whether or not the human mind is a hyper-Turing machine. Even in Penrose’s Orch-OR, the collapse of the wave-function is essentially random—but this makes decision making not free, but merely random, as well. The question of whether there is free will does not hinge on determinism or computationalism, but, as Mijin has explained in more detail, on whether the combinations of the concepts of ‘freedom’ and ‘will’ makes any good sense at all, in any possible way the world could be. And so far, nobody seems to have been able to come up with a possible way for the world to be such that this is the case.

Furthermore, it’s not about what I want to believe; it’s about what it makes sense to believe. And according to all currently known physics, there are no hypercomputational aspects to the world—this is the reason why we can use our current theories to make any predictions at all, since a prediction is a finite-length derivation from the theory’s assumptions, which is by definition computational. Think about how a non-computational prediction would look: you’d claim that, say, some fundamental constant k has a certain value x; but when asked how you came up with this, you could not produce an answer, as this would require giving the steps of reasoning you took to get there; but this would mean to produce an algorithm that can be followed by a Turing machine. This would then make the world one in which no science as we use the term would be possible; rather, we’d only have oracular (here in a double sense) predictions, and no means (other than sharing the same non-verbalizable intuitions) to tell valid ones from invalid ones.

Warning: unexpectedly long rambling post ahead. I only meant to give a couple of quick answers, but really got into the swing of things. Feel free to ignore the boring bits.

I will pick that book up ASAP. In the meantime…

You are inadvertently being unfair by comparing two separate scenarios. In Scenario 1, you have TM1 contemplating a self-recursion with TM1. In Scenario 2, you have TM2 contemplating some statement about TM1. That second scenario is, for the purposes of showing how minds overcome paradoxes in the self-referential, irrelevant. In order to compare the two, we would have to ask how TM2 would fare contemplating the same paradoxical statement, but this time about TM2. Incidentally, I kind of gave your example a pass last time. You asserted that the statement must be true from TM2’s point of view. However, TM2 is free to choose whether that statement is true or false; do you see why?

Just as an aside, I’m always fascinated by the preoccupation that almost every book and article has about the self-referential paradoxes that occur where both “true” and “false” cannot apply. They almost always miss the other side of the coin, where both “true” and “false” are equally valid. Usually in that case, the answer is given as “true”, and then on to the next point. They always ask about the Set of all Sets that do not include themselves, but rarely consider the Set of all Sets that do include themselves.

Certainly, to a degree. Each solution introduces a new Gödel-type problem though, and I assert without proof that a finite TM must eventually fail to handle the problem at some level. Accepting this (which I can’t reasonably expect you to do, because I can offer no proof of my assertion and can only appeal to your intuition), we would have to either accept that the mind is some sort of infinite TM, which I find to be implausible, or that thinking about these problems should possibly cause our minds to crash.

Any time we try delving into these themes, I suppose one question is bound to arise: “can a machine ‘feel’ / ‘understand’?” I’m confident that this question is unanswerable at this time. Certainly if we were able to prove that the mind is a TM, then we would have a conclusive answer. If we were able to prove that the mind is hypercomputable, we would not have a conclusive answer, but it would certainly start trending towards “no.”

Yes, Searle is one of my influences, and it pains me that two of my major influences in this area don’t seem to agree with each other. Keeps things interesting, anyway. However, I did not claim that all of Penrose’s critics (on this theme) are arguing from entrenched self-interested positions; I only argue that many are protecting their own hard-won insights against an idea that would call entire careers into question.

That’s not what I meant! A simple error is not a crash and is certainly not an infinite loop. Obviously we can recover from simple errors; that is self-evident. What I am suggesting is that it should be possible to set things up so that we get the equivalent of a “Blue Screen of Death” (without meaning real biological death). I’m not entirely certain that I buy the evolution argument for one simple (and rather snarky, I admit) reason: Nature was rather far-sighted when it gave us the ability not to crash out when dealing with Gödel’s insights! That’s a pretty far cry from being able to count oxen, or decide which cave is safest. From an evolutionary standpoint, when Gödel first realized what his insights meant and started following them layer by layer, someone should have found him drooling at his desk, staring out into nothingness, waiting for some other Power to hit Ctrl-Alt-Delete.

It is equally unclear why it should be necessary for us to feel like we have free will in order for the world as we know it to exist. Using Occam’s Razor, if free will is not necessary, then feeling like we have free will is certainly not necessary. To believe otherwise is to accept a complicated shadowplay where creatures who have no say in the matter and are already doomed to follow a deterministic path to the end are fooled by an elaborate fakeout. I don’t try to fool my computer into believing it has a free will in order to start Excel; it’s just not necessary.

This is a bit of circular logic, although I suspect you already know this. We use computational methods to form a scientific explanation of our reality, so we cannot be surprised that the explanation involves computational methods! Still, it’s a serious point, and I’ll see if I can figure out where we agree on things.

I agree that computational methods have been exceedingly useful in understanding reality.

I agree that the current scientific community is not at present seriously looking for hypercomputational methods.

I agree that using “hypercomputational” is merely a placeholder for saying: there’s something out there we don’t understand. If we knew what we were looking for here, we would have already found it.

Now my counterpoints:

There are plenty of bits in science where our present understanding using computational methods is exceedingly weak. We can’t get our two best theories of the universe to agree with each other. The best attempt so far forces us to concede a ton of dimensions that we cannot see. We still have no real plan about what the math behind QM means. Or why we don’t seem to experience it in the “real” world.

Speaking of our two best theories, one of them more or less demands a completely unchanging static spacetime universe where all events for all time in all places are set in some kind of 4-dimensional stone. The other demands that at the base of all things almost pure randomness rules. Both of these theories contradict our everyday experiences, and yet we science lovers have no trouble accepting them. If pressed, most of us would simply wave our hands and say that one is for the big and the other is for the small, and don’t pay attention to that man behind the event horizon.

Yes, the predictions by both of these theories are incredible and very useful. And yes, both use computational methods to achieve these feats. Combining them is the problem. If we are prepared to add hidden dimensions that we cannot detect to make them work, why not look for hypercomputational methods to make them work?

Or take Dark Matter and Dark Energy. We accept both, because without them, we can’t explain why galaxies spin like they do, or why everything seems to be expanding faster and faster. But honestly, we’ve never detected either. We’ve just plootched down some placeholders into our theories, and hope that someone down the line can figure out what they mean. If we’re willing to do that, why not take a look at our basic assumption that TM is all that we’ve got?

Hypercomputability does not in any way reduce the power of computational methods, any more than Einstein’s theories reduced the power of Newton’s theories. For almost all applications, Newton’s formulas are sufficient, and their ease of use even makes them superior to trying to mindbend around Einstein’s more general formulas. Likewise, we do not have to throw out our computational methods if and when we “discover” hypercomputability. I don’t see why predictability has to be thrown out for when the next solar eclipse happens, even if we discover that our minds have access to some sort of ability for which we currently have nothing but a possible placeholder.

Finally, I would like to agree with you again. As things stand, hypercomputability is about a halfstep up from woo. I think that there’s enough hints to say that there’s something more than TM to the mind, but definitely nothing conclusive. It’s something that I’ve latched onto, because it impresses the intellectual girls at parties and lets me sleep at night without freaking out about just being a machine.

I shall answer you in rambling kind:

Hmm, either I’m not following how you divide the scenarios up, or you’ve misunderstood my point. The original LP argument asserts that since there are statements which a universal TM can neither prove true nor false, yet we can see their truth, human reasoning is superior to TM computation. But such statements exist just as well for humans; I’ve given you an example. In both cases, you have two ‘theorem provers’, which you try to compare.

I’m not exactly sure what or whom you’re denoting by TM2, but it’s of course well known that for any formal system, extending it with either the Gödel statement itself or its negation yields again a consistent formal system (provided the original one was consistent).

But that’s not true: you can give an explicit mechanical procedure that, given a formal system F, constructs a Gödel sentence for F.
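One way to see that the self-reference really is mechanical: the same diagonal trick that yields a Gödel sentence also lets a program talk about its own source, with no ‘understanding’ involved. The standard two-line Python quine below builds its own description purely by substitution; it’s only an analogy for the diagonal lemma, not the Gödel construction itself.

```python
# A program that prints its own source code: self-reference produced mechanically,
# by substituting a description of the template into the template itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```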

I’d question even that assertion: I’d be surprised if you could find a mathematical logician that buys the LP argument, regardless of his views on the computability of mind.

But Turing machines also don’t crash because they’re confronted with a Gödel type problem. And ‘going to the kitchen and forgetting the reason why’ can very plausibly be construed as the crash of the sub-routine that instructed you to go in the kitchen in the first place. Naively, any error in a program causes a ‘crash’: a failure to function in the intended way, an infinite loop just being a specific example of such a thing, which is generally not too difficult to deal with; when an instance of the browser engine in Chrome crashes, it doesn’t take the whole system with it, but is instead intercepted safely.

I don’t think that follows. Feeling like we have free will is certainly constitutive of our actions; just like I do different things depending on whether I’m hungry or not, I may act differently depending on whether I feel like I have free will or not. Feeling like I didn’t have free will might, for instance, induce a condition of fatalism and pessimistic inaction, which certainly would not be desirable, either from an evolutionary or cultural perspective.

Besides, in almost all respects, we don’t feel like we have free will. When I make a choice, I do so (or at least, I tell myself that’s the case) because of weighing different options, and taking that option that comes out the winner. This is simply deterministic: given the options and my preferences, the choice is necessary, or random, should it be underdetermined. Further, a great many acts I perform, I do not seem to have any controlled involvement with at all: breathing, the detailed way I cross my legs while sitting, the way my fingers move across the keyboard typing this, scratching some itch I felt, and so on. The same goes for the thoughts I think and the ideas I have: I have never decided on thinking a certain thought. This is reflected in the way we talk about these things: thoughts occur to me, ideas come to me, and so on. In fact, I’m having trouble pointing out instances of doing something because of my ‘free will’: I often do things because I want to, but wanting to is just a state of the mind determining what I will do, and itself determined by other factors—my personality, immediate surroundings, past experiences, and so on; or otherwise, perhaps random. As I said, it seems difficult here to conceive of an alternative to these options.

Then there’s also always the threat of the infinite regress looming: even if I were to say that I do what I do because I will it, my will has a certain content, and must be determined; if now will is the sole cause of my willing to do something, then will must determine will, since willing something is a mental action as much as any other. But then we have to play the game ad infinitum, as otherwise, will would be determined by something else, and hence, not free.

But it’s possible to use computational methods to find out that something does not conform to them; that’s basically the essence of Gödel’s theorem: while itself not being non-computational, it establishes that the class of non-computational mathematical statements is non-empty. Using computational methods does not entail being able to only find computational facts.

I wouldn’t agree with this: there’s a very well established theory of hypercomputation, going back to Turing. The simplest kind of hypercomputer is one equipped with an oracle, which can be used to decide the halting problem of an arbitrary Turing machine. This can solve all the problems a Turing machine can, plus those reducible to the halting problem; however, it is itself vulnerable to the diagonal argument used to establish the undecidability of the halting problem, so it cannot in general decide whether machines of its class halt or not. To go around this, you can introduce another oracle able to answer this ‘second order’ halting problem, and iterate to the next level; this way, you end up constructing what’s known as the arithmetic hierarchy. Hypercomputation is thus pretty well understood (and a lot of work has gone into elaborating its theory, as well as finding possible candidates to physically realize machines capable of hypercomputation).
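As a rough sketch of how that hierarchy is usually set up (all names and signatures here are hypothetical, just to fix ideas): model an oracle machine as ordinary code that is handed the oracle as a callable. The familiar diagonal argument then goes through unchanged one level up, which is why each new oracle only buys you one more rung of the hierarchy.

```python
from typing import Callable

# Hypothetical sketch: an 'oracle machine' is ordinary code handed an oracle
# (here, a claimed decider for the ordinary halting problem) as a callable.

HaltingOracle = Callable[[str], bool]  # source of an ordinary TM -> does it halt?

def runs_forever(source: str, oracle: HaltingOracle) -> bool:
    """A problem reducible to the halting problem: trivial once the oracle is available."""
    return not oracle(source)

def diagonal(own_source: str, own_level_halts: Callable[[str], bool]) -> None:
    """The usual diagonal construction, one level up: if 'own_level_halts' claimed
    to decide halting for oracle machines of this same class, this machine does
    the opposite of whatever it predicts about itself."""
    if own_level_halts(own_source):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop -> halt immediately
```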

But using our present lack of understanding to argue for the impossibility of achieving it (within the currently used computational methods) is just arguing from ignorance; the most you can establish is simply that we don’t know yet whether our methods will suffice to quantize gravity (I think the indications are very good that they do, by the way), or solve any other problem, which is precisely neutral regarding the adequacy of current methodology.

Of course we have: the realization that galactic rotation curves don’t fit expectations, and that the universe’s expansion accelerates, just is the detection of dark matter and dark energy. We simply don’t yet know what they are, but that’s an important difference.

That’s understandable (though you and me seem to attend different parties—unfortunately!), but to me, it’s also a bit of the easy way out: finding something that just might fill the need for some ‘extra’ meaning and purpose. I think the real challenge is to find this meaning and purpose within the world as it presents itself to us, not in those parts we haven’t grasped yet in the hope that there might be something there that could account for the ‘divine spark’ (I think) everybody finds themselves longing for now and then. Because the way things are looking, whether we like it or not, there’s every indication that this spark just isn’t there; the gaps are getting fewer and smaller, and no plausible candidate has yet turned up. Luckily, I think that there’s every possibility to live a fulfilled life without it, though it might be harder; ultimately, I think it’ll also be more rewarding.

There is no evidence that a Turing machine couldn’t posit a self-referential statement in the same way a person can. There is no evidence that a Turing machine can’t be programmed to do anything a mind can.

The way we normally program a Turing machine to represent a logical statement is using a formal logical system encoded in the program. That’s entirely different from the way a human brain encodes a logical statement. We know very little about the latter, except that it’s unlikely to be much like the way we happen to do it using a TM (which is generally the simplest way we can think of).

Furthermore, the brain isn’t just one TM. It’s better represented as a huge number of interconnected parallel TMs. While that’s equivalent from the standpoint of computability, it’s not the same from the standpoint of reliability and redundancy. Therefore, any simple statements we can make about the reliability of TMs doesn’t apply in any simple and direct way to the human brain.

We have no good evidence either way to resolve the question of whether the human mind could be implemented by a computer. All we have is the question, “what else could it be?” That’s not a good scientific answer, though that kind of question has certainly driven a lot of good science, in all sorts of fields (regardless of whether the answer is “Oh, it’s X, which we didn’t know about” or “no new phenomena are required”.)

This is a big question. There’s equally no evidence that a Turing Machine could posit a self-referential statement in the same way a person can, or that a TM can be programmed to do anything a mind can. My original reference to Penrose was meant to show that there are intriguing hints that a TM is not enough to explain what’s going on, but it’s certainly not enough to conclusively answer the question.

As you pointed out, interconnected TMs are basically just one big TM. There’s no need to differentiate unless you want to get into implementation details. The interconnectedness is equally irrelevant for questions of reliability and redundancy, as a single TM can incorporate these things just as well as a network of TMs on a theoretical basis. Practically speaking, things are completely different, of course. Equally irrelevant for the discussion of the applicability of TMs to minds is how neurons store information. Which, strictly speaking, we’re not all that sure about anyway.

So we can make statements regarding the applicability of TM to minds using the single TM model.

I’m not following you. Could you rephrase your example so that I can see how TM1 relates to TM2?

Precisely. It may be that my example is not the same as yours, but you nailed it anyway. I still think it’s interesting that most books handling this tend to focus on the contradiction formulation.

I was pointing out that we can plug the hole, but by doing so, we’d have to construct an expanded formal system with its own Gödel sentence. And on and on. I have no problems conceptualizing this, but would be hard pressed to formalize it…I’m not even certain it’s possible.

Maybe. I don’t know. What I do know is that my impression is that the most virulent opposition comes from those who have something to lose. Not really surprising but worth keeping in mind.

No fair! Of course a TM can include check routines and self-protection routines, but there’s no way to generalize this to catch everything, unless we include an Oracle, and that only pushes the problem around a bit. I stand by my assertion that it’s awfully lucky that evolution has prepared precisely those check routines for self-referential problems, even though there’s not a whole lot of evolutionary pressure there. Unless we want to get jokey, and postulate that some early mind variations did freeze out when doing Gödel-type analysis in caves, and we are the descendants of those who did not freeze out.

You are skating around my point. You are saying that free will must be faked so that we decide to do the right (i.e. evolutionarily advantageous) thing. But if we decide, then the free will is not faked. It would just be a lot easier for everyone involved if we dropped all that emotion and decision and free will stuff, and just followed along some particularly clever algorithm.

I agree with you that the whole idea of free will is slippery. If we’re prepared to throw in some fake free will (I am not, as stated above), then it’s certainly possible to argue that any decision I make is based on choosing the best alternative given the information I have. Of course, this is assuming that we have a TM type of state that we’re working from, so that any decision we make will always be identical regardless of how many times we make that decision with that starting state. As I stated above, I think that it requires a bit too much handwaving to throw in fake free will.

I’m not sure that I follow. This may be because it’s 3am here, but I’m pretty sure that although we can show that there are true non-computational statements, we cannot actually compute which ones they are without forming a new extended formal system.

This is not how I mean hypercomputability, although I am aware of the history of the term. I am just trying to avoid using “soul” as a placeholder for obvious reasons.

I was pointing out that it’s odd we’re prepared to accept the idea of many dimensions we cannot see and may never be able to see, and yet we’re a bit more shy about questioning whether our methodology may need revising. This is not a perfect fit, but it kind of reminds me of Ptolemy’s epicycles. Yes, we can patch things up in our present understanding so that everything fits our observations, but at the moment things seem to be getting more complicated rather than simpler, so maybe it’s time to take a closer look at the basics.

Correct, and from my viewpoint perfectly valid. Throwing in a placeholder because observations don’t quite fit expectations is perfectly normal, even if the placeholder eventually disappears. This is how I feel about the extension of TM as to the observations of the mind.

Yeah, unless you get out to Germany. Although I do seem to get around to very different kinds of parties; I somehow ended up at a biker party on Friday to listen to some friends of mine who did a few sets. As for things getting narrower for that ‘spark’, I’m not worried about that. It’s barely over 100 years since science was declared to be all wrapped up except for a few details.

What I can’t fathom is how the desire to believe something is relevant. I have said that I find the “no free will” thing kind of depressing, so ordinarily I avoid thinking about it too much. But I don’t convince myself it’s wrong. Same goes (and this I encounter much more often) with religion and especially the afterlife. “Why would you want to think that when you’re dead, you’re dead?” “Uhhh…well, eternal happiness and being reunited with dead loved ones up in the clouds after I die would be awesome and all, but that doesn’t help me to actually, you know, believe it to be true.”

But “some other power” does hit it: the sensations coming in from his senses! Getting hungry, thirsty, his posterior getting sore from sitting in the same position, etc. Or maybe it’s just a wife or colleague asking if he wants to go out for lunch.

But lots of things are not necessary, and still evolved. Evolutionists call them spandrels IIRC.

This is true too–good point. Your interlocutor even said he needed to feel like he had free will to get out of bed in the morning (or something along those lines).

The times I find myself most closely approximating something like free will are when I am vacillating over whether to, for instance, bring along an umbrella on a day when precipitation is considered 20 percent likely. I have at times done something like start to walk out the door, then go back and pick up the umbrella, then halfway back to the door I stop and turn around to put it back. Clearly I am torn and it is a very close decision, and I change my mind a couple times without new information. I am deciding. I am WILLING MY DESTINY. But in fact, way deep down I still think that is just a dance of atoms in my brain that I have no control over.

Hmm, I may misunderstand you, but that’s actually been done. (Just google ‘automated proof of Gödel’s theorem’ or something similar and you’ll get numerous results like this one.)

Well, that’s the problem: I don’t have a TM1 and TM2. In the LP argument, you have Turing machines and humans, and attempt to establish that Turing machines face a limitation humans don’t. I’ve just pointed out that the same apparent limitation exists between humans and other humans (or conversely, between humans and TMs).

I don’t see that it had to: ordinary Turing machines don’t crash because of encountering self reference or logical paradoxes (this isn’t Star Trek, and Gödel’s not Captain Kirk!).

No, free will and decision making are not the same things. Decisions are made (or may be made) by drawing a conclusion from a set of premises; given the premises, the conclusion, and hence, decision is necessary. I merely pointed out that ‘believing in free will’ may be a factor in such decisions.

You alleged that the apparent computability of the world may be an artefact of the computational nature of the methods we use; but it’s precisely Gödel’s theorem that shows this not to be the case, since it’s arrived at using computational methods, while establishing the existence of non-computational statements.

But if I’m not mistaken, it’s how Penrose uses the term, no?

That’s a bad analogy: the additional dimensions are a prediction of a certain (highly speculative) theory (actually, they’re more a condition for the consistency of that theory), while nothing predicts hypercomputation (of any sort).

Which funnily enough happens to be just where I’m at. (The only biker party I’ve been to a couple of times is always on the first weekend of August, though.)

Apologies to Slacker…with these long posts, I really only have time to answer one of you, so I hope my answers to Half will suffice. (Although I would like to point out that I can get up in the morning just fine…I just may be a bit tired from obsessively running these ideas in my head all night :wink: It’s just easier to believe there’s just a bit more there than Intel Inside.)

Ok, I’ll do that. Although this only works if you start with the assumption that the mind is a TM. Otherwise my statement stands.

But a human doesn’t have this limitation. We can easily hold an unprovable concept in our minds and even make predictions based on it. If we run into some sort of computability problem, we just jostle around the rules a bit to make it work. It’s tough for me to see how any sort of TM could be generally productively creative in this way.

The only reason they don’t crash or hang is if they have something to catch that case…otherwise that’s exactly what would happen. It would not be hard to put in a specific catch, or even something that catches some more abstract category of cases, but we know that a generic catch is not possible.

Precisely. Decisions do not require free will or even fake free will. Occam’s Razor demands that we cut it out then, or explain why it’s there if not for decisions.

Ah. Well it looks like we agree here, but I am not exactly sure what point we were originally driving at.

Possibly, but it’s not exactly what I mean. I interpret his argument to mean that there’s some form of “logic” that we don’t understand. By this new “logic” I mean something akin maybe to an Oracle, but maybe not. I don’t really have an answer for you here. Of course, if I did, I’d either be polishing my shelf for that Nobel Prize or I’d be running my own religion by now :slight_smile:

Not quite true. Those additional dimensions are not predicted by the theory…it just turns out that the only way to get the theory to work is to start with the assumption that there are other dimensions.

I hang around Berlin, where are you at?

Huh? You claimed that Turing machines could not produce Gödel type statements, when actually, they can. Did I misread you?

The argument (as I understand it) is that a Turing machine can’t assert the truth of the Gödel sentence, when a human being can. But there are statements that any given human similarly can’t assert, while another human—or a Turing machine—can easily show it to be true.

Why do you believe a Turing machine would crash upon encountering a Gödelian statement? I don’t see any reason for that.

But believing in free will may simply be beneficial to an organism, even if there’s no ‘true’ free will—as I said, otherwise, one may just lapse into fatalism. And again, at least on my part, there are precious few actions that I would claim to have done ‘freely’; rather, I can generally name reasons for my actions.

I said that so far as we know, the world appears wholly computable, which you alleged may be due to the methods we use being computational.

Well, the extra dimensions pop up in order to cancel a certain anomaly (the failure of a classical symmetry to be preserved upon quantization, in this case the conformal symmetry on the string’s worldsheet). It’s a prediction at least in the sense that ‘if string theory is true, then the world must be ten dimensional’. Actually, even on the classical level, you can’t formulate a superstring lagrangian in arbitrary dimensions, being limited to dimensions 3, 4, 6, and 10; the 10D case is then picked out by quantization. Of course, this is exactly what you’d want from a candidate ‘fundamental’ theory: to pick out the characteristics of the world with necessity, such that things could not have been otherwise. Sadly, string theory is otherwise very much lacking in that department (and obviously things would have been much nicer if it’d predicted the number of dimensions we actually observe).

(The above mainly holds for so-called ‘critical’ string theory. You can formulate ‘non-critical’ string theories in pretty much any dimension, but they’re not generally physically attractive—you lose Lorentz invariance, I think. But I think there have been some advances in this field in the past years; I haven’t really kept up with the development, though.)

Düsseldorf, but my sister lives near Berlin, so I’m there a couple of times per year usually.