Free Will - Does it exist?

Note that I think there was a misunderstanding: the term “evaluation engine” was used in the sense of “that which performs evaluations”, not as “what standards does one use in evaluations”. In other words, it raises the spectre of Russell and Gödel – turning the thing doing the evaluation back on itself.

I guess I don’t see the problem. We know *a priori* the rules of arithmetic which justify the claim that 2 + 2 = 4. If that isn’t good enough, then you can always take the Peano axioms, supplement them with a definition for addition, and you get 2 + 2 = 4. If you want to justify that, ask a mathematician.
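For what it’s worth, the derivation really is that short. Writing 2 = S(S(0)) and 4 = S(S(S(S(0)))), and using the usual recursive definition of addition (a + 0 = a; a + S(b) = S(a + b)):

```latex
\begin{align*}
2 + 2 &= S(S(0)) + S(S(0)) \\
      &= S\bigl(S(S(0)) + S(0)\bigr) && \text{by } a + S(b) = S(a + b) \\
      &= S\bigl(S(S(S(0)) + 0)\bigr) && \text{by } a + S(b) = S(a + b) \\
      &= S(S(S(S(0))))               && \text{by } a + 0 = a \\
      &= 4.
\end{align*}
```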

I guess I don’t see how the involuntariness of belief bears on any of this. A(n epistemically) well-functioning person forms beliefs on the basis of the best evidence, etc., even if this belief-formation isn’t voluntary. How does it help matters if belief-formation is voluntary? If I am trying to decide between incompatible propositions, I should still decide to believe based on which is supported by the best evidence. The result is the same in either case (barring irrationality, which is not ruled out by any theory of belief formation).

As for how we can know that our evaluations are correct, or whatever, that is a skeptical problem that would arise even if belief-formation were voluntary: how do we know we are capable of accurately evaluating the evidence when choosing our beliefs?

I guess I don’t know what else to say. You may think me dense, but if the above doesn’t address your question, then I’m afraid I don’t know what your question is, and I need it explained again.

Hello mwbrooks,

I’d like to explore this argument with you just a bit. (We can set aside for the moment that it is a fallacious appeal to consequences.) Isn’t someone also giving up whatever free will they had if they become persuaded by any argument no matter the topic? Is there something about free will arguments that is especially dangerous to free will?

I used to be a hard-core believer in free will. It was part of my whole angry young man complex. But after some serious evaluation of the logical arguments, putting aside my emotional reactions, and fitting it in with the rest of my science-based understanding of the Cosmos, I became convinced that we experience only an illusion of libertarian free will. I am not aware of any diminishment of that experience in the years since. Do you think I may be fooling myself in this regard?

What is it you fear your life would be like if you were to become convinced that libertarian free will doesn’t exist?

PC

Thanks. I’m glad I’m not the only one who had this reaction to mwbrooks’s post.

No.

2+2=4

but 2+2+x=a different result.

Certainly. But you have to make choices, don’t you? Even if your choices are predetermined by your experience and your brain chemistry, they aren’t all easy, are they? And every choice you make affects your brain chemistry. It becomes part of the history that gets rolled up into the next choice.

What I’m saying is who you are, or more to the point, who you think you are, is not who you have to be in the future. Every choice you have to make is an opportunity to be someone a little different than who you thought you were. And every time you make a choice in that direction, it becomes easier.

I realize the irony here. By heeding my argument, you “give in” just as much as if you give in to the folks who say you are what you are and tough luck. I don’t think I’m arguing that “libertarian free will” exists. I’m perfectly willing to admit that I’m injecting thoughts into your head that could affect your actions in a totally deterministic way. Sorry about that.

But I like to think I’m arguing for your part. Throw it in the mix when you make a decision–will this action help you be the person you want to be? Or will you just be fulfilling an expectation that’s been laid on you by your parents, school, peers, or whatever? I think one will might feel a little freer than the other, that’s all.

Leaving aside whether this is indeed the case, is that really a justification? If, as per the current doctrine of physicalism, my fundamental intuition of self is supposedly illusory, why not my a priori intuitions about quantity as well?

But with involuntary beliefs, you aren’t deciding anything.

It’s as self-evident as the sense of self. Maybe there’s also a case of incoherence to be made*, but the question is whether one should believe NFW is the case. If one believes NFW, then one also has to believe that they arrived at that belief involuntarily. There’s no reasoning involved on their part, hence it’s irrational.

*Assuming free will, being skeptical over one’s reasoning faculties is incoherent, because recursion strikes again. If my reasoning is no good, how reliable is my path to skepticism?

So, is that a yes or a no?

When you said…

Were you assuming that one’s own thinking is always right, or advocating that one should stick to one’s own thinking no matter if it’s right or wrong?

Quite the contrary. I’m saying you can be right more often and think better if you take ownership of your decisions. Sure, maybe your decision is preordained in some molecular sense, but you still can’t know, and nobody can tell you, what your decision will be until you make it.

I’m saying that the reality of fate doesn’t mean you should adopt a fatalistic attitude. I mean, if you’re happy with who you are, if you’re content with your life and satisfied that you’re making all the right decisions, well, fate’s been pretty good to you.

But people aren’t always happy with themselves. They make a few bad choices, and then somehow bad choices get to be a habit. The danger at that point is they’ll let themselves be told, or conclude on their own, that they’re just bad, or unlucky, or dumb, or whatever, and continue making their habitual choice because it feels easy, and anyway, it’s their fate, isn’t it?

I’m just saying you don’t have to always make the habitual choice. Every choice you make is another chance to think a little harder, gather a little more information, mix up that damned brain chemistry some and see if it won’t give you a better answer this time.

Even if it’s only the illusion of free will, even if it’s all just an equation, stay in the equation. Don’t just sit back and watch.

I believe that technically there is a you, and a me. If anything, that there’s more to you and me and everyone than we think, in that a lot is going on behind the scenes.

Imagine an empty room. The shape of the room, the size of the doorway, and so on represent our biology, our initial setup. As we go through life, bouncing balls might fly through the doorway or a window gap; depending on our room, they might just bounce off of the walls. And when we start getting a lot of them, bouncing around in the room, they themselves might stop others getting in. And of course depending on the room setup they might not be in there for very long.

Despite the fact that we don’t have any control over which balls (the personality quirks, the things that make us us) make it in, I would say that the room and what it contains is analogous to “I”. We might be deterministic machines, but if we aren’t totally aware of the room and its contents, then we can’t tell that. But we’re still “I”s.

I think the stumbling point with FW discussions is our own self awareness. That, and a deeply seated fear of accidentally discovering that we are, in fact, more or less meaningless.

I’m not up enough on the theory to do this myself, but it should be possible to show through formal logic (e.g., Gödel’s incompleteness theorem arguments) that it is impossible for a person to prove whether or not he himself has free will with no reference to an outside system.

However, one logical system can certainly give a correct answer about another; just as if I write a computer program to solve a mathematical problem. The program has no free will, it gives a result that is inevitable given the inputs, but the answer is still meaningful in that it states some sort of truth that was not as accessible before.
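To illustrate that point (my own made-up sketch, not anything from the thread), here is a trivially deterministic Python program whose output is inevitable given its input, yet still states a truth that was not accessible beforehand:

```python
def is_prime(n: int) -> bool:
    """Trial division. The result is inevitable given the input:
    the program deliberates nothing and chooses nothing."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Fully determined, yet the answer is meaningful: it states a
# mathematical truth we did not know before running the program.
print(is_prime(2**13 - 1))  # 8191 is prime, so this prints True
```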

It should be possible to determine if some arbitrary entity X has free will or not by careful study; but never if the “studying organism” and X are the same. If X is another person, I can at best generalize to say that I probably don’t have free will either since X and I are so similar.

Well who was suggesting such a thing?

I don’t think that’s quite right. The incompleteness theorem says that any formal system strong enough to express basic arithmetic cannot be both consistent and complete. Thus, to make the claim, it must be assumed that (1) “consciousness” (for lack of a better term), including free will, can be represented as such a formal system and (2) that “consciousness” must be both consistent and complete.

I’m not passing judgment on either of those assumptions, just pointing out that they’re assumptions (and a pretty high bar to overcome).

That’s not quite right either: see the halting problem. It would be more accurate to say that one logical system that is more powerful than another may be able to yield a correct answer about the other, but it’s obviously a point of contention as to whether such a system is possible (relative to consciousness).
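To make the “more powerful system” point concrete, here is a minimal Python sketch (my own illustration, with all names invented): a Turing-complete analyzer can decide halting for any deterministic process whose state space is finite, because such a process must either reach a halt state or revisit a state and loop forever.

```python
def halts(step, start, halt_states):
    """Decide halting for a deterministic process over a finite
    state space: if a state repeats before a halt state is reached,
    the process is provably stuck in an infinite loop."""
    seen = set()
    state = start
    while state not in halt_states:
        if state in seen:
            return False  # revisited a state: guaranteed non-termination
        seen.add(state)
        state = step(state)
    return True

# s -> (2*s) % 10 starting from 5 reaches 0 immediately: halts.
print(halts(lambda s: (2 * s) % 10, 5, {0}))  # True
# From 1 it cycles 1, 2, 4, 8, 6, 2, ... and never reaches 0.
print(halts(lambda s: (2 * s) % 10, 1, {0}))  # False
```

The analyzer succeeds only because the analyzed system is strictly weaker (finite-state) than the analyzer itself; for object programs as powerful as the analyzer, the halting problem rules this out, which is exactly the point of contention above.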

Sure - I mean, it’s the hideous complexity of the whole thing that prevents us just simply measuring whether there is, or is not, free will.

I guess I was. At least, I’m suggesting that there are people who do that (sit back and watch). I doubt anybody who frequents this place would. The whole forum reeks of piss and vinegar.

I guess I went that direction because somebody asked what I fear about determinism. That’s what I fear. That people might misinterpret the notion that free will doesn’t exist and decide that they have no choice in how they behave.

I think there’s a very real possibility of that - if you’re told that you have no control over your actions, at all, then it’s quite easy to conclude that you also have no responsibility for them, or that nothing you do actually matters, because you have no choice about doing it.
It’s also possible that someone confronted with the idea might come up with an ‘I refute it thus!’ response that involves some kind of harm, either to themselves or others.

Not saying we should sugar coat the truth (if we could even be sure we had the truth in our grasp), but appealing to consequences is not necessarily always a fallacious response.

I don’t know why physicalism has to say the self is an illusion. The physicalist merely says that the non-physical self doesn’t exist. But that is hardly a universal ‘intuition.’ Many cultures through the years (including early Christianity) were materialists; it was only the unfortunate influence of Platonism that made Christianity dualistic. So I don’t think that the (admittedly common) belief in dualism among Americans is on par with our intuitions about arithmetic.

Sorry; you misread what I was saying. I was saying that if belief is voluntary, then you ought to decide based on the evidence. My point was that either way, your belief is (ideally) a response to the best evidence, so it’s not clear there is an important distinction to be made between cases of voluntary and involuntary belief formation.

I’m going to divide this response into two posts.

Any attempt to justify the reliability of our reasoning apparatus (even if it is by appeal to ‘self-evidence’) will be circular; the voluntariness of belief doesn’t help here at all. Besides, I reject your contention that there is no reasoning if belief is involuntary. I have already argued that if there is no free will, there can still be deliberation, and deliberation serves a point. So does reasoning–it allows us to form true beliefs about the world which can then form the basis of action. I would argue that there would be strong evolutionary pressures to create a cognitive system capable of accurately representing and reasoning about the world; for false representations of the world, and an inability to reason (which is so essential to the development of everything from tools to complex societies), would surely be disadvantageous from an evolutionary standpoint. Of course, the whole process is determined, but as I argued above, having an involuntary belief *caused* by the best evidence and voluntarily *choosing* a belief based on the best evidence results in the same thing: a belief based on the best evidence.

Well, I think the argument for the unreliability of one’s reasoning faculties would take the form of a reductio. “Reasoners accept that reasoning is reliable. But using reasoning, we can show that we should not accept reasoning! Therefore, reason is self-defeating.”

All accounts of physicalism that I’ve encountered reduce the self to other entities, e.g., emergence from brain processes. So, per physicalism, the self does not really exist in the way the “elementary simples” do; instead, it’s a placeholder term like “university”.

Self-evidence is not justification; it’s more like observation. You see a green apple, and if I ask how you know you’re seeing a green apple, there’s no justification involved.

How? Per NFW, your belief that “there can still be deliberation” is involuntary. Maybe it’s true, maybe it isn’t.

All of the above are involuntary beliefs. We all know that bored green turtles made us this way between commercial breaks because surely that’s their favorite pastime. :smiley:

Yes, it would result in the same thing, but with that whole involuntary rider, whether indeed the best evidence is causing beliefs can’t be determined.

How is it shown that we should not accept reasoning?

I can’t respond to everything that is here now, but here is a quick thought.

Self-evidence is the clearest example we have of involuntary belief. If something is self-evident, you can’t help but believe it. So if you can help yourself to this sort of justification, surely I can too! That dissolves many of your further objections about how I know the reliability of reason.