Free Will - Does it exist?

One point I had hoped to make in that thread is that it is sensible to use those words - but only subjectively. Your decision to assign responsibility to the nail for your flat tire doesn’t make the nail objectively responsible. I might assign responsibility to the lazy carpenter who didn’t clean up his spill. But my assignment would be equally subjective. We might leap-frog our assignments all the way back to the big bang. In general, we make the assignment that has the most useful result (to us). For me, it might be influencing carpenters to be more careful.

Fair enough. These discussions do typically hinge on definitions. ETA: For example, I wouldn’t consider *caused* and *is responsible for* to be exactly synonymous.

But the question of whether belief-formation is voluntary is a separate question from the free will question, which is primarily about whether our actions are voluntary. Even most libertarians about free will concede that beliefs are not under our voluntary control. I would steal an example from William Alston, but instead of stealing I’ll just quote him:

Well, I explained that: if you have a standard of a desired goal or good function, then you can evaluate anything, regardless of whether it is responsible or not. You can even evaluate the performance of inanimate objects: this computer performs well or poorly; this car performs poorly at high altitudes; this knife cuts very well; etc. Obviously, no free will on the part of the evaluated object is supposed. And so if you have some sort of standard of good epistemic function (i.e., behavior that is likely to produce true beliefs or avoid false beliefs, etc.), then you can evaluate whether someone is performing well or poorly regardless of whether the person is responsible or not. Hell, you can evaluate a computer simulation (say, one intended to predict weather patterns) or any other information gathering device by the same standards. Voluntariness is not required.
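A minimal sketch of that point, with a made-up predictor and made-up observations purely for illustration: an inanimate “information gathering device” graded against a standard of accuracy, with no voluntariness anywhere in sight.

```python
def evaluate(predictor, cases):
    """Score a device against a standard: the fraction of cases it gets right."""
    correct = sum(1 for inputs, truth in cases if predictor(inputs) == truth)
    return correct / len(cases)

# A toy "weather simulation" (hypothetical): predicts rain whenever humidity > 80%.
def predict_rain(humidity):
    return humidity > 80

# Observed (humidity, did-it-rain) pairs serving as the epistemic standard.
observed = [(90, True), (85, True), (70, False), (82, False), (40, False)]

print(evaluate(predict_rain, observed))  # 0.8 -- a grade, and no free will was supposed
```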

Understood. I can’t conceive of an alternative to the three types of causation I have mentioned (causal determination, probabilistic causation, random indeterminism); and you are not convinced that these three options exhaust the space of potential types of causal relations. Like I said, I don’t have any other arguments to offer, so sadly, you will just have to persist in error. :wink:

As I said right back at the start - I think any alternative would have to be metaphysical and irreducible. (and by ‘irreducible’, I don’t mean irreducibly complex, I just mean not meaningfully dissectable into components).

I can imagine it, but that doesn’t count for anything (and I wouldn’t dream of expecting it to).

Of course, at the end of the day, I am only rejecting libertarian free will. But that seems to be the definition that most people intuitively work with–thus the assumption that if determinism is true, then we have no free will. But in reality, I’m a compatibilist, so I think that determinism is true *and* we have free will. But that’s probably another thread. (Actually, to a large extent, that is the PC apeman’s thread.)

People should probably start these threads by defining their terms.

Ayup. Although after some initial confusion, this thread seemed to settle pretty nicely into a discussion of libertarian free will. So all’s well that ends well, I guess.

The thing is, if it makes decisions for us then it must be divisible, if not physically, then procedurally. Because no matter what’s making the decisions, whether it be spongy grey matter or ephemeral spirit, some part of the decision-making process will be an analysis of the available data in some sort of methodical, systematic manner (the deterministic part), and some part of its decision-making process will be varying independently of the available data, i.e., randomly. So, even if the theoretical thing itself is composed of some irreducible material, the process of thought itself can be reduced to its component parts, which leaves no room for libertarian nondeterminism.
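A toy version of that decomposition, with hypothetical names and numbers (deterministic_analysis, decide, the noise scale) invented purely for illustration: the systematic part scores the options from the available data, and anything else enters only as a component that varies independently of that data.

```python
import random

def deterministic_analysis(options, data):
    """The systematic part: score each option from the available data alone."""
    return {opt: sum(data.get(opt, [])) for opt in options}

def decide(options, data, noise=0.0):
    """Pick the best-scoring option, optionally perturbed by data-independent noise."""
    scores = deterministic_analysis(options, data)
    return max(options, key=lambda opt: scores[opt] + random.gauss(0, noise))

data = {"vanilla": [2, 1], "chocolate": [3, 1]}
print(decide(["vanilla", "chocolate"], data))             # the deterministic part alone
print(decide(["vanilla", "chocolate"], data, noise=2.0))  # deterministic plus a random part
```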

I think all you’ve done there is consider a larger set of components, including the one I was talking about - which would (have to be) some indivisible ‘will’ that is capable of being an origin of complex ideas, rather than just a little switch that says ‘yes’ or ‘no’ to external stimuli - something that is capable of generating inspiration.

I realise imagining such a thing isn’t in any way equivalent to demonstrating, or even making a strong case for, its actual existence.

Would these be complex ideas based on, or not based on, an analysis of available evidence? Or some mixture?

I recognize that there’s a desire not to look too closely at how supposedly spiritual things might work, lest the bubble pop or something. However, that doesn’t change the fact that the behavior of such things is subject to analysis.

I don’t know why you keep asking that - no.

I’m talking about an object that would generate new ideas - innovative thoughts and decisions - which would go, after formation, into the mix along with the analysis of external data, etc.

If you recognise that in me, I think you’re simply mistaken.

It’s pretty hard to form ideas without external concepts. Like, take the innovative thought “I’ll try putting cheese on this hamburger!”, and then remove the external concepts, like ‘cheese’, ‘hamburger’, and ‘putting’. And ‘I’. And ‘try’. And ‘on’. And ‘this’. All of these are concepts that rely on knowledge of the external world and how things relate within it. So, what’s left? Answer: not much.

By my analysis, there’s no way this ‘innovative spark’ could possibly produce anything useful and nonrandom without having access to the knowledge base and the ability to rationally process the data within. In which case we’re back to the deterministic part and the random part, with nothing else left over.

And the reason I keep asking it is in the hope of coercing you to look at my analysis of the subject and either accept it or find a flaw in it. If you accept it, I can feel good about dispelling ignorance. If you find a flaw, then I can feel good that my ignorance has been corrected. So, it’s all about me, me, me. (Hey, I never claimed to be an altruist.)

Conceded that I’m no mindreader. I’ve seen it in others, though; heck, “the Lord works in mysterious ways” is not exactly a battle cry for intense analysis of the deity in question.

That’s quite a good point, and I see what you mean now - certainly the object could not then be unaware of the external world, but is ‘aware of’ necessarily the same as ‘influenced by’? Certainly it’s hard to imagine that the choice between chocolate and vanilla would be based on simple novel desire, but what about flashes of inspiration and invention? Or would those just be considered random?

Well, if it’s aware of such knowledge but doesn’t allow it to influence its decisions, that’s functionally the same as being unaware of the knowledge - which probably isn’t going to give you what you want.

I’d think that flashes of inspiration and invention are most likely actually the result of the subconscious mind quietly assessing things in the background, until it manages to put together some knowledge you have in a way that’s so interesting that it pops to the forefront of your mind. This could be an entirely deterministic process: simply hopping from one thought to the next in a stream-of-consciousness manner until it finds or assembles an interesting conclusion.

If randomness plays any hand in thought and decision making, it would presumably not do so in any directed manner; rather it would just shuffle the order in which you recall things a little, or introduce small errors into your internal assessment of the value of things. (Clearly it doesn’t introduce large errors, or such errors are corrected for automatically by the mind, since people don’t act that randomly.) If such randomness contributed to the formation of innovative ideas, it would just be an accident perturbing the natural thinking process, not anything really interesting in a free-will sense.

I’m not sure what “free will” means. Does it mean my actions can shape the future? Does the future even exist? I think someone said we can’t fear the future because it doesn’t exist; if we fear anything, it’s a repetition of the past. So if I decide to act based on a past event, I’m not shaping the future, I’m shaping the past that is yet to come.

Man, that sounds like total BS, doesn’t it? I just had to say it.

Wait a minute…

No, seriously, I think the question is one of scope. I think on a microscopic level I do the things I do because I am what I am. Obviously my neurons fire the way they do because they’re reacting to their chemical and sensory environment, well, the way they do. Very deterministic. No choice in the matter.

On the other hand, the thing that I call “me” has influence over the way my neurons react. I can decide, for example, to react to something positively or negatively. Maybe that decision is predetermined in a micro sense–I gotta be me. But that decision affects my brain chemistry, so that in the future I’m more likely to make the positive reaction, and more likely again, and so on.

So what you can do is imagine the you that you want to be, and then be that. Sure, maybe you’re doing it because I suggested it, and you had to react that way because that’s just where your brain was at the moment. Who cares? Just let yourself do it. Let yourself make the decision you want to make, not the one someone told you you have to make because God or the Universe forced you to.

In that sense, in the sense that you are free to let yourself think the way you want to, regardless of what other people tell you, you DO have free will.

So what if it’s all just neurons? They’re your neurons, goddammit!

Just a thought.

How? What’s this distinction between thoughts and actions? From the perspective of physicalism, a monistic thesis, there’s no distinction.

Hold on. If I want to evaluate the accuracy and robustness of my evaluation engine, how do I do that? And if I can’t, on what basis, if any, do I trust the evaluations I perform of others, like, say, a calculator?

Well, different processes that arise from a physical system can have different degrees of voluntariness. Heart rate is involuntary (for most people); breathing rate is semi-voluntary; reflexes are involuntary; etc. So there is no problem in principle with beliefs being less voluntary than actions. The primary evidence for this being the case is just introspection. Can you choose to believe either that China is in Asia or that China is in Europe? No; you have no choice in what you believe. You can pretend to believe that China is in Europe, but you can’t really believe it. Actions, on the other hand, seem to have a higher degree of voluntariness.

The standards are to some degree arbitrary (although they are related to the function or purpose of the object in question). But function is also somewhat arbitrary: is the purpose of a car to get you around safely and efficiently, or to be fast, impress all the chicks, and make other guys jealous of you? That will alter your evaluation of whether a car is performing well or not. The case of the calculator is much more straightforward: we created a calculator for the purpose of doing math problems for us, so it is good to the extent that it reliably gives us answers that are true.

I think I know what question you will ask next, but I’ll just wait for it.

Well, that’s your involuntary belief :stuck_out_tongue:

My question still stands. Re-presented as an illustration:

1) I believe 2+2=4
2) The calculator’s purpose is to produce accurate results of calculations
3) I plug in 2+2 and get, say, 5

So I pronounce the calculator defective.
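A toy rendering of that illustration, with a hypothetical off-by-one calculator standing in for the defective device; note that step (1) enters only as a bare assumption.

```python
def broken_calculator(a, b):
    return a + b + 1  # a hypothetical defective device: always off by one

believed_result = 4                      # (1) I believe 2+2=4
# (2) the calculator's purpose is to produce accurate results of calculations
actual_result = broken_calculator(2, 2)  # (3) I plug in 2+2 and get 5

if actual_result != believed_result:
    print("I pronounce the calculator defective.")
# The verdict is only as good as the justification for the belief in (1).
```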

But how do I know that (1) is actually true? And if I believe it, how should I justify it?

The point (at least the one that the others are making, that I’m struggling to accept) is that “me” is an artefact of those neurons, so the “me” that has influence over them isn’t any more free to act than they are. There’s feedback in the system, lots of it, but still all deterministic, or random.

Well, yeah. But so what? You are still you. The fact that you have to be you doesn’t really change anything. In fact, it’s sort of obvious.

What I’m getting at, I guess, is that folks who talk about free will are often, if not usually, trying to convince you to think their way, whatever that way is. If you give in to them, then you’re giving up whatever free will you had, be it real or illusory.

If you give in once, then you condition yourself to give in again later, and it becomes harder to think for yourself. So you have some control over the way you think, in the sense that you can choose to think for yourself, or let others think for you.

Thinking for yourself may be fatal in some deep philosophical sense. Letting others think for you may be just plain fatal.