Free Will versus Determinism

To account for chimps, we need to consider the historical specifics of how the environment plus random genetic mutations steered the course of evolution. In a chimp, matter has been shaped by evolutionary principles – we might justifiably call them “forces” – that are causally autonomous, even though they arise from more fine-grained phenomena. To complain that such “forces” cannot magically direct the blind interactions between particles is to fundamentally misconstrue what causation means. The evolutionary explanation for chimps is not a higher-level explanation of an underlying “chimpogenic” physics – it is the proper explanation. […]

Yes, accusing others of using magic is not good at all. It is a very bad and fallacious argument.

Again, you seem to be defining free will as something like “having lots of choices”. That’s not standard at all.

It’s a (usually implicit) prerequisite that an agent needs to be presented with more than one option to exercise free will, sure, but beyond that, these concepts are at right angles to each other.

Or, to put it another way: if this is how we’re defining free will then it trivially exists. No debate is, or ever was, necessary.

Nope, even then there are a lot of restrictions when we (not all of us) are well educated and aware of what society can do to you if you try to “have many choices”.

Close: at a personal level it is not trivial; when looking at the whole world, it is indeed trivial. Unless a narcissistic dictator takes control of it. :slight_smile:

Just to sum up what others have said better: this is a series of assertions. As such, they can be refuted by other assertions. Repeating them endlessly is also not an argument.

The free will vs. determinism argument is often presented in terms of good and evil or, in Strawson’s use, “moral responsibility.” I’m always leery when presented with such arguments. “Good” and “evil” are difficult semantic terms that appear to be definable by dictionary but actually encompass large and foggy domains. Moreover, I believe they are second-level, maybe third-level, processes that emerge from the first-level processes of consciousness, sapience, and memory in brain function, and from whether these are free or determined choices. Most philosophical arguments of this sort are semantically rocky. Perhaps the worst is the ancient ontological proof of God, which, though taken seriously by some serious people, always founders on treating a standard term of language as an equivalent for god or the god-like. Without a firm, universal concept of god, the argument is necessarily circular.

And that brings me back to Strawson. His infinite regress reminds me of the difficulty people had in understanding how a computer works. If a command entered in ordinary language is interpreted by a high-level program, which is interpreted by an assembler-language program, which is interpreted by a machine-language program, what interprets the machine-language program, and how does the computer ever start?

The answer is that an outside force created a beginning point: code buried in hardware. Humans also have a beginning point. An outside force - a mother with the cooperation of a father - creates the hardware - cells - that will gradually develop the higher-level functions over time. Humans are not created with principles of choice, and therefore there is no necessity to posit existing principles of choice as precursors to the later principles of choice.
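To make the stopping point of that regress concrete, here is a toy sketch in Python (purely illustrative; the layer names and operations are my own invention, not anything from the post). Each layer is interpreted by the layer below it, and the bottom layer is not interpreted by anything further; it is simply built to act.

```python
# Toy illustration of the layers-of-interpretation analogy: each layer hands
# its work down to the one below, until a bottom layer that has no interpreter
# beneath it; that layer is simply "wired" to act.

def hardware(op):
    """The code 'buried in hardware': nothing interprets this level."""
    return {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}[op]

def machine_language(instr):
    """Machine language: dispatches an instruction to a hardware operation."""
    op, a, b = instr
    return hardware(op)(a, b)

def assembler(line):
    """Assembler: translates a mnemonic line into a machine instruction."""
    mnemonic, a, b = line.split()
    return machine_language((mnemonic.lower(), int(a), int(b)))

def high_level(command):
    """High-level language: turns an expression like '2 + 3' into assembler."""
    a, symbol, b = command.split()
    mnemonic = {"+": "ADD", "*": "MUL"}[symbol]
    return assembler(f"{mnemonic} {a} {b}")

print(high_level("2 + 3"))  # 5: the regress bottoms out at hardware, not at another interpreter
```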

None of this discussion involves what I call first-level processes, so the question of where and how the first principles of choice appeared is irrelevant. Fortunately, because we simply don’t know.

That’s a straw man definition of free will. The past not determining the future doesn’t mean that it does not affect the future. If an entity can make a limited number of choices, based on history and the current situation, then its actions are not determined.
The open question is whether the choice between these limited number of options is truly free, but limiting the choices does not rule out their being free.

Anything a living entity can do a computer could also do, I think, so introducing vitalism truly muddies the waters. At the moment the only things we know of that can make choices are living, but that is not a requirement for choice.

To be clear, I mentioned vitalism to be illustrative of a wholly obsolete model.
Vitalism once seemed plausible before we started to form an understanding of biochemistry. And, in a way, it wasn’t falsified; it was just totally unnecessary, as the explanatory gap was already sufficiently filled.

Similarly with free will. Prior to neuroscience, maybe it sort of made sense (though less so, in my opinion, as it was incoherent from day 1). But it’s now in a similar position of being a non-solution to a non-problem.

BTW, if you are referring to the neuroscience of the Libet experiments:

Libet states his position in the following manner:

> I have taken an experimental approach to the question of whether we have free will. Freely voluntary acts are preceded by a specific electrical change in the brain (the “readiness potential,” RP) that begins 550 msec. before the act. Human subjects became aware of intention to act 350–400 msec. after RP starts, but 200 msec. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded.
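Laid out on one axis (a small illustrative sketch; the millisecond figures are the ones Libet gives above, the arrangement and variable names are mine):

```python
# Libet's reported timings placed on a single axis, with the motor act at t = 0 ms.
ACT = 0            # the motor act itself
RP_ONSET = -550    # readiness potential (RP) begins 550 ms before the act
AWARENESS = -200   # reported awareness of the intention: about 200 ms before the act,
                   # i.e. roughly 350-400 ms after RP onset

for label, t in [("RP onset", RP_ONSET), ("awareness of intention", AWARENESS), ("motor act", ACT)]:
    print(f"{t:+5d} ms  {label}")
```

On Libet’s own reading, the roughly 200 ms between awareness and the act is the window in which the conscious “veto” he mentions could operate.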

And one of my points is: it is hard to find something more incoherent than human nature.

Well, it’s rather the other way around, though. Strawson’s aim is to articulate a particular problem for the idea of moral responsibility, and he gets there by way of an attack on free will. It’s just that in doing so, he frames an argument against free will that works independently of ideas of right and wrong, good and evil, or the like, and that I think gets at the core of what people try to get at when speaking of free will as an inconsistent concept. And obviously, granting Strawson’s premises, his conclusion follows necessarily—his argument is logically valid.

But I don’t grant point 5, which makes point 9 unfounded.

Principles of choice are clearly an emergent property. They do not need to be - and are not - fully formed from the start.

I don’t see that that’s necessarily in conflict with the fifth point. The principles according to which a choice is made need to be in place when that choice is made, but that doesn’t entail anything about how they come to be—although if they came to be in some process that is not at least partially determined by the agent, then the resulting action won’t ultimately be free.

I do not understand how to read points 6-8 without their entailing how the principles of choice came about. I’d appreciate your explanation.

OK, so here’s my reading. The points now in question are:

  6. But then to be truly responsible, on account of having chosen to be the way one is, mentally speaking, in certain respects, one must be truly responsible for one’s having the principles of choice P1 in the light of which one chose how to be.
  7. But for this to be so one must have chosen P1, in a reasoned, conscious, intentional fashion.
  8. But for this, i.e. (7), to be so one must already have had some principles of choice P2, in the light of which one chose P1.

Point 6. doesn’t seem, to me, to necessitate anything about how one might choose P1, just that one must be responsible for this—which, if denied, would make any choice ultimately a consequence of something external to the agent, and thus, not free, in that sense.

Point 7. then delineates what it is to make such a choice—which is just what it is to make any choice: I weigh the options, and pick one. That’s not really saying anything about how this ‘weighing the options and picking one’ comes about, or what constitutes it—it’s just the surface level phenomena. Otherwise, it would perhaps be talking about neuron firings, or disembodied spirits, or computations; but it’s neutral as to what the substrate is from which this stems. Again, this seems (mostly) difficult to deny: if there is no choice of P1, then it’s hard to see how an agent could be responsible for its nature. (I would push back on the notion that it needs to be conscious deliberation, though: ideas that pop spontaneously into my head, stemming from some unconscious process going on in the back, are still relevantly my ideas.)

Finally, 8. then just says that any choice has to be made on some basis for that choice—while again being neutral on what such a thing might consist of (memories, experiences, programmed imperatives, spiritual necessities…). If one had no grounds for choosing, then the choice made, if such a thing is possible, would just be random, and hence, not due to the agent in any relevant sense.

I can’t tell you whether free will exists until the term is defined. I suspect that by some definitions it does exist, by other definitions it does not, and by others the concept is nonsensical.

I can confidently assert that a process of human decision making exists, though its form varies and the slower variety isn’t always invoked.

I’m not familiar with information theory (or entropy), but I understand that probabilistic and statistical approaches model randomness: they don’t unequivocally state what it is. Also, some define uncertainty as a third thing entirely - a form of randomness where the underlying probabilities are not known. (That concept may or may not be nonsensical - I’m not sure, and neither is Brad DeLong. Cite.)

Yeah I’ve had to speak on that a few times in in-person debates, because of my neuroscience background. But, I don’t find it all that interesting in terms of the free will debate.
Because, firstly, I wouldn’t define “me” as just being my conscious mind. And secondly, I have no issue with decision-making having stages, where the last stage (of “rationalizing”) has the illusory feeling of being where the decision is initiated.

Uh, the point was that when what you do is, in the end, vetted by your “frontal lobe”, what emerges is what was not excluded: a choice with likely a bit of Free Will at the human level.

This is a case where the order of the factors does not matter.

I can’t really parse that sentence, but I’ll just reiterate: for the purpose of the topic of free will, I don’t really care at what point in the neural processing sequence a decision is made. It seems to make no difference either to “libertarian” free will, or either of the definitions of free will suggested in this thread (1: That free will is about having lots of options, 2: An infinite sequence of self-determination).

But that is only avoiding what the neuroscientist concluded. What the brain does before we are aware of a choice is not really imaginary. Neither is what happens when a choice has to be made based on input that we are not yet aware of. Free will can still take place.

As for this:

  1. Not what I’m saying; free will can be about having restricted choices.

  2. Neither is that. The layers are limited; this is not infinite. The #2 argument does look like an infinite straw man, though.

You’re suggesting I’m ignoring the conclusion, then proceeding to say exactly the same thing I just said.

I said it’s irrelevant at what point in the neural sequence a decision is made.

You’re saying it’s irrelevant and that “free will can still take place”. So we’re in agreement, except that I think free will is a meaningless concept, i.e. the original discussion before Libet was brought up.

But you have brought up, repeatedly, the idea that having more choices means having more free will, e.g. post 45.
If all you are saying now is that free will needs at least two options to be exercised, but the number of choices is otherwise beside the point, agreed – that’s what I’ve been saying all along.

Calm down.
I was summarizing the definitions given in this thread, not summarizing things that you’ve said. The definition I’m alluding to here is from post 75.