Free Will Revisited - Soft Compatibilism (ver. 2.0)

This is a reworked version of a soft compatibilist model for addressing the problem of free will vs. determinism which I posited in a thread last November. The thread didn't get much traction, but a few posters objected that I hadn't explained how the model could work. Since then, I've done more reading, have given the matter much thought and may have found a way to meet the objection. My current thesis goes like this. It seems to me that neither libertarian free will nor determinism adequately describes how the mind works. Rather, introspection and observation suggest we generally have volition, i.e., the ability to direct our behavior and make decisions without benefit of a pre-defined decision tree, but its exercise is influenced and constrained by various factors, including genetics, socialization, personality and life experience. We can best make sense of these competing observations, I think, by viewing volition as an emergent property of the brain. To be clear, I'm a materialist, not a dualist, so I'm not talking about any sort of magical ghost in the machine.

Let’s start with libertarian free will. It ignores the evidence that decisions don’t happen in a vacuum. What we do is influenced by many factors, including (as mentioned) genetics, socialization, personality and life experience. Moreover, LFW has a hard time explaining things like alcoholism, depression, homelessness and senility. On the other hand, determinism conflicts with our ordinary experience of how we make decisions. Determinists assert this is an illusion, but that’s a philosophical argument, not a scientific one. Further, determinism has a hard time explaining important domains of human action like creativity and language. Also, there’s the evidence of socialization. Young children have very little impulse control. As they mature and are socialized, they usually acquire the skill. What they are learning, it seems to me, is volition, i.e., the ability to direct their behavior. A similar analysis obtains for things like OCD, ADHD, alcoholism and violent temper. In each case, what we strive to achieve (by therapy or otherwise) is impulse control, i.e., restoring volition. Finally, it’s interesting to note we’ve had no success (so far) modeling human thought on computers. Computers, of course, are deterministic. That we can’t get them to think (in a human sense) suggests thought is a different sort of thing.

One path out of this thicket is traditional compatibilism. See, e.g., Daniel Dennett, Freedom Evolves (2003). He argues that determinism is compatible with free will if we view free will merely as the sort of guidance control necessary to assign moral responsibility. This is different from the libertarian view, which is that the will is actually free and not determined by causal necessity. (As an aside, I heartily wish Dennett and other compatibilists would use another term, e.g., responsible agency, to describe the sort of free will they’re defending, but apparently the equivocation has become entrenched in the literature.) The problem with Dennett’s view is that, under the rules of philosophical reasoning, guidance control is insufficient to establish moral responsibility since the system itself was supplied determinatively. See the Stanford Encyclopedia of Philosophy article on Compatibilism (warning: this is a long article). Notably, the Stanford article surveys over a dozen traditional compatibilist theories and finds similar objections to all of them.

My path out of the thicket is a little different. Hence soft compatibilism. I take a pragmatic rather than philosophical view. Instead of black or white (or both at the same time), I propose it's shades of grey. Which is to say, we're neither the automatons suggested by determinism, nor the god-like autonomous beings suggested by LFW. Instead, we're evolved creatures doing our best to get by in a complex world. Viewed this way, volition is an adaptive mechanism which gives us more flexibility to respond to problems than genetics alone can provide. In this regard, it's important to remember that the brain is a naturally evolved system. Unlike Watson and Deep Blue, the famous computers which can play Jeopardy and chess better than even the best humans, the brain wasn't designed or programmed by anybody. Rather, it's a product of natural selection. One of its great strengths is how well it handles novelty. Solving the problems of the hunt, for example, didn't depend on genetic or instinctive mechanisms. Instead, we could figure them out and create strategies on the fly. And could change them from generation to generation as circumstances changed. This is an important reason why we were a successful species, even though only modestly endowed with strength and speed. It also explains why we were successful across a diverse range of environments, from jungles to polar regions.

What could make this work is the notion of emergent properties. Emergence is a form of modeling, where a complex system displays characteristics which can’t be readily explained or predicted by the operation of its constituent parts. The schooling behavior of fish is a simple example. Consider language. We start with a genetic capacity: vocal cords, eyes, ears and speech processing centers in the brain. As we’re socialized, we’re exposed to language and learn to use it ourselves. Along the way, we pick up a vocabulary and grammar rules. Many of us even learn to think about how to use language to be effective speakers and writers. It’s an extended process, but at some point we may be said to have a facility, a module if you will, which enables us to formulate and express ideas. Is this facility or module deterministic? In a sense, but not as a practical matter. Rather, it seems to me an emergent property. Obviously it was formed by a set of inputs, genetic and social, but in use it goes beyond those inputs. The whole is more than the sum of its parts. It’s grounded but self-ordering. Something independent and supervening (in the ordinary sense of the word) is going on when we use language which goes beyond the inputs. Not free of them, by any means, but not easily captured or explained by them either. Nor is language the only thing of which this can be said. It can be said of the skills of a carpenter, scientist, teacher, cook or artist. Indeed, I would say the hallmark of just about everything we learn is that the whole is more than the sum of its parts, i.e., that what we develop is a set of skills we can apply to novel problems. Decision making can be similarly viewed as an emergent facility or module, like language and carpentry.

Importantly, emergence in the sense I’m using it here (the form generally accepted by scientists and philosophers) doesn’t claim emergent properties can’t in principle be explained or predicted in a Laplacian sense. It’s just a model, a useful way of describing an often inscrutably complex system. By contrast, there’s a stronger view of emergence which claims the system isn’t reducible even in principle, but this view has few adherents and seems unable to identify a system which meets the claim. For a general discussion, see Wikipedia; for a more technical one, see the Stanford Encyclopedia of Philosophy; see also Baas & Emmeche (emergence as explanation), Boogerd et al. (emergence in biology), Ellis & Larsen-Freeman (emergence in linguistics) and Emergence (2008) (collecting essays by various philosophers and scientists) (note: all these except Wiki are lengthy, dense and probably of interest only to hard core enthusiasts). One key insight of emergence, especially as viewed by scientists, is that reductionism doesn’t work in reverse in most cases, i.e., we usually can’t as a practical matter derive the macro from the micro. The world is too complicated. In such cases, it’s the observable macro rather than the in principle micro which matters. My “take away” is this. We, as human actors, operate at a macro level where volition is an observable emergent phenomenon, albeit subject to macro-level influences and limitations. I submit we may legitimately use this observed volition as a basis for assigning moral responsibility, recognizing there are cases where the assumption breaks down.

Notice that, while I’m arguing for volition, I’m also arguing the mind isn’t as free as is commonly assumed using the LFW model. Real and observable constraints like personality disorders, addiction and senility sometimes keep people from acting with the sort of free will upon which our system of moral responsibility is grounded. If this is correct, there’s often something unfair in how we judge people, legally and socially. That to me is the important implication of my model, much more important than the abstract conundrum of emergent volition vs. Laplacian determinism.

I realize this is long. It’s a complex subject. Thoughts?

A dance between sustaining and being sustained by the forms we create, and being created by those forms, and moving outside those forms. Human consciousness is dynamic, subjective, and unmappable. (Isn't it?) It also is not limited to the mind. The body is part of our consciousness and participates in learning, knowing…
Seems like even the highest intelligences must be beholden to subconscious influences of which they are not aware, and volition can spring from desires created by these influences. So how free are we? If we were fully conscious of all our motivations, what would be left for us to grow into? For instance, a genius might spend his life at an endeavor which is aimed at surpassing his father, or a rival, making intellect god, making choices based on that. Then he learns that love trumps intellect, connection is more primary than competition, and his choices change. Yes, I guess you're saying an emerging system will always be incomplete?
“Time is in estrus.” Talking Heads.

The freest will to me is that of someone who puts another's well-being before their own. How free is a will which emerges from a crucible of greed, lust and gluttony?
All of which I’m currently up to my eyeballs in.

Gurdjieff would call me an "automaton."

Did you see the Time cover story about "the Singularity?" "2045: The Year Man Becomes Immortal." Feb. 21, 2011. "The accelerating pace of change…and exponential growth of computer power…will lead to the Singularity."
Freaks me out b/c, you see, I believe we as a species are only as strong as our weakest link. I think all these smart ones have to slow down a little for us slow ones. This is where intelligence becomes a moral issue, imo.

As per your last paragraph, I don't think the will of the average guy is a whole super lot freer than that of the mentally compromised individual. Freer, yes, but it's a matter of degree, not black and white.
The issue of will in connection with addicts is an interesting idea. I was an addict, now I’m not, and my will is just as bound by ambition now as it was by drug behavior then. Again, I think that the zenith or maybe even only true definition of “free will” is self sacrifice.

Plus, I believe that to a large extent our biology is our destiny.

"One key insight of emergence, especially as viewed by scientists, is that reductionism doesn’t work in reverse in most cases, i.e., we usually can’t as a practical matter derive the macro from the micro. The world is too complicated. In such cases, it’s the observable macro rather than the in principle micro which matters. My “take away” is this. We, as human actors, operate at a macro level where volition is an observable emergent phenomenon, albeit subject to macro-level influences and limitations. I submit we may legitimately use this observed volition as a basis for assigning moral responsibility, recognizing there are cases where the assumption breaks down. "

Not sure what you mean. Can you give some kind of example? I have so much trouble with the abstract…
“…we usually can’t as a practical matter derive the macro from the micro. The world is too complicated…”
Do you mean, the outcome (macro) can’t be predicted by the individual circumstances (input, data, programming, choices), the micro? Like, just cuz a kid pees the bed, starts fires and tortures animals, we can’t predict that he will grow up to be a predator?

Emergence is a powerful notion, but I'm not sure how it's supposed to aid you in explaining the origin of your idea of volition… It's true that emergent properties are 'not present' in a sense on a fundamental level, but nevertheless, the fundamental level dictates what properties emerge (since you're not arguing for strong emergence, anyway). Consider a picture, composed of individual pixels, each of a certain colour and position – the pixel-level description does not contain any idea of what's present on the picture-level (a tree, a house, a car), but nevertheless, it suffices to completely determine the picture's content – if the pixel-level is fixed, there is only one possible picture that emerges.

In this light, I don’t see how ‘volition’ (in the sense that there exists genuine ‘wiggle room’ within which to make actual choices) is to emerge from deterministic fundamentals, since, if the fundamental level is deterministic, and the specification of the fundamental level serves to uniquely specify the emergent properties (i.e. we’re talking about weak emergence), strict determinism must also be obeyed on the macroscopic level.
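To make that concrete with a toy example (Conway's Game of Life, chosen purely as an illustration of weak emergence): the update rule is local and deterministic, yet a macro-level pattern, a 'glider' that travels diagonally, emerges from it. And because the micro rule is deterministic, fixing the cell-level state at step 0 fixes every macro-level fact thereafter.

```python
from collections import Counter

# Toy illustration of weak emergence: Conway's Game of Life.
# The micro rule is purely local (a cell looks only at its 8 neighbours),
# yet a macro pattern -- a "glider" that travels diagonally -- emerges.
# Because the rule is deterministic, fixing the cell-level state at step 0
# fixes every later macro-level fact: rerunning from the same seed always
# yields the same history.

def step(live):
    """One deterministic update of the set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):   # after 4 steps the glider has moved one cell diagonally
    state = step(state)

# Same micro start, same macro outcome -- no wiggle room at either level.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```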

This isn’t to say that the concept of volition isn’t useful – in describing the behaviour of intelligent agents, it simplifies things greatly to talk ‘as if’ the agent does things out of their own volition, because of its free choices. This is essentially what Dennett calls the ‘intentional stance’, which you’ll probably be familiar with.

However, in this form, as an 'effective description' we make practical use of because the complete, microscopically exact description is in general intractable, I don't see how your concept of 'volition' brings anything new to the table regarding the question of whether or not we have free will, however that may be defined. That question is still an in-principle question, decidable only on the level of fundamentals: if we don't have free will, i.e. each of our actions is completely determined by causal forces, but the description is simplified on an effective level if we assume we do, then it's still the simple fact of the matter that we don't have free will.

Not necessarily – if determinism alone explains human behaviour as well as determinism + volition does, scientifically, one would prefer the former over the latter; it's what Occam would do. And there are certainly as yet no definitive reasons to prefer one over the other – you pointed to problems with getting computers to 'think', be creative, etc., but actually, the successes we have in that field suggest that this is merely a problem of degree, not of principle; an issue of quantity, rather than quality. Computers certainly can be creative, for instance – they've even invented toothbrushes, deduced Newton's laws from experimental data, written original music (or have a go yourself here), etc – all hallmarks of what you'd call 'creativity' in a human.

Of course, there’s also the matter that, to all appearances, the human brain is a deterministic apparatus – neurons fire upon a specific set of conditions being met. If one neuron can be simulated – which it can – there’s every reason to believe that a collection of them can be, too; so if one wishes to avoid dualism, it’s hard to maintain a firm divide between human and artificial brains.
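Just to illustrate what 'one neuron can be simulated' looks like in practice, here's a minimal sketch (a leaky integrate-and-fire model, with made-up parameter values; real biophysical models such as Hodgkin-Huxley are far more detailed): the membrane potential integrates its input, leaks back toward rest, and the cell fires whenever a threshold is crossed. Same input in, same spike train out.

```python
# Minimal sketch of a deterministic single-neuron model (leaky integrate-and-fire).
# The membrane potential integrates its input, leaks toward rest, and the cell
# "fires" whenever a threshold is crossed.  Parameter values are illustrative only.

def simulate_lif(inputs, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 leak=0.9, weight=1.0):
    v = v_rest
    spike_times = []
    for t, current in enumerate(inputs):
        v = v_rest + leak * (v - v_rest) + weight * current  # leak toward rest, add input
        if v >= v_thresh:         # the "specific set of conditions being met"
            spike_times.append(t)
            v = v_reset           # fire and reset
    return spike_times

drive = [2.0] * 50                # a constant input current
print(simulate_lif(drive))        # identical input always yields identical spike times
```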

And just as a side-note, computers aren't even predictable in the Laplacian-demon kind of way you assume – due to the undecidability of the halting problem, the best even a Laplacian demon of nigh-unlimited faculties could do is to explicitly simulate (most) programs in order to find out what they do – which is just equivalent to running them and waiting to find out. Of course, that goes for the human brain, as well.
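To give the flavour of the 'just run it and wait' point with a concrete toy (the Collatz iteration, which isn't a proven case of undecidability, just a famous open one): nobody knows a general shortcut for predicting what this loop does for an arbitrary starting number, short of executing it step by step, which is exactly the position the demon is in.

```python
# The Collatz iteration: a three-line loop that nobody has proved halts
# for every starting n.  As far as anyone knows, the only general way to
# learn what it does for a given n is to execute it step by step -- which
# is exactly what the demon is reduced to doing.

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))   # 111 steps; but for an arbitrary n, who knows in advance?
```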

Can’t I drug and torture and Stockholm Syndrome a sleep-deprived guy into such a state, with the suggestibility and the brainwashing and et cetera?

Why SURE ya can!

I can't agree with this. It is my understanding that there are many aspects of the brain that aren't well explained by deterministic properties and there are theories that suggest that consciousness is a quantum process. I'm not particularly versed on the theory, but I think it's called the quantum mind hypothesis or the like.

Anyway, if this is true, it means that we actually cannot model a collection of neurons artificially without a quantum computer, and it would allow for a materialistic explanation without resorting to dualism.

There are some models of quantum consciousness, the most popular perhaps being the Orch-OR (for ‘orchestrated objective reduction’: basically, the mind somehow chooses a state for the quantum wave function to objectively collapse to) theory due to Penrose and Hameroff, but they don’t have many adherents – generally, processes in the brain happen on scales too large, warm and wet in order for quantum mechanics to play much of a role; decoherence is estimated to occur much too fast in order for quantum effects to have any influence.

Also, in general, even quantum processes can be simulated on classical computers – quantum computers can't compute anything classical computers are unable to, they can just do so faster, in some cases (though I believe this isn't actually true in the specific Penrose-Hameroff model, as Penrose's idiosyncratic interpretation of quantum mechanics makes it possible that the Orch-OR process leads to physical hypercomputation, i.e. the capacity to compute functions uncomputable by classical, Turing-machine equivalent, computers – though this interpretation, which posits an objective, gravitationally mediated collapse of the wave function, is controversial in itself; I don't think anybody but Penrose himself really subscribes to it). Of course, wave-function collapse, in whatever form it may occur, if it does, can only be modelled pseudo-randomly, but I don't see that this changes matters much.
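For what 'modelled pseudo-randomly' amounts to in practice, here's a toy sketch (nothing specific to Orch-OR): given a state's amplitudes, a classical simulation simply samples a collapse outcome with the Born-rule probabilities, using an ordinary pseudo-random number generator.

```python
import random

# Toy sketch: "simulating" wave-function collapse on a classical computer just
# means sampling an outcome with the Born-rule probabilities |amplitude|^2,
# drawn from an ordinary pseudo-random number generator.

def collapse(amplitudes, rng=random):
    probs = [abs(a) ** 2 for a in amplitudes]
    r = rng.random() * sum(probs)        # normalise in case of rounding error
    for outcome, p in enumerate(probs):
        r -= p
        if r <= 0:
            return outcome
    return len(probs) - 1

state = [2 ** -0.5, 2 ** -0.5]                # an equal superposition of two outcomes
print([collapse(state) for _ in range(10)])   # a pseudo-random string of 0s and 1s
```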

Nearly every time I read a synopsis of Dennett’s compatibilism, I’m amazed that people leave out what are, to me, incredibly important parts of his argument. Most notably here, Dennett argues that determined does not mean caused. He discusses causal necessity and sufficiency, and gives examples of events without causes in a deterministic universe.

He also investigates whether appeals to determinism really answer the questions many people think they’re answering. It’s been some time since I last read the book, but as I recall his argument goes something like this. Incompatibilist philosophers have brought up the condition of “could have done otherwise” as being necessary for free will. They ask, “if we set up conditions precisely as they were and set events in motion again, could things have happened differently?” Dennett argues that this “precisely as they were” is the wrong thing to look at–that answering this question doesn’t give the information that we actually want.

As I recall, he gives a thought experiment trying to determine which of two chess playing computer programs is superior, by pitting them in matches against each other. The game came to a point where Program A was to make a move: if A were to castle, it would go on to win the game, but if not, it would lose the game. Program A did not castle. The question is: could Program A have castled? If you consider conditions "precisely as they were," the answer is no: deterministic programs will behave in exactly the same way if you put them in exactly the same circumstances. But answering this question doesn't tell us anything about whether A or B is the superior program. What we really want to know is whether A's failure to castle is indicative of a shortcoming in its program. Watching how a single game plays out, even if we look at all the microscopic details, simply doesn't tell us this. We have to take higher-level things into consideration to arrive at an answer about why A failed to castle.

I’m not going to use the quote function, as I find it tends to clutter up threads, but I’ll include links to the posts to which I’m responding so interested persons can easily pull up the language again if they like (I’m assuming they’ve already read it once). Also, I’ll try to respond to everything I think calls for a response, but if I overlook something you think is important, please bring it up again and I’ll catch it in the next round.

zephyr9 (Post #2, #6 and others): I’m not sure what you mean by “emerging systems will always be incomplete.” If you mean actually incomplete in an objective sense, no, that’s not my sense of it. If you mean our understanding often will be incomplete, yes, I agree. As for an example of reductionism not working in reverse, weather would be a good one with which we’re all familiar. We can model it only approximately and even then not very well.

Half Man Half Wit (Post #7): There’s no question but that weak emergence (or what I prefer to call ordinary or descriptive emergence) presumes determinism in the ultimate sense. I said so in the OP. That’s one reason I call my model compatibilist (the other being the observable constraints on volition at the macro level). So, yes, behavior is ultimately the product of total brain state (and I agree this is usually viewed at the level of neurons rather than atoms or molecules). For some, that’s the end of the inquiry. I go further, arguing there are lots of behaviors (e.g., language) which suggest the brain is self-ordering in the sense we associate with agency and control. FWIW, the examples of computer creativity you mention (discussed in a thread a few months ago) were part of the inspiration for my revised model. Returning to humans, how does your view of determinism explain things like impulse control and use of language? Also, what’s your view on the implications of hard determinism for moral responsibility?

Dr. Love (Post #12): Be fair. If I’m trying to summarize a 300+ page book in one paragraph, I’m necessarily going to have to pass over a lot of detail. As it happens, I did notice Dennett’s discussion of causation, but you’re misremembering it slightly. What he argues is that a coin toss although technically determined is in effect random, not that it’s literally uncaused. Indeed, this is what got me started in the direction of emergence, as that’s essentially what he’s arguing and I was surprised he didn’t develop his thesis along those lines. As for the chess program thought experiment, I have to confess I don’t find it illuminating. Whatever the mind is, it’s not comprehensively programmed in the same way.

I don’t expect every detail to be mentioned but, in my reading, the related discussions on causation and the type of information supplied by determinism are the most important in the book. I see a lot of things in your OP that are similar to Dennett’s arguments, but his arguments about causality are what justifies the use of “free” in “free will,” and thus his attempt to go beyond your suggested term of “responsible agency.” Within this thread, Half Man Half Wit asks where the “wiggle room” is supplied in order to make real choices. I believe that the answer is in Dennett’s discussions.

Also, after getting out my copy of the book, I think you’re misremembering his arguments. There is a section of chapter 3, beginning on page 83 called “Events without Causes in a Deterministic Universe.” On page 84 he asserts “In fact, determinism is perfectly compatible with the notion that some events have no cause at all.” And on page 85 “A coin flip with a fair coin is a familiar example of an event yielding a result (heads, say) that properly has no cause.” (All italics are his.) He is trying to justify “free will” by arguing that the important element in freedom is that the will is uncaused, not undetermined.

I initially read the title of the thread as:

**Free Will Revisited - Soft Cannibalism (ver. 2.0) **

Dunno where my head is at. Carry on!

PBear42 and Half Man Half Wit, you both talk about determinism “explaining” things. It’s not clear to me how determinism (which hasn’t been defined in this thread) by itself can explain anything. Can you clarify what it means that determinism explains something?

PBear42: I often see compatibilists and hard determinists claim that “[Libertarian free will] ignores the evidence that decisions don’t happen in a vacuum. What we do is influenced by many factors, including (as mentioned) genetics, socialization, personality and life experience.” Yet, I don’t think I’ve ever heard a free will libertarian claim that decisions must not be influenced by personality, life experience, etc. Can you cite a libertarian who actually believes that?

You aren’t the only one.

Dr. Love, thanks for your comments. This is the sort of push-back I was hoping for in the thread. I'm trying to figure something out.

Libertarians. Most of what I know about libertarian philosophers I’ve gotten from secondary sources, including Wiki (linked in the OP), the Stanford Encyclopedia of Philosophy and discussions in various compatibilist articles and papers. See also Peter van Inwagen (a free will incompatibilist), How to Think about the Problem of Free Will. Notice I didn’t claim libertarians assert decisions must not be influenced. I said they ignore the influences. I think that’s true. I also think that when we give those influences their appropriate weight, we end up with a form of compatibilism.

Determinism. As for a definition, in what was already a long OP, I opted to define it by linking to a Wiki article. I did the same thing for the same reason with LFW and compatibilism. Broadly speaking, determinism is the thesis that everything we do is caused by our gene code and life experience. As for what it means to say determinism explains anything, I actually have the same question. Since the universe of determinants is vast and complicated, what we get in most accounts is an appeal to Laplace’s demon. At best that’s a description, not an explanation. And, I argue in the OP, determinism without an appeal to something like emergence has a hard time explaining lots of observed behaviors, e.g., language.

Dennett on causation. Reading the passage yet again, I see where you’re coming from. If that’s what Dennett intends, he’s mistaken. Or playing word games. Bear in mind what we mean when we say the coin toss is determined. How many times it turns is determined by the amount of force applied, etc. And, yes, the sum total of these factors is so complex we can’t predict the outcome. Fair enough, but that doesn’t make it actually uncaused. Moreover, applied to human behavior, this implies we’re random, not that we have agency or control. Dennett explains why random isn’t free will in Chapter 4, discussing Robert Kane’s quantum indeterminacy theory of LFW.

PBear42, I think I’m unclear as to what question your concept of volition answers. What’s the reason for introducing it? What can be explained through volition, that can’t be explained (at least in principle) without it? And if there is nothing necessitating volition as an explanation, then in what sense do you propose that volition is something that exists in the world, something humans possess? What would be the difference between a human possessing volition, and a human who doesn’t (a v-zombie?)?

Impulse control is just an addition to the set of constraints determining behaviour. Before some form of socialization, the actions are as completely determined as afterwards, only after socialization has occurred, the determining constraints are different. Think of a river, flowing in its bed: its bed, if sufficiently firm, completely determines the way it flows; but that doesn’t mean I can’t alter the river’s flow, if I, say, carve a new bed, or dam the old one. Determinism doesn’t mean that behaviour is immutable, just that the way in which it changes is itself completely determined.

As for language, I’m not sure why determinism should be a problem there. Fundamentally, words are just parts of the world that do things with us, so one speech act in another person may cause us to speak ourselves. This all got started through an evolutionary process, presumably: grunts and other noises, evoking a ‘hardwired’ response (such as the ‘warning’ calls of prairie dogs), underwent a gradual complexifying process, at the end of which stands the sophisticated means of communication we make use of now, to grossly oversimplify things.

Moral responsibility is of course a thorny subject, but it’s so quite apart from the question of whether or not we have free will. If the world were indeterministic, what could we then point to as a cause for amoral behaviour, for instance? That behaviour might have ‘just happened’, so how can one assign blame based on this?

One speculation might be that the concept of moral responsibility is an adaptation to enable survival on a societal basis; that societies without the concept were less 'fit', and thus died out – one could easily construct a narrative to that effect; certainly, societies that don't punish murder, where everyone goes around merrily killing one another, won't tend to be very stable.

This, I don’t get at all. The result is determined by the throw; the precise environmental conditions, through influencing the coin’s trajectory, cause either heads or tails to show up. How’s that an event that has no cause?

You get sufficient reason for an event’s occurrence. In an indeterministic setting, what happens does so without reason – it does not happen necessarily, but incidentally. In determinism, everything that happens must happen, being uniquely constrained by some set of causal factors. Those causal factors are the reason why something happens – they thus explain its happening. Such an explanation is unavailable in an indeterministic universe, which is why it’s a hugely unappealing notion to me, on purely aesthetic grounds alone.

I have two definitions to discuss: determinism and causality. IMO, the discussion on causality depends on the definition of determinism, so I talk about determinism first, and assume that we’re in agreement when I turn to causality. I’ll have to deal with disagreements later.

Unfortunately, as far as I can tell, this isn't the definition used in debates about free will. In such a debate, determinism is a property of physics, and the debate is over the consequences of determinism in physics.

This is a reasonable start, but I think we’re missing some important requirements here. I’ll state what I think these missing requirements are:
1. Determinism is a property of systems. If we use the definition that "every event has a sufficient reason for occurring," then I will argue that the "sufficient reason" must also lie inside the system. This clause ensures that only closed systems can be deterministic. It also reveals two different kinds of indeterminism: (1) where the sufficient reason is (at least partly) outside the system and (2) where there truly is no sufficient reason.

2. The sufficient reason must be true/occur before the event. That is, the "cause" (it's quite hard to avoid using that word) of any event must have occurred before the event did. Determinism is only required to work in one direction in time (into the future), although some deterministic systems may work both ways. That is, there may be multiple sufficient reasons for an event to occur. In fact, such sufficient reasons might be mutually exclusive.
With these in mind, I suggest a new definition of determinism. A system is deterministic if, for any event that occurs within the system, there is a prior sufficient reason within the system.

I would characterize this definition as "backwards looking," since it begins with an event and looks back in time for a (sufficient) cause. We could compare this with some "forward looking" definitions. For example, according to Peter van Inwagen, a system is deterministic if "there is at any instant exactly one physically possible future." This, of course, assumes that the system has some "physics" which describes how it changes over time, but I think we can grant this. We might state it more formally as: "A system is deterministic if a complete physical description of its state at time t0 suffices to produce a complete physical description of its state at any future time t1."
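The forward-looking definition can almost be written down directly. Here's my toy rendering (not van Inwagen's): a system is deterministic when its complete state at t0, fed through a fixed update rule, yields exactly one possible state at any later t1. The example "physics" (a logistic map) also shows why this doesn't buy predictability in practice.

```python
# A toy rendering of the forward-looking definition: a system whose complete
# state at t0, fed through a fixed "physics" (update rule), has exactly one
# possible state at any later t1.

def evolve(state, physics, steps):
    for _ in range(steps):
        state = physics(state)   # one and only one successor state
    return state

def physics(x):
    return 3.7 * x * (1.0 - x)   # example "physics": a chaotic logistic map

s0 = 0.2
print(evolve(s0, physics, 100) == evolve(s0, physics, 100))   # True: exactly one future
print(evolve(s0, physics, 100), evolve(s0 + 1e-12, physics, 100))
# The future is unique, but a change in the 12th decimal place of the t0 state
# produces a thoroughly different t1 state: determinism without predictability.
```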

I believe these definitions are all (nearly) equivalent. The backwards looking definition may have some problems if you’re looking at events that occur in the first instant that a system exists.

Now, if we’re in agreement, let’s move on to causation. This comes largely from chapter 3 of Freedom Evolves, and I will try to summarize as I understand the argument. He argues that the word “cause” is used in different ways when discussing determinism and when discussing nearly anything else.

What does determinism, by itself, tell us about the result (say, head) of a given coin flip? As per the definition above, we know that there is at least one sufficient condition for the result. After a bit of thought, we can figure out the form of that sufficient condition. Because determinism is only true of closed systems, and is only true going forward in time, the sufficient condition must be the complete description of the universe at some point before the coin was flipped. This is not to say that there are no smaller sufficient conditions, but this is the only one supplied by determinism.

If we want to know what causes a head, the only answer so far is "the exact state of the universe before you got a head." How do we get a better answer? Instead of looking at conditions exactly the way they happened, we should vary the conditions, and see if the coin still comes up heads. Varying the conditions (as in an experiment) or imagining doing so (as in a counterfactual) and seeing what changes is the only way we can investigate causes. We find the causes by finding the similarities in the conditions that produce heads (or tails). If we are lucky, we may find necessary conditions, or simple sufficient conditions. Or we may find statistically necessary or sufficient conditions (i.e., heads is unlikely to occur if X happens, or is likely if Y happens). However, it might be that there are no patterns in the data: nothing connects all the heads-generating tosses other than the fact that they generate heads.

This is what Dennett means by "has no cause," that there is nothing tying the heads-generating throws together. As I see it, the explanation that it depends on the exact force, etc. of the flip is no explanation at all. All you're saying is, "Flipping the coin in such a way that it lands on heads caused it to land on heads."
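Here's a toy version of that point (my own construction, not Dennett's): a perfectly deterministic "flip" whose result depends only on launch speed and spin rate. Rerun it with the same inputs and you always get the same face; but vary the conditions, collect the throws that came up heads, and nothing interesting ties them together, because in this toy model the outcome flips on a scale far finer than the throw-to-throw variation.

```python
import random

# A perfectly deterministic "flip": the face that shows depends only on the
# launch speed and the spin rate.  Same inputs, same result, every time.
def flip(speed, spin):
    airtime = 2 * speed / 9.81             # seconds aloft (simple projectile)
    half_turns = int(airtime * spin * 2)   # half-rotations completed before landing
    return "H" if half_turns % 2 == 0 else "T"

# Vary the conditions, as in an experiment, and look for a pattern.
random.seed(0)
for _ in range(10):
    speed = random.uniform(2.0, 3.0)       # launch speed, metres per second
    spin = random.uniform(35.0, 45.0)      # spin rate, revolutions per second
    print(f"speed={speed:.3f}  spin={spin:.3f}  ->  {flip(speed, spin)}")

# The heads-producing throws share nothing a macro-level observer could use:
# the outcome flips with tiny changes in speed or spin, far finer than the
# variation between throws, so no simple region of conditions "causes" heads.
```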

If I recall correctly, this account of causation gets used in different ways throughout the book, and becomes somewhat more subtle when he begins dealing with agents and other evolved things.