A strictly creationist definition of “species” has a sharp divide as well. This is why the effectiveness of a definition is important. When that definition of “species” could no longer meet the demands we placed on it, we discarded it, and found a new notion that turned out to be sufficiently close to the old one to keep using the same word.
A strictly libertarian notion of “free will” doesn’t meet the demands that we’ve placed on it. Is there something sufficiently like libertarian free will that does meet these demands?
It’s built around expectations, but only proximally. More fundamentally, it’s built around the rejection of some basic assumptions we make about free will and determinism. Some things that are rejected:
[ol]
[li]The validity of the infinite regress argument (without further modifications).[/li][li]“Could have done otherwise.” He argues that this criterion simply doesn’t capture what we want to know when thinking about moral culpability (an older paper on this topic is here (pdf)).[/li][li]“If determinism is true, you can’t change the future!” Somewhat related to the above, Dennett argues that the notion of “change the future” is logically incoherent. If it means anything, it must mean “change the expected future.”[/li][li]That “the past determines the future” is the same as “the past causes the future.” He argues that determinism actually has very little to say about causality. Further, for a decision to be free, what matters is that it is uncaused, not undetermined.[/li][/ol]
In some talks, he summarizes his compatibilism by saying (something to the effect of) “free will has nothing to do with physics.”
In logic design, the “self” consists of internal states. And you are correct: you cannot predict the output of a system unless you also know the values of its states. Believe me, my life would have been a lot easier for the past 30 years if this were not true; the way you create tests for things like microprocessors is by eliminating internal states, which makes things much, much simpler.
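To make the state-dependence concrete, here’s a toy sketch (my own example, assuming a one-bit T flip-flop, about the simplest sequential circuit there is): the same input sequence produces different outputs depending on the hidden internal state.

```python
def step(state, bit):
    """One clock tick of a T flip-flop: the output is the current
    state, and the input bit toggles the state for the next tick."""
    output = state
    next_state = state ^ bit  # toggle when bit == 1
    return next_state, output

def run(initial_state, bits):
    """Feed a bit sequence through the circuit, collecting outputs."""
    state, outputs = initial_state, []
    for b in bits:
        state, out = step(state, b)
        outputs.append(out)
    return outputs

# Identical inputs, different hidden state, different outputs:
print(run(0, [1, 0, 1]))  # [0, 1, 1]
print(run(1, [1, 0, 1]))  # [1, 0, 0]
```

Without knowing the initial state, the input sequence alone doesn’t determine the output, which is exactly why test engineering tries to eliminate internal states.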
Not all states are involved. The type of ice cream you ate at 10 might never influence anything you do. On the other hand, the type of ice cream I ate when I was 10 made me chase around a town in Germany looking for something similar.
Adding to the fun is the possibility that some more or less random or chaotic event changes a state, which cuts the deterministic connection between inputs and current states.
In these discussions, “deterministic” usually means that, if you know everything about the universe at some time, you can in principle predict everything about the future of the universe. The future supervenes on the past. With this definition, it’s quite easy to produce something that is neither random nor deterministic.
Read from right to left, you can determine the next number from the previous, by (1) dividing by two if even or (2) multiplying by 3 and adding 1 if odd. Reading from left to right, the sequence is not deterministic. But it’s not random either, except in the Bayesian sense that “random” means “I don’t have all the information.”
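A quick sketch of that asymmetry in Python (function name is mine, nothing standard):

```python
def collatz_step(n):
    """Right-to-left rule: halve an even number, else 3n + 1.
    This direction is fully deterministic."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Deterministic one way: 5 -> 16 -> 8 -> 4 -> 2 -> 1, every time.
seq = [5]
while seq[-1] != 1:
    seq.append(collatz_step(seq[-1]))
print(seq)  # [5, 16, 8, 4, 2, 1]

# Not deterministic the other way: two different numbers both
# step to 16, so the left-to-right successor of 16 isn't unique.
assert collatz_step(5) == collatz_step(32) == 16
```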
Of course, mere mortals such as ourselves probably can’t do much to tell the difference between randomness and non-determinism, but it seems to me that there is a possible distinction.
The first 100 digits of pi, read backwards, qualify just as well. However, I don’t buy it. Being non-deterministic is different from not having the algorithm that generated a sequence. I do agree that it can be hard to tell the one from real non-determinism at times.
You’re contradicting yourself. We either have “free will” or we are robots bound to deterministic biological processes. You can’t have both.
The problem with your question is that the terms “free” and “will” are vague, as is all common language. So, I would suggest you start from figuring out what your question means to you before you ask for an answer.
I don’t know. While I believe in a soul as the part of a person that’ll live on in the afterlife, I don’t know that it does anything different than our mind does. If it’s not our brain, not defined by our thoughts and personalities, what does a soul do? Asked this, some people will say, “oh, it determines if you’re a good person or not” but our brain does that through what we think of and act upon. I’ll admit that I have trouble reconciling the belief that our souls and minds are intertwined with the idea that people who are not capable of conscious awareness have souls as well.
So…given all of this, I’m not sure how these two types of free will would be different from one another.
I think I view the future as determined by the past, and the will, in a sense, as a corollary to this; i.e. the past determines the will, and the will influences the future, but overall, one need not consider the abstract notion of will in order to (microscopically) explain how the world evolves from one state to the next. The will, or more broadly, our consciousness, is just a particular (‘chunked’, as Douglas Hofstadter would put it) way to look at a part of the world, the same way a chair is a particular way to look at the world which, at bottom, consists of interacting microscopic particles, on which level the notion of ‘a chair’ never enters. That’s, I think, part of why I have a problem with your boxing: to me, it feels like trying to do particle physics while steadfastly holding on to concepts like ‘chair’, ‘table’, or ‘pizza’. They’re useful concepts for a description at the macroscopic level, but impossible to clearly define microscopically.
Which curiously is much the same position one might arrive at if one considers there to be no free will at all…
Could you elaborate? I mean, if you think that somebody with your exact same genes, born into exactly the same circumstances, living the exact same life, down to every single causal influence, turns out a person different from you, this would seem to imply strict non-determinism, right? Is that what you’re getting at?
Well, I was thinking about the (admittedly purely hypothetical) situation of being able to do a controlled experiment – i.e. set up the exact same circumstances again and again, and have the agent make the exact same choice again and again (say between cookies and brownies, which he likes exactly equally well). In this case, if he chooses cookies every time out of nothing but his own will to take cookies, there’d at least be something to think about.
Of course, this is an experiment impossible to realize; but I’d argue that a universe in which it is ‘possible in principle’ (that ol’ philosopher’s chestnut) is different from one in which it is in principle impossible, such as seems to be the case in ours.
And I agree, in principle. However, the obvious counterargument a dualist would raise is that just because determinism and randomness are an exhaustive set of possibilities in our world, it need not be in the ‘soul-world’, where there could be something else that allows for genuine freedom. But anyway, that’s just a ‘something unknown might do we don’t know what’-argument, which one can’t really meaningfully reason about.
But I think there’s merit in discussing the ‘deterministic but indeterminable’-option. A computer is fully deterministic (to the idealization that everything works smoothly), but its behaviour can’t be a priori exactly determined (halting problem, Rice’s theorem, just to have said those things out loud once, even though it’s understood that that’s essentially what we’ve been talking about already). So if we define freedom based on expectations, such a system would be free – as free as one could get. The point I’m trying to make here (though it’s been implied before) is that usually, the dichotomy between libertarian freedom and deterministic reality is taken to mean that either you’re free, or a sufficiently powerful intellect could predict your every move. However, for sufficiently complex systems, such prediction is strictly impossible even within a deterministic world – i.e. there is no Laplacian demon that, given the location and momentum of every atom in the world, could predict the future with perfect accuracy (even if the world were purely classical). So in that sense, there is only one way to find out how you might choose in a given situation, which is to have you make the choice. The information which choice you would make was previously not present anywhere in the universe. This is a kind of freedom, and I think it’s as much as we’re going to get.
But species are something that exist independently of the definition; i.e. the definition can either fit them, or not (or fit more or less well). Free will refers to a concept that, depending on the definition, might either exist or not.
I agree with most of those (in the context of compatibilist free will), but I don’t understand the last point. How is a determined decision not also caused?
That the future supervenes on the past does not entail that, knowing the past, one can predict the future. As a toy model, knowing a computer program, and all its input, doesn’t mean you can predict its output (that’s essentially Rice’s theorem I mentioned above).
One can easily create an algorithm that produces the numbers of the sequence in inverted order: (1) starting from some n, apply Collatz’ algorithm until you reach 1, (2) invert the order. (Of course, if the conjecture is false, the algorithm isn’t guaranteed to halt for all n.)
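For concreteness, a minimal sketch of that algorithm (with the same caveat: if the conjecture fails for some n, the loop below never terminates):

```python
def inverted_collatz(n):
    """Apply the Collatz map from n until reaching 1, then reverse,
    yielding the sequence in left-to-right (non-deterministic) order."""
    seq = [n]
    while seq[-1] != 1:
        m = seq[-1]
        seq.append(m // 2 if m % 2 == 0 else 3 * m + 1)
    return seq[::-1]

print(inverted_collatz(5))  # [1, 2, 4, 8, 16, 5]
```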
There’s a certain view that only subatomic particles really exist. Everything else is just “drawing boxes” in ways that seem to be useful. The question is, is there a definition, reasonably close to classical free will, that is useful for the tasks we put it to?
As has been stated in this thread, determinism is only true of closed systems. Further, the word “cause” can refer to a number of distinct things. For example, we can draw a distinction between necessary and sufficient causes, among other things. As an example, suppose you want to know the cause of World War 1. I tell you that WWI was caused by two things:
[ol]
[li]The assassination of Archduke Ferdinand* and[/li][li]On June 17, 1011 A.D. my direct male ancestor ate mutton for dinner.[/li][/ol]
Now, you balk at my description. What could my ancestor’s diet on a specific day have to do with WWI? However, I have given you a sufficient cause for WWI*. A sufficient cause along with any other true piece of information is still a sufficient cause. But when you’re asking about a sufficient cause, what you really want is a minimal sufficient cause, such that if you take any information away, you’re no longer left with a sufficient cause.
If you appeal to determinism to state the cause of something, it will only give you the maximal sufficient cause. Appealing to determinism tells you absolutely everything that happened before, and tells you that all of that is the cause. Post hoc, ergo propter hoc (ok, there’s a touch of hyperbole in that statement).
Determinism is an empirical statement. If you want to know about causality, you need to delve into theory. That is, you have to start asking “what if,” and determine how changing the circumstances changes the outcome. If changing some detail of the past has no effect on whether WWI occurs, how can we say that it caused WWI?
Consider Langton’s Ant (wiki page), that lives in a two-dimensional deterministic world with a very simple physics, but complicated behavior (it is a universal Turing machine). If you run the applet in the first link, you’ll eventually see Langton’s ant produce a periodic structure called a “highway.” What causes the highway to be formed? Since we have a deterministic system, it seems reasonable to say that the initial distribution of black and white tiles caused the highway to form. But when you change the initial distribution, the highway forms again. Indeed, it is conjectured that the highway will form given any finite initial arrangement of tiles. The formation of a highway is determined, by the initial configuration, to occur, yet the initial configuration did not cause the highway to form. That is to say, the initial configuration plays no explanatory role in the highway formation.
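The rule itself takes only a few lines; here’s a sketch (assuming an empty initial grid, on which the conjectured highway shows up after roughly ten thousand steps if you let it run):

```python
def langtons_ant(steps):
    """Simulate Langton's ant on an initially all-white grid.
    Rule: on white, turn clockwise; on black, turn counter-clockwise;
    then flip the square's colour and move forward one cell."""
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # up, right, down, left
    black = set()            # coordinates of black squares
    pos, heading = (0, 0), 0
    for _ in range(steps):
        if pos in black:
            heading = (heading - 1) % 4
            black.remove(pos)
        else:
            heading = (heading + 1) % 4
            black.add(pos)
        pos = (pos[0] + dirs[heading][0], pos[1] + dirs[heading][1])
    return black

# Fully deterministic: the same step count always yields the same grid.
assert langtons_ant(1000) == langtons_ant(1000)
```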
Computability theory isn’t my area, so it’s taking me a bit of time to work through Rice’s theorem, but it seems to me that the “indeterminableness” of Rice’s theorem and the halting problem comes from some measure of unboundedness in the problem. This unboundedness asks more of Laplace’s demon than it’s required to do. Consider the following (demonic) analog of the halting problem:
We wish to design a Laplacian demon that can decide whether any given deterministic universe, with any given initial condition, halts (say, succumbs to heat death or enters a steady state). The halting problem suggests that we are destined to fail in our endeavor. However, for the purposes of discussion, we don’t need such a robust demon. We need a demon that can answer the question about this universe, with these initial conditions (or, worst case, this universe and any initial condition). Maybe this is still undecidable, but even in that case, we only have to restrict the demon to answering questions about finite time.
I see no reason why we need to restrict prediction to a priori. “Run the program and observe the results” is a perfectly valid form of prediction for Laplace’s demon.
Consider the reverse Collatz sequence: 1,2,4,8,16. What’s the next number? It can be either 5 or 32. There are situations where you can’t uniquely determine the next number in the sequence, even when the forward sequence is deterministic. Basically, the past supervenes on the future in a reverse Collatz sequence.
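The branching can be computed directly; a sketch (assuming my predecessor rule: 2n is always a predecessor, and (n - 1)/3 is one exactly when it’s an odd integer):

```python
def predecessors(n):
    """All m whose Collatz step lands on n: 2n always does (2n is even
    and halves to n); (n - 1) / 3 does only when it is an odd integer,
    since the 3m + 1 branch applies to odd m alone."""
    preds = {2 * n}
    if (n - 1) % 3 == 0:
        m = (n - 1) // 3
        if m % 2 == 1:
            preds.add(m)
    return preds

print(predecessors(16))  # {32, 5}: the fork after 1, 2, 4, 8, 16
print(predecessors(8))   # {16}: no fork here
```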
For the sake of argument, let’s pretend for a moment that history really is this simple.
Yes. There are always random elements at work in the environment, which would negate strict determinism. You can’t pass this off as “causal influence” because it’s non-deterministic. Every time you roll the tape, some random environmental factor is going to be different.
I agree that that’s an interesting question, but evidently, there’s still discussion on whether or not classical free will exists; and by failing to discriminate between different notions of free will, these discussions are too easily confused.
(And just as a side note, there’s also a view, which perhaps somewhat ironically most particle theorists would subscribe to, that particles themselves are just boxes that are useful in certain situations – namely, those of approximately flat spacetime geometry – while what’s really fundamental are (quantum) fields. The thing with boxes is that they can be very useful on their respective level of description, but one must take care not to fall victim to any level confusion.)
But isn’t that generally unknowable? Any cause of an event, i.e. anything to its causal past – anything at any point in spacetime from which a light ray could have reached it – may be necessary, and hence, no cause that fails to include it sufficient. In other words, it may well be the case that had your ancestor chosen to go with the fish, it’s possible that WWI wouldn’t have happened, or what happened would have been sufficiently different from WWI that it wouldn’t make good sense to call it that. And in a very strict sense, what would have happened had your ancestor eaten something else would not have been what we call WWI, as the world would not have been our world. A world is just a set of true propositions, and those sets would differ depending on your ancestor’s choice of meal; and with WWI, we refer to the events that happened between 1914 and 1918 in the context of our world.
Of course, that view is much too strict to be very useful, as our world is only known to a certain, limited, and varying accuracy, so there probably exists no entity that could tell the difference between the world in which your ancestor ate fish, and the world in which he had the meat instead. But even then, the question is not what is necessary and sufficient for WWI to happen, but what is necessary and sufficient for us to call what happens WWI (in the context of our world, or any effectively indistinguishable one), i.e. what is necessary and sufficient for us to be unable to distinguish between possible worlds. It’s again a question of boxing.
The point of that digression largely being that all we can hope to know is a maximal sufficient cause; anything we might subtract from this may leave the cause insufficient, at least in principle.
That’s interesting! I would not have thought that an automaton that eventually produces repetitive behaviour could be computationally universal.
Well, I suppose the highway may be thought of as an attractor of the system. A simpler analogy would be that of a pendulum, which always reaches its rest point from any initial position.
Well, that kind of depends on what questions you would expect the demon to answer. It could, for example, predict what configuration a given gravitational many-body system, such as a solar system, will be in after a finite time of evolution, though it could not do so in a way that is in principle more efficient than having the system evolve, i.e. it would have to resort to numerical simulation; but it could not, in general, answer the question whether any specific configuration will ever be attained. And it may well be that questions of interest depend on whether or not some such configuration might occur, which it then could not answer.
This is conjectural – everything we know about the universe is compatible with strict determinism sans random elements.
I think a fundamental component of the free will/determinism dichotomy is the nature of time: is any possible future “manufactured” by the events of the present, or are we fated to experience a single future, as set in stone as the past is?
Yes, actually. As noted upthread, quantum mechanics itself is actually perfectly deterministic – if you give me the wave function of a system at some point in time and the system’s dynamics, I can calculate the wave function at any later time (just as well as that’s possible in classical mechanics, at any rate). It’s only when we consider the question how our apparently classical world emerges from all that quantum stuff that true randomness potentially enters, but even then, it’s a matter of interpretation: interpretations that consider the wave function collapse as something real, i.e. that out of all the quantum possibilities, one is actually and definitely chosen and made actual – Copenhagen, consciousness causes collapse, and objective collapse theories (though to what extent those are properly called ‘interpretations’ is debatable) are of that type – typically include a genuine random element, whereas interpretations in which collapse is only apparent, i.e. where there’s no true divide between quantum and classical world, and the ‘quantumness’ is simply unobservable (or very unlikely to be observed) in our everyday experience, preserve determinism – interpretations of that kind are, for example, the many worlds, decoherence, consistent histories, or relational interpretation. Another category is formed by so-called hidden variable theories, in which the quantum description is only a consequence of our ignorance of the true fundamental dynamics; the most famous of those is the Bohmian interpretation, which is also fully deterministic.
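To illustrate the deterministic part, here’s a toy calculation (my own example: a two-level system with Hamiltonian H = σ_x and ħ = 1, where the propagator has the closed form U(t) = cos(t) I - i sin(t) σ_x):

```python
import math

def evolve(psi, t):
    """Schroedinger evolution of a two-level state under H = sigma_x
    (hbar = 1), via U(t) = cos(t) * I - 1j * sin(t) * sigma_x.
    The same (psi, t) always gives the same result: unitary quantum
    evolution is fully deterministic."""
    a, b = psi
    c, s = math.cos(t), math.sin(t)
    return (c * a - 1j * s * b, c * b - 1j * s * a)

psi0 = (1.0, 0.0)
psi1 = evolve(psi0, math.pi / 2)  # a quarter period flips the state
# Norm is preserved (unitarity), and repeated runs agree exactly.
assert abs(abs(psi1[0])**2 + abs(psi1[1])**2 - 1) < 1e-12
assert evolve(psi0, 0.3) == evolve(psi0, 0.3)
```

The randomness only ever enters when you ask which measurement outcome you actually see, which is exactly the step the interpretations disagree about.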
There’s a good argument to be made that interpretations that don’t impose some sort of schism between the classical and quantum world are more natural, and preferable from an Occam’s razor point of view, but that would take us too far off topic, perhaps. The main point is that every consequence of quantum mechanics can be explained equally well with a deterministic as with a probabilistic fundamental picture.
That’s got nothing to do with radioactive decay. It is impossible to predict when a given unstable atom will decay. It is non-deterministic. And even with quantum mechanics, all you’re calculating is a *probability* function. That’s not strictly deterministic either. It tells you probabilities of something, not absolutes.
Another, similarly non-deterministic process is Brownian motion. It is totally a stochastic process.
That doesn’t follow, actually (that something is unpredictable does not entail that it is non-deterministic), and it’s also not true. In hidden variable theories, it would be possible to predict when a nucleus decays given knowledge of the hidden variables; in many-worlds and related interpretations, the nucleus decays at every possible time within the overall quantum superposition, for which the wave function gives you the complete description.
No, you calculate the state of a quantum system as given by a ray in a Hilbert space; the probability of observing a given state is a quantity derived from this. Taking only the ‘quantum world’ into account, the formalism gives you a complete description of the state of the system. In a many worlds ontology, you can envision the probability of a certain outcome roughly as the proportion of ‘worlds’ in which it occurs, where the quantum mechanical formalism describes the entire ensemble of worlds, rather than just any specific one. The apparent indeterminism then enters through the fact that we only have access to the part of the ensemble that hasn’t decohered from us, our ‘world’.
It’s a chaotic process, but it’s also completely deterministic (though it can be modelled non-deterministically using random walks).
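As a side note, the usual random-walk model drives this home: the pseudo-random generator behind it is itself perfectly deterministic given a seed. A sketch (names are mine):

```python
import random

def random_walk(steps, seed):
    """1-D random walk as a toy model of Brownian motion. The PRNG
    is deterministic: the same seed always reproduces the same path,
    so the 'randomness' here is really just hidden state."""
    rng = random.Random(seed)
    pos, path = 0, [0]
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

# Looks stochastic, but is exactly repeatable:
assert random_walk(100, seed=42) == random_walk(100, seed=42)
```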
No, that something is **random** does. It’s unpredictable *because* it’s random (as opposed to chaotic), but it’s the randomness that entails the stochastic nature.
Hidden variable theories are *exactly* the same as saying “magic happens!” They’re bunk. Wishful thinking disguised as science. Just like the MWI is unfalsifiable, and so falls out of the realm of science into science fiction. If that’s what you’re pinning your defence of pure determinism on, we don’t really have anything more to say to each other on this matter.
Although, if your argument amounts to “we have no free will … if you include every possible world”, that just doesn’t bother my compatibilism, any more than the Modal Ontological Proof of God turned me into a theist.
No. Brownian motion isn’t chaotic, it’s random, because it depends on the *similarly* random interactions of the particle with fluid atoms, all of which are moving randomly. Pure stochastic motion. Molecules are not Newtonian solids and their interactions are influenced by indeterministic interactions. Unless one chooses to introduce the Handwavium of MWI and Bohm (which are far from the majority accepted interpretations, BTW), which all evolved from a stance of “Hey, let’s assume the Universe is deterministic, and make QM fit that, because we think that Albert was right about the dice thing, or something”. If you can’t see the logical flaw in arguing for determinism using theories that assume determinism as a premise, I can’t really help you.
You are really overstating things here. Hidden variable theories must be nonlocal, yes. That’s not quite the same as “magic happens.” De Broglie–Bohm theory, for instance, is not “bunk.” John Bell, for example, certainly didn’t think so.
Also, I think you have a distorted view of the MWI – being unfalsifiable is a red herring – it’s on the same footing as Copenhagen et al., and if you wish to bring Occam’s razor into it, the MWI has the upper hand. The MWI is a bit misunderstood – so many people have the mistaken impression that it actually hypothesizes adding multiple universes to the theory, when in fact it is merely a mathematical re-statement of vanilla Schrödinger wave function evolution.
That’s not *quite* true – there are local hidden variable theories – but they are incompatible with QM and so that’s fairly accurate for this discussion.
Effectively, yes, it is.
John Bell was wrong. I *like* DB-B for preserving observer independence as valid if nothing else, but liking it isn’t the same as agreeing with it. Nor do I agree it’s a strict “hidden variable” theory on the order of some of its descendants. I think Deutsch was right to call that family of theories “parallel-universe theories in a state of chronic denial,” even though Deutsch was actually trying to include them in his favoured MWI camp. People, brilliant people, will go to ridiculous lengths to preserve strict determinism. I have no idea why. Start without it and really, Copenhagen or its derivatives, such as stochastic theory, are looking pretty damn good.
Falsifiability is the essence of hard science. It’s no red herring.
Copenhagen is way more widely accepted. And experimentally, MWI looks exactly like classic QM, so it’s just a metaphysical preference, not a scientific one.
I don’t. Using Occam’s Razor as a bright-line eliminative isn’t science, it’s aesthetics. IMO.
I don’t make that mistake. But it is still not science. At least, not Popperian science, which is why I call it science fiction.
They’re perfectly self-consistent, and reproduce all of ordinary QM’s predictions.
Any interpretation of QM is unfalsifiable (or falsifiable only to the extent that QM itself is, in which case they’re all equally falsifiable) – Copenhagen as much as MWI. If you want to excise anything unfalsifiable from argument, then you have no business talking about the interpretation of QM. However, MW essentially consists only of pure quantum mechanics, while Copenhagen has to add unnecessary structure (to cause the collapse), and axiomatically postulate randomness.
And quantum mechanical evolution manifestly is deterministic, in any interpretation.
Brownian motion may depend on quantum mechanics in some specific realisation, in which case it is only non-deterministic if QM is. However, Brownian motion itself works just as well on a classical scale, where particles are little hard spheres bouncing around with a certain distribution of velocities. In that case, randomness is entirely based on our ignorance of this distribution.
Well, Copenhagen is still the interpretation that’s in the most textbooks, so it’s the interpretation of anybody who doesn’t really think much about the interpretation of quantum mechanics. But I’d be very surprised if decoherence-based approaches didn’t come in a close second; they offer a far more natural explanation for the appearance of wave function collapse (actually, offering an explanation at all already puts them a nose length in front of Copenhagen).
No, they evolved from a stance of “Hey, let’s just go with what quantum mechanics tells us rather than having something unknown do we don’t know what”.
As I said, Copenhagen is the interpretation in which randomness is put in axiomatically…
Anyway, my point wasn’t necessarily that QM tells us that the universe is deterministic, but that it certainly doesn’t tell us it is random – it is possible to interpret it both ways, and no experiment can decide between those interpretations.