Do humans have free will?

It’s a bit strange how you can take me to task for the continued use of the term ‘free will’, while simultaneously arguing to retain the term ‘choice’, where there manifestly is none. There may be a habit of speaking of choice in the case of complex systems, such as chess computers, but this just stems from the mistaken belief that there is something like genuine choice—which is exactly what you ought to want to root out.

We don’t talk about ‘choice’ in systems such as a rock falling in a gravitational field. But in your universe, human beings don’t have any more choice than that rock has. Calling this ‘choice’ seems far less in line with the usual concept of choice than my notion of free will is.

But that’s the thing: they’re not possible in principle in your universe. One might be deceived into thinking so, but that’s just for lack of information.

But that’s not the idea I want to refine, but rather the idea that it is meant to subserve, namely, that of responsibility for one’s actions, of agency—that’s what people want from their ‘free will’, after all.

Only in so far as that it carries misleading associations. On a deterministic model such as yours, the answer to ‘why did A happen?’ always essentially boils down to ‘the boundary conditions of the universe + the laws of physics’. That isn’t true on my model, as that set of facts is not sufficient to uniquely determine a given outcome; instead, every way to account for that outcome includes a process that is at the very least computationally equivalent to the agent making that particular choice—a simulation of it, or a copy, or what have you. So the set of causes of (or perhaps, reasons for) event A does not solely consist of {boundary conditions of the universe, laws of physics}—this set is strictly and logically insufficient to determine event A, in the same sense that the axioms of a given formal system are insufficient to determine the truth of the Gödel sentence. Rather, the set of causes must at least include {boundary conditions of the universe, laws of physics, agent choosing A (or a computational equivalent)}.

No, I think that’s a false equivalence.

“Choosing” is a word we use to describe a real observed phenomenon. We could abandon the word because it may be seen as entailing freedom, but that too (without careful explanation) tends to lead to confusion about just what our claims are. In my experience, motivated solely by a desire for clarity, the best way to state the position is to retain the word “choice” for the phenomenon, but to explain that our account of the process underlying the phenomenon does not involve the “freedom” that one might expect, but is rather deterministic computation. Complex mental computation and a falling rock are clearly different things, notwithstanding that both are deterministic. So I really see no problem with retaining the term “choice” for the former but not applying it to the latter.

“Free will” (the could-have-done-otherwise intuitive free will) does not correspond to any observed phenomenon. Moreover, it is logically incoherent nonsense. So there is just no similar motivation to retain the term at all. And if you think compatibilists are achieving clarity by attaching the same term to an unrelated idea - well, I don’t think it’s working.

Actually, in your universe, that’s not true: no system ever has made a choice between alternatives A and B. Everything’s on rails, and always has been, so in particular, no choosing has ever been observed.

There is quite clearly an observed phenomenon: you ask me if I want meat or fish, when both are available, and I say “fish”. We could describe our account of that phenomenon as “I did not choose fish”, but I think that’s quite confusing. Clearly, in colloquial language, I did choose fish, because “choosing” is a label we all use in everyday speech for that phenomenon. So I think it’s clearer to take account of that and offer an explanation along these lines: “When I chose fish, that choice did not entail the freedom that you might intuitively expect; rather it was deterministic computation, and in fact in those precise circumstances only one outcome was possible”.

Whereas on my account, I can simply say ‘I chose fish’.

No you can’t. You’re the waiter. I chose fish.

Still, on your account, we are, strictly speaking, wrong when we talk about choosing—there is as little a matter of choice as a rock has when plummeting to Earth, even if it careens into obstacles on its way, as with a Galton board (a board with pegs, such as is used in demonstrating the normal distribution)—we may talk about the rock (or ball bearing, or whatever) choosing a certain path, but only metaphorically. Moreover, on your account, said metaphorical speech does not merely refer to something nonexistent, but to something impossible.

Whereas on my account, talk of choice implies just the freedom we actually do have, and imbues us with actual agency—we actually did make a choice, and in so doing, realized one alternative out of competing equally possible ones, in such a way that it is directly this choosing that is responsible for the outcome.

The “choice” is still arrived at solely through deterministic computation. I agree that there is a notion of agency here, but that’s always present: the thing that computes is causally responsible for the outcome in any account of choice. But the fact that an irreducible process is unpredictable does not make it “free”. I don’t think that in your account the “choosing” is any more in accord with intuitive could-have-done-otherwise free will than in mine.

All computationally irreducible processes are unpredictable in the way you describe, but I’m sure you would not claim that they all have free will. Perhaps your other conditions are intended to deal with this objection?

That’s not what I mean. The important part is that, contrary to what one might expect, and what is in fact true for simple systems, the additional choice process is necessary, in that without it, the outcome would not be realized; while for, say, a rock on a ballistic trajectory, all that is needed to derive where it will end up is the initial conditions.

The choice adds new information—it’s a creative process, in this sense. This is important: without this additional information, it is impossible to derive the outcome. That the information that is created is always the same, if it is created in the same way, is no problem for agency, any more than the immutability of the fact that I ordered pizza yesterday implies that I didn’t order it freely.

Yes, that’s what the reference to intention takes care of. Many systems are free, but most of them don’t have a will.

I’m not Riemann, but I agree that I would still use the word “choice” to refer to an output of a complex system.

For example, I often use the example of a computer that’s running a manufacturing process of some kind. The computer takes in lots of inputs, and decides to adjust the temperature of the chemical etch in Vat 7. I would refer to this as a “choice” even though all elements of it were determined. A compatibilist would say that the factory computer has free will, since its decision was not externally forced upon it, but resulted from its programming and inputs. I think that’s a weird definition of “free will.”
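To make that factory example concrete, here is a minimal sketch (in Python) of what such a controller’s “decision” might look like. The sensor names, target value, and gain are invented for illustration, not taken from any real process; the point is only that identical inputs always produce the identical adjustment.

```python
# Hypothetical deterministic controller for the chemical etch in Vat 7.
# All sensor names, targets, and gains below are invented for illustration.

def decide_vat7_adjustment(readings: dict) -> float:
    """Return a temperature adjustment (degrees C) for Vat 7.

    Same readings in, same adjustment out: the "choice" follows entirely
    from the program and its inputs.
    """
    target_etch_rate = 1.0                     # assumed target, arbitrary units
    error = target_etch_rate - readings["etch_rate"]
    gain = 2.5                                 # assumed proportional gain
    adjustment = gain * error
    return max(-5.0, min(5.0, adjustment))     # clamp to assumed safe limits


if __name__ == "__main__":
    sensor_readings = {"etch_rate": 0.8, "ph": 6.9, "ambient_temp": 21.3}
    delta = decide_vat7_adjustment(sensor_readings)
    print(f"Adjust Vat 7 temperature by {delta:+.2f} C")
```

Whether a fully determined decision like this deserves to be called a “choice”, let alone “free will”, is exactly the disagreement above.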

I agree. When I hear people arguing along these lines, it is very difficult for me not to think they are defining “free will” in such a way as to make it an irrefutable concept. But if the bar for “free will” were that low, people wouldn’t wring their hands over who has it and who doesn’t. We would judge everyone’s decision-making ability exactly the same way if that were all “free will” amounted to. The fact that we don’t means that we implicitly understand that free will isn’t simply the ability to make a choice in response to the information that’s available to the “chooser”. Free will would be the exact opposite of this.

I’m with Riemann on this. I see no reason to believe that our universe allows for what you think it allows for.

The last clause here is incorrect as a plain matter of fact.

The historical justification for legal punishment, which works its way directly into the sentencing for many crimes, isn’t just about deterrence, and it isn’t purely utilitarian (improving the lot of the most people, or getting the criminal to reform). Legal punishment is partly about retribution: treating criminals in the way that they “deserve”. The notion of retribution is embedded into the law, right next to notions like rehabilitation and deterrence. If the majority of people accepted the notion of a deterministic universe, that might not be the case. But a sense of proportional retribution seems to be instinctual to us, probably for solid evolutionary reasons.

It’s extremely easy to define “choice” within a deterministic framework.

I have had conversations with other determinists, and we all speak openly about making choices. We have a specific idea in mind, quite precisely defined, when we speak in this way. And if any believer in “free will” (whatever that’s supposed to mean) were to listen to our conversation, they wouldn’t have any instinctive problem in following that particular part of the conversation. This is because a reasonable deterministic definition of “choice” actually nests the more intuitive/subjective/hazy notion that most people tend to carry around in their heads.
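For what it’s worth, here is one way such a deterministic definition might be sketched in code. The preference weights are made up; the point is only that “choosing” can be cashed out as evaluating the available alternatives against the chooser’s internal state, which covers the everyday fish-or-meat case without any could-have-done-otherwise freedom.

```python
# A toy deterministic rendering of "choice": evaluate the available
# alternatives against the agent's internal preferences and return the
# best-scoring one. The preference values are invented for illustration.

def choose(alternatives: list[str], preferences: dict[str, float]) -> str:
    """Deterministically select the alternative the agent prefers most."""
    # max() with a key function is deterministic: same inputs, same output
    # (ties go to the earliest-listed alternative, also deterministically).
    return max(alternatives, key=lambda option: preferences.get(option, 0.0))


if __name__ == "__main__":
    my_state = {"fish": 0.7, "meat": 0.4}          # assumed internal state
    print(choose(["meat", "fish"], my_state))      # always prints "fish"
```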

My main purpose in these threads is never to convince people about yay-determinism. That’s a very difficult argument that takes a lot of time. It’s much easier and more straightforward to just make it clear that the things we see around us are not at all incompatible with us living in a deterministic universe.

And yet we do talk about choice when we’re running computer simulations of sufficiently sophisticated agents embedded into constructed worlds.

There is a very specific reason that we use this word in this context: it saves time. It’s the same basic reason why we talk about temperature and pressure, even though those are aggregate phenomena and the real action is happening from the interactions of countless little particles: it’s very often more convenient to the purpose at hand to have a discussion of aggregate events using aggregate words rather than, so to speak, trying to solve a system of a hundred million equations.
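As a toy illustration of that aggregate-description point, here is a sketch that summarizes a hundred thousand individual particle speeds as a single “temperature” via the kinetic-theory relation mean KE = (3/2) k_B T. The particle count, mass, and speed distribution are arbitrary choices for illustration, not a realistic gas simulation; the convenience is in talking about the one aggregate number rather than the hundred thousand micro-variables.

```python
import random

# Toy illustration of an aggregate quantity: "temperature" as one number
# summarizing the kinetic energies of many individual particles.
# Particle count, mass, and the speed distribution are arbitrary choices
# for illustration, not a realistic gas simulation.

K_B = 1.380649e-23      # Boltzmann constant, J/K
MASS = 6.63e-26         # roughly the mass of an argon atom, kg

random.seed(0)
speeds = [random.uniform(200.0, 600.0) for _ in range(100_000)]  # m/s

# Mean kinetic energy per particle, then T from <KE> = (3/2) * k_B * T.
mean_ke = sum(0.5 * MASS * v ** 2 for v in speeds) / len(speeds)
temperature = 2.0 * mean_ke / (3.0 * K_B)

print(f"{len(speeds)} particle speeds summarized as T of about {temperature:.0f} K")
```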

These don’t strike me as particularly charitable statements, given the context of the conversation.

I seem to have missed where Riemann started calling people morons.

That’s very easy for a person to say, or even fool themselves into believing.

But saying it doesn’t make it so.

When the word ‘choice’ is applied to a deterministic system, it’s in a metaphorical sense, because we don’t have the necessary information/computational resources to eliminate our uncertainty of the outcome, so it appears to us as if the system ‘chooses’ an outcome. But if the notion of choice is itself incoherent, as Riemann argues, then we have no grounds for applying the metaphor: there is not, in fact, a process of choosing that a deterministic system’s behavior could resemble—that sort of thing simply doesn’t exist in the world. So what are we saying when we say that a complex, deterministic system ‘chooses’, if that word doesn’t refer to anything that can even be coherently formulated?

Of course, you can say, screw that, I’ll continue talking about choice. That’s fine by me. All I was complaining about was the idea that my referring to ‘free will’ is somehow illegitimate, whereas the determinist’s calling upon the concept of choice is perfectly fine. I think there’s a far greater continuity between what I call ‘free will’ and the folk-theoretic sense of free will than there is between deterministic ‘choice’ and the everyday sense of the term—the latter, after all, implies the realization of one alternative out of equally possible options, while on the former, that notion is simply nonsense; so it’s essentially the diametrical opposite.

Well, I’ve given a detailed description of just how that comes about; if you see any fault in that, or have any question, I’d be happy to discuss it with you.

It’s true that historically, guilt was taken to be a justification for punishment; but well, historically, people have been wrong about all sorts of things. There simply is no reason to accept a moral axiom of the form ‘guilt merits punishment’—no matter how many people may have done so historically.

Again, the right angle of attack here is the belief that ‘guilt merits punishment’, which has nothing going for it except lots of unfortunate historical ballast. Whether the universe is deterministic, indeterministic, or even allows for ‘could have done otherwise’-style choice is then completely beside the point.

Actually, there’s at least one point of data that’s hard to explain, if we assume that there is no genuine mental agency: namely, that we feel like we do have mental agency.

I know, the usual story here is that evolution selected for our having the illusion of mental agency, in order to keep us happy, or whatever; but that story is circular, since it ultimately depends on us actually having mental agency. After all, evolution can act only on the causes of behavior: genes that cause us not to reproduce, for instance, are weeded out. But then, in order to select for the illusion of having free will, said illusion must be causally efficacious in some way; but in a (naively) deterministic universe, where mind is just along for the ride, there is no way for it to do so—as long as we behave the same way whether we believe we have free will or not, evolution can’t select for belief in free will; and if we behave differently based on which of the two we believe, then we do have agency after all.

Which, again, is metaphorically intelligible if choice, itself, is a coherent concept; but if it’s not, then I’m left wondering what exactly you could mean when you talk this way.

I meant specifically his remarks to the effect that ‘could have done otherwise’-style free will is obviously incoherent, that half the people in this thread nevertheless believe it, and his writing that belief off as ‘magical thinking’.

Yeah, it’s just a tiny bit presumptuous for you to think that you have that sort of insight into my motives, I’m afraid.

(For one thing, you might look at some past discussions on free will, where you’ll see me float many of the same arguments against it used by yourself and Riemann in this thread—as well as some additional ones: has the regress argument even been introduced yet?—but, well, I eventually got better. So I’m certainly not a believer in free will because I want to believe; rather, further reflection on the topic has caused me to revise my earlier stance. Much to my chagrin, in some ways: it was always a kind of relief to believe that those who wronged me couldn’t have done otherwise, that it was in some sense not their fault.)

This seems like a good point to jump back into the thread upon - the difference between compatibilists and noncompatibilists, at its core, seems to be that noncompatibilists believe that there is something beyond the rational, analytical, calculating, determining, deterministic processes in people’s minds that lead people to make the decisions they do. And this is the part that makes no sense to me. What is the goal here? Compatibilists believe that people think with their minds, and that their minds determine their actions. Noncompatibilists think that… randomity? magic? …weighs in on decisions in a way that precludes or overrides rational thought. But what does this buy them? What is added by claiming that decisions are not made by thought, within the deterministic mechanisms of the brain?
Beyond that really weird point of division, the other problems between compatibilists and noncompatibilists seem to be confusion over terminology. Choice, for example, is the process of choosing an outcome from many possibilities. This is totally possible for a deterministic agent to do, of course; they’re just subject to their own internal rules and state in determining how they arrive at their decisions. You know, like actual people are. “Free” choice, on the other hand…means exactly the same thing, I guess? I’ve never heard any coherent explanation as to how you can get freer than relying on your own personal knowledge and opinions to make a decision.

It took me a while to read this, because, at least to me, compatibilists are somewhere between those who believe in libertarian free will and those who say our brain operates according to physics, and we don’t wish to call that “free will.” So you’re grouping two diametrically opposed positions under one term.

I’m not a compatibilist, so I guess that makes me a noncompatibilist? But I don’t believe in the magical kind of free will, I think our brains are physical machines and it’s not helpful to redefine the concept of “free will” to be what we know the brain does.

“Libertarian free will”, as best I can tell, requires magic, or at least a callous disregard for how people actually think. It posits that the mind doesn’t operate on logical processes - it posits that decisions are not made for reasons. Because decisions made for reasons are not random and, if one knew all my reasons and how I weigh each one, then they could predict how I would decide things with perfect accuracy.

The term “free will”, without the “libertarian” prefix, seems much less magically defined, and honestly I’ve never heard it defined in a way that would preclude, say, a very complex robot from having it. All that vanilla free will seems to require is an agent - that is, a processing mechanism with a distinct status and access to a distinct set of knowledge at any point in time - that has the ability to consider and choose between actions/responses based solely on its internal state and knowledge, without interference from anything outside the agent’s mind (and particularly without interference from any outside agent).

There’s nothing about that kind of free will that is incompatible with a deterministic universe - or with a universe that has randomity in it either. Which brings me to another point - whether or not the universe itself is deterministic, I posit that human cognition acts in a basically deterministic manner, correcting *out* any randomity that may be present in the process. Because randomity isn’t a useful input into cognitive processes; acting randomly doesn’t aid in survival or happiness. It would be static noise in the decision making process, and even if it added unpredictability, it wouldn’t add “freedom” in any real sense, since randomity would function as something external to the agent’s mind impacting it, pretty much the opposite of what would be required for it to meet the definition of free will.

If we reject the idea that randomity has any useful place in a free mind, then all that is left is a deterministic, reasoned process behind decisions. Whether you think this process takes place in the physical brain or in some weird magic extraphysical glob of consciousness is largely immaterial - if your mind has reason and coherence to it, then it operates in a compatibilist way, just on the strength of coherence and consistency over time alone.

I have never said this. I said that could-have-done-otherwise free will is an incoherent idea.

You want to treat “choice” and “free will” as synonyms. I have explained why I don’t, and they are not synonyms in colloquial English. I don’t regard my use of “choice” as metaphorical; it’s simply a label for the everyday phenomenon of using complex computation (often by a brain) to make decisions. The account of the process underlying “choice” as either deterministic or “free” should not, in my opinion, force us to avoid using the colloquial label for the phenomenon under discussion.

And as I’ve said a couple of times in this thread, whenever the no-free-will position is stated as “we do not choose”, it tends to be misunderstood – so, although I know that you understand my underlying position, for clarity I’d prefer that you don’t represent my position this way, even if you feel the words are synonymous.

And yet the legal system as currently constituted, not just historically but even today, continues to use that axiom as one of the guidelines (not the only, but one) for sentencing.

I agree with everything you wrote in this excerpt, but your admirable ideals don’t actually describe the present legal system. This is, I believe, part of the point Riemann was making. If the majority of people had more sensible views on determinism (in our sense of “sensible”), the legal system would very likely be better. Your own view seems to be that that would be sufficient for legal change, but not necessary. There are other possibilities, other paths, other angles of attack, to get to a more fundamentally just legal system.

“Guilt merits punishment” has got a helluva lot more going for it than just historical ballast.

First, there are strong game theoretical reasons why punishment of transgressors is necessary in order to achieve large scale cooperation and coordination. And because punishment is so necessary, it makes sense that we might develop an instinct to appreciate – if not always “enjoy” – the punishment of the guilty on a fundamental level. Psychological experiments have shown that people will even sacrifice some of their own personal gain to be given the opportunity to bring “justice” to those who violate basic rules of fairness within a game. “Guilt merits punishment” is just the sort of moral instinct that might help group cohesion.

Second, and maybe just as important, the idea of “free will” as popularly conceived seems to fit readily into the same picture, as the reason why guilt might merit punishment. This is all of a piece. If a transgressor has chosen to violate the moral principles of fairness and justice out of their own “free will”, then one might more easily argue that they are fully “deserving” of the punishment that is meted out to them, and in fact, that it might be a moral violation to not punish someone who is deserving of such. These aren’t my views, but then, those aren’t my views precisely because I find the notion of “deserve” to be totally preposterous given my current beliefs about how the universe actually seems to work. That’s why I choose the angle of attack that I do, and I imagine Riemann has similar beliefs. Nothing is likely to work, but a reasonable belief in determinism strikes me at the moment as the most plausible in a set of unlikely prospects.

Ideally, that would be true.

And yet one of the most common responses in “free will” discussions is people panicking over the justice system, or more generally, over how people would behave if they didn’t believe in “free will”. I would guess that your admirable ideals make you a small minority among the actual population. There are strong psychological underpinnings, I believe, that run directly contrary to your ideals and also to mine.

This is another reason why I try to limit my contribution to these threads to arguing that a deterministic universe actually is a consistent position, rather than the more ambitious task of trying to convince other people that they should be determinist also.

I don’t find that particularly hard to explain.

Of course, I would say that. I appreciate that others have a different perspective. But I wouldn’t want to talk about your “usual story” of evolution, or whatever, without first discussing more fundamental rules for deciding what is reasonable to believe, and what not. It is those rules for deciding what is reasonable to believe that are at the foundation of everything else.

Compared to some of the other stuff in this thread, this is a very, very, very simple idea.

In the past, I’ve gone into this sort of discussion in more depth, and I can dredge that up if you’d like. But really, it’s not remotely an issue to define “choice” sensibly and coherently in this sort of setting.

One of my go-to questions whenever I’m discussing “free will” is: what is the physics of it? How is it supposed to work? Most people haven’t thought sufficiently long about this issue to even have an answer to the question, or worse, they don’t think the question is relevant.

To your credit, you have considered this. You have an answer. You’ve given a detailed description of how you think it comes about. I appreciate that, because it’s more effort than pretty much anyone else puts in.

So at this point, what I’m not sure about is why anybody else in the world would ever believe that description of yours if they weren’t already inclined to do so.

I don’t have any questions.

I just have no inclination to believe your system, for essentially the same reason I have no inclination to believe in Last Thursdayism. It’s possible that the world came into existence last week, with all of our memories of before then also created last week. But there are compelling reasons not to believe that, and I would argue that the same applies to your own description.

You mean like this?

Apparently you are a tiny bit presumptuous to think you have enough insight into other people’s motives that you can assert other people WANT the world to be a deterministic PUPPET SHOW. That is what they WANT, according to you and your keen insight into the motives of others.

So let me be clear about something.

I don’t trust anybody’s motives, on pretty much anything beyond eating, sleeping, shitting, and fucking. If someone tells me, “I was hungry, so I ate” or “I was horny, so I fucked”, then okay. I’ll believe that. But anything deeper into the human psyche, and I’m immediately dubious. I don’t even trust my own motives, when my brain offers me a supposed reason for why I just did a thing. I go with the reason I thought up as the best working assumption, but I don’t trust it because I’ve read far too much psychological research that says we as a species basically have no fucking clue why we do the things that we do.

So no, I don’t particularly trust your self-described motives, at least as long as you claim to be human. And I don’t particularly think you should trust your own self-described motives either, and if I happen to throw out some sort of motive for the things I say, I don’t particularly think you should trust me on that either. (I know that I don’t trust me, so you’ll be in excellent company.)

And I say all this even though, honestly, my comment was only about a tenth as presumptuous as yours. Mine, at least, is backed up by research into introspection illusion.

Indeed, they aren’t; choice is a notion derived from free will: if we have free will, then we can realize either option A or option B. If not, that idea makes no sense.

Really, my fundamental gripe is the following: you take away what is the most crucial element of the notion of choice—the idea of options, the idea of either/or, of alternatives—and want to keep calling it ‘choice’. I propose something that preserves the most crucial element of the notion of free will—the idea of agency, of responsibility for one’s actions, of being, in a meaningful sense, an actor in the world, rather than just something acted upon by it—and you want to bar me from calling it ‘free will’. So, what I want is consistency, is all.

Which wasn’t my intention—hence, my agreeing with Riemann’s call for reform of the legal system. It’s just that metaphysics isn’t the right place to start; what we fundamentally need to figure out is what works, and what doesn’t.

No, actually, I don’t believe changing people’s views on determinism would be sufficient: belief in a deterministic universe could just as well be used to justify all sorts of cruelties—after all, it wasn’t you who’s responsible for hurting somebody, it’s just the boundary conditions of the universe; you couldn’t have helped it. It’s a question of the ethical superstructure, not one of the fundamental workings of the universe; which is why I propose starting there.

No; what you want in game theory isn’t punishment, it’s deterrence, which can be equally well meted out by a proportionally smaller reward. You need for people to prefer certain options, and disfavor others, at least if they’re the rational fictions game theory substitutes for people; punishment isn’t necessary for that.

That’s another thing entirely: I don’t disagree that we have some instinct towards punishment, which may even have been helpful in our evolutionary past; but that’s no reason not to try overcoming it.

Free will doesn’t give any reason for why guilt may merit punishment; that’s a logically entirely separate idea. I could just as well accept an axiom that ‘guilt merits reward’, and act accordingly.

Well then, go ahead and explain it—because I certainly haven’t heard any really convincing explanation, and my failure to come up with one was a large part of why I abandoned my old views on the nonexistence of free will.

Well, it’s not really an optional thing: if what I say is right, then it’s simply the way the world is; then it’s simply the case that sufficiently complex systems are free, and that, if that complex system is one that has intentions and goals, we have a system possessing free will.

And well, you’re free to give those reasons; but just saying ‘I’m not buying it’ isn’t really a very interesting contribution in a discussion. Plus, if you’re claiming that belief in my proposal is optional, and you choose not to believe it, you’re proving what I asserted earlier—namely, that there are people who believe in a deterministic world because they want to do so.

No; not at all, in fact. I was making an inference from Riemann’s and others’ behavior; that inference may be wrong, and Riemann may be arguing for determinism despite agonizingly wanting to be wrong, in which case, I’m happy to be corrected.

That’s not any different than what you’re doing here:

But what you’re doing with respect to my own argument is insinuating that I am essentially being dishonest—towards others and towards myself—in holding an opinion I have only come to because I want it to be true; that I’ve essentially deluded myself into following my unconscious wishes. Furthermore, the evidence I gave that that’s not the case—that, in fact, I held a different point of view until quite recently—gets summarily ignored by you. That’s a different league entirely.

Well, you yourself have just given as a reason for not believing in my proposal that essentially you don’t want to do so, so I kinda feel vindicated there. Also, I know that there are people who want the world to be devoid of any free will because I used to be one of them—and while I’ve since come to embrace the idea of free will, as mentioned above, I still think it has its drawbacks; but ultimately, I had to change my mind in light of the arguments brought forth against the position.

Is it possible that I am deluding myself, that I secretly always wanted to believe in free will, only pretending not to for… reasons, I guess? Absolutely. Do you have any handle on me or my beliefs in order to render such judgment? Absolutely not.