Do humans have free will?

Have you met a single human being who believes this world to be deterministic who has espoused that?

I haven’t.

But I’ve come across plenty of people who weren’t of that persuasion who nevertheless espoused your suggested conclusion based on a belief that they didn’t hold. Funny how that works.

Along a one-dimensional axis of utility, there is no distinction here. Denying agents a reward is equivalent to a punishment.

If you’d prefer to call that “deterrence”, I’m not going to quibble with you here. But if I’m teaching it in a classroom, I’m going to call it punishment. These sorts of ideas are frequently referred to as “punishing defectors” and “punishment strategies”. This is not a terminology I made up myself.

I agree with everything after the colon, which is why I don’t agree with the first clause before the colon.

We both put some basic moral instincts into this category. We agree on this: we as a species should try to overcome any potentially inherent instincts if there are deeper benefits from doing so.

And I happen to believe that “free will” belongs to exactly the same category. More on this immediately below.

It does among actual human beings. That’s why the two ideas are so incessantly brought up together.

This is basically my single biggest perplexity with your posts. You want to divide everything into neat logical categories and discuss them separately, without any regard (that I can personally see) for how real people really argue these things simultaneously and together as one piece.

And yet your single biggest piece of evidence that we have “free will” is psychological. Well, “guilt merits punishment” is similarly psychological, and I’m telling you that from the conversations I’ve had, these do not appear to be “logically distinct” categories inside of other people who aren’t you. It’s all of a piece. Every psychological impulse backs up every other one. This is why my personal “plan of attack” is what it is.

For the record, I don’t have an overwhelming belief that I’m right about this. It’s just my inclination. But up till this point, I haven’t seen good arguments against this inclination. Your (fully correct) point that these are, strictly speaking, logically different topics doesn’t move me, not when your chief piece of evidence in this thread is psychological, and when their own instincts to punish are similarly psychological.

This will take time. Maybe a few days.

I appreciate that you don’t find it interesting. That’s perfectly understandable.

I, personally, do find it interesting, which is another reason (maaaaaybe…) why I typically keep my goals limited in these sorts of threads.

This isn’t remotely true.

I believe some things that I would rather not believe.

I want to believe that I’m a good person.

I’m not convinced that I am. Better than average? Maaaaybe. But that’s not saying much when the average is low.

(“Determinism”, however, is not in this category. I could take it or leave it. I just haven’t seen any good arguments against it.)

“Not any different”? I can think of at least three differences off the top of my head.

First, I said that I “imagined” that might be the case. You didn’t. You made a straight assertion of another person’s motives without any qualification whatever. Second, I allowed myself the latitude of that mild imagining (not an assertion) because we’re both determinists and I see an affinity in our arguments, but I hope I wouldn’t say that I can “imagine” why a young-earth fundamentalist Christian believes what they believe because I’m not a member of that club. I don’t have the same affinity. Third and perhaps most important, I wasn’t using my imagining as a basis to attack the motives of another human being.

Those are some big differences.

Just because I don’t respond line-by-line-by-line does not mean that I am “ignoring” what you’re saying. I’m not. It’s just the case that I’m unconvinced.

More generally: there are a lot of internet apostates out there. It’s admirable when someone changes their mind for good reason, but just as often, I see people who had sandy foundations for their beliefs in the first place. The switch from one belief to another isn’t always the product of deep consideration; sometimes the sands merely shift and the person lands somewhere else, seemingly at random.

I am not “insinuating” that I believe you are particularly of this kind. You can logic. You can cite evidence. There’s an excellent chance that you’re not of this type.

I’m saying that I’m not convinced one way or the other. I’m not “rendering judgment” on you, as you so nicely put it. I’m saying the “trial” isn’t over yet, to keep riding your metaphor. I don’t know you. I can’t recall interacting directly with you before. I haven’t seen someone write two thousand words to explain a fact to you, and then you appear in another thread a few months later repeating your previous mistake.

I don’t know what kind of poster you are.

But in my experience, it’s the people who pat themselves on the back about their own pure motives that are the most suspicious. I’m not telling you that I have “rendered judgment” in advance that you are of such character, I’m saying that your assertion of your own motives is something that I find totally unconvincing as evidence, given the copious psychological research that says people are basically clueless about the reasons why they do things.

I don’t want you to think I’m “ignoring” your request, so I won’t respond again until I have a meaty reply.

The problem I always have with this is that I need to start in a place where (I believe) absolutely no one would disagree. So it feels patronizing to explain things that no one needs to have explained, like I’m talking down to other people. Every step of this logical chain feels “obvious” to me, so I can’t suss out where “obvious” becomes “non-obvious”. So I have to start in the patronizing place, and then take tiny steps until I’m in a more controversial place. That’s why it takes time. I don’t want to sound more like an asshole than I actually am.

Well, I can’t claim to ever have taken a statistically significant sampling of what determinists do or do not believe; the point I’m making, however, is that it’s irrelevant to moral/ethical issues whether you’re a determinist or not. Really, Hume taught us almost 300 years ago that there’s just no implication from is to ought—to think otherwise is to commit the naturalistic fallacy. That many people’s thinking on the matter is, in fact, fallacious really doesn’t have any impact on that; if everybody thought pigs could fly, they’d still plummet to the ground.

Granted. But the goal is not to punish people because they’re guilty; the goal is to get them to disfavor certain strategies. Tying this to the question of guilt and punishment in the legal system is a misleading equivalence at best.

Because I prefer to first get clear on the basics, rather than to start in some muddle of historical and evolutionary accident. “Real people” may squabble incessantly about the aerial capabilities of pigs; doesn’t mean I have to pay any attention to that.

You say that as if it somehow devalues evidence that it is psychological; but of course, all evidence is psychological first. It’s only under the assumption of a—not at all obvious—metaphysical thesis that there is a world out there that in some way corresponds to our sensory experiences that we get any other kind of evidence. So really, psychological evidence, if anything, ought to take first consideration.

And indeed, part of my reason for taking the idea of free will seriously is exactly my desire to hold fast to the above metaphysical thesis: because if mental content has no bearing on behavior, evolution cannot select for mental content that is appropriate to the world; and hence, we ought to expect that our mental content is, in fact, not at all in correspondence with the outside world. After all, evolution can’t distinguish between an early hominid believing ‘there’s a tiger, I should run away’ and one believing ‘there’s a tiger, I should go and hug it’ (or indeed, ‘there’s a platter of pancakes, I should eat them’) if both beliefs lead to the same running-away behavior.

Furthermore, there’s of course also plenty of evidence of the good old-fashioned, respectable, non-psychological variety: there indeed are systems such that their properties are logically independent of the state of the universe at a given time; there are indeed systems that are computationally irreducible; and there are indeed systems with goals and intentions (although I suppose one might quibble that the evidence here again is “only psychological”). Now, suppose there are systems with goals and intents that are computationally irreducible and that can realize logically independent outcomes; this is the case exactly if the human mind can function as a universal computer, which it can, given enough resources, such as scratch paper. Then there are systems such that it is their decisions, their intentions, that are directly responsible for bringing about certain outcomes; and that’s all the free will that is needed.

It’s not; it’s ethical, normative: it tells you what ought to be done in a certain situation.

I don’t think I’ve ever seen anybody return to a discussion after such an announcement; but maybe you’ll surprise me!

What I meant by that is that it’s not a substantive contribution to a discussion to say, without giving any reason whatsoever, that you’re not buying it. I mean, of course, you’re free to do that, and you’re free to believe that pigs can fly, but what am I to do with that contribution? Say, well that’s nice, have you noticed there’s weather outside?

What isn’t—that you essentially said you don’t believe in my proposal because you don’t want to? Here’s what you said:

The only way I can read this is as saying that my proposal, like Last Thursdayism, is a possibility, which, however, you choose not to believe. If you meant something else, I suggest you expressed yourself badly.

And I’m reasonably confident that Riemann indeed thinks that it’s a good thing the world is deterministic, as he believes; at least, he’s given every indication that this is the case in this thread, and that’s all I have to go on.

I, on the other hand, haven’t given any indication that I believe in free will solely because I want to, that I’m deluding myself, and that I’m ultimately dishonest in my beliefs. Yet that is what you felt the necessity to allege.

And neither did I. If you re-read that exchange, Riemann was the one to bring up the charge that my reason for believing in free will was just out of ‘populist desire’, and that I should equally well hold to a belief in Jesus and the afterlife because of that. I rejected that charge, and responded that there are also people who want to believe in a deterministic world, of which I think Riemann is one.

But again: do you have reasons for not being convinced, or do I just have to accept that without the opportunity of defending myself?

I’m sorry, but two posts after extolling your own virtues regarding your skepticism towards your own motives and reasons, that kind of rings hollow…

That’s exactly why I haven’t merely made assertions; rather, I took the time to dig up evidence regarding my past views, and my justifications for them, so you can pass a more informed judgment. Of course, I’m not saying that’s required reading or anything—god knows there’s lots I find embarrassing in these posts. And in all likelihood, there’s lots I’ll find embarrassing in this post at some point in the future, too! But while change isn’t a definite indicator of progress, lack thereof is a certain sign of stagnation.

Do take all the time you feel is necessary. And don’t worry about saying obvious things—it’s better to re-start at the foundations than to falsely assume a shared understanding. Daniel Dennett has made the point that much of philosophical discussion ultimately goes misunderstood because nobody wants to insult their opponent by explaining the basics, but those basics are often less clear than one might assume. I tend to err on the side of over-explanation myself—which is why my posts generally approach essay length…

As a person who is forced by logic and observation to believe that human minds are deterministic, regardless of whether the universe is, I feel like I’m a reasonable person to respond to this.

A choice (noun) is the option to take one of several actions.
Choosing (verb) is when an actor takes one of several actions.

Choices don’t even require actors to exist, much less for there to be free will. Take, for a classic example, a fork in the road. Will you choose the road less traveled, or the one that’s more traveled? Will the guy behind you? And the one behind him? All three of you face the same choice; and the fork, and the choice, still exist once all three of you have gone, waiting for the next agent to wander by.

Now, I would expect you at this point to get annoyed and claim that the choice doesn’t actually exist until the actor pulls up to the fork; that it’s necessary for the actor to recognize that they have options before you can really consider there to be options to choose from. And that’s okay; at that point you’re talking about a sort of instantiation of the choice - a specific instance of choosing. And I’m okay with that - as long as we don’t lose track of the fact that the choice still isn’t part of the actor; it’s a set of circumstances the actor is reacting to.

Now, the act of choosing. This is where the actor gets involved, obviously. They have to observe the options, recognize that they have options, and pick amongst them. However, they don’t have to be smart about it. Consider simple robots designed to run mazes, like mice. Take these five robots:

  1. One that always turns right, given an option.
  2. One that always turns left, given an option.
  3. One that chooses randomly between options.
  4. One that alternates between choosing left and right.
  5. One that chooses randomly but remembers the route it took, draws a map, and knows to backtrack to an untried route when it runs into a dead end or has fully explored a branch.

For starters, everyone in the business of working with such robots will refer to their decision-making processes as “choosing” - heck, the language basically requires it. And while you could maybe argue that robots 1 and 2 aren’t ‘really choosing’, 3-5 definitely are. 4 and 5 are even exercising memory of their past actions and adjusting their actions accordingly, with varying levels of ‘intelligence’.
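To make that concrete, here’s a rough Python sketch of the five decision rules. (The whole interface - the `options` list, the `memory` dict, the function names - is my own invention for illustration, not anybody’s actual robot code.)

```python
import random

# Each "robot" is reduced to a decision rule: given the open directions at a
# junction, plus whatever it remembers, which way does it go?

def robot1(options, memory):          # 1. always turns right, given the option
    return "right" if "right" in options else options[0]

def robot2(options, memory):          # 2. always turns left, given the option
    return "left" if "left" in options else options[0]

def robot3(options, memory):          # 3. chooses randomly between options
    return random.choice(options)

def robot4(options, memory):          # 4. alternates between left and right
    memory["last"] = "left" if memory.get("last") == "right" else "right"
    return memory["last"] if memory["last"] in options else options[0]

def robot5(options, memory):          # 5. remembers what it has tried and
    tried = memory.setdefault("tried", set())      # backtracks when a branch
    untried = [o for o in options if o not in tried]  # is exhausted (a full
    if not untried:                                    # version keys this by
        return "back"                                  # junction, i.e. a map)
    pick = random.choice(untried)
    tried.add(pick)
    return pick

for robot in (robot1, robot2, robot3, robot4, robot5):
    print(robot.__name__, "->", robot(["left", "right"], {}))
```

Robots 1 and 2 ignore their state entirely; 4 and 5 consult it, which is the ‘varying levels of intelligence’ I mean.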

Also, note that it’s entirely possible to hold simple machines “accountable” for their actions. The roomba that just spins in circles will be examined and possibly scrapped, and the printer that mangles all the paper that runs through it will suffer similar judgment. You could argue that this judgment is different from what we apply to criminals and kids that color on the walls, but I would ask: how is it different? And before you say it’s different because we have the option to physically modify and repair broken machinery, there’s a thread about sterilizing criminals in this forum as we speak.

Really, the thing to keep in mind is that even if we live in a deterministic universe and all agents are similarly making their decisions based on deterministic, nonrandom reasoning, that doesn’t change the fact that they are agents. They used a decision-making process when they decided to stab their neighbor, and the other agents standing around are using decision-making processes when they decide that Stabby McStabberson is bad to have around and they should make an example of him to discourage future agents from deterministically choosing to go all stabby. (Yes, even in a deterministic universe they can be discouraged, by altering their knowledge set to include the fact that stabbers are executed by beheading, which looks painful and unfun and prevents future watching of Netflix.)

But Stabby McStabberson was responsible - the mental processes that led him to the act may have been deterministic, but they were still located inside the head of Stabby, and the fact that people around him don’t like those mental processes is why they removed that head. “I couldn’t help myself” is not considered an excuse for committing a crime now - why would it be considered to be one any more in a universe that’s known to be deterministic?

Also, it’s worth noting that just because Stabby stabbed his neighbor at that time and place immediately following being informed that Neighbor McNeighborson was shtupping his wife, daughter, and cat, that’s not actually evidence that Stabby will always decide to stab people he runs across. The situations are different in each instance of potential stabbing, and thus the deterministic reasoning that might/might not lead to stabby reactions will also be different. Which is why in a deterministic universe you wouldn’t necessarily conclude that Stabby would stab again. If you noticed a pattern of behavior that did indicate that Stabby tends to stab in situations where stabbing is inappropriate, then you lock him up and throw away the key. You’ll note that this is exactly how things work in real life - because people in real life are deterministic, or at least they’re not dissimilar enough from deterministic people that we’d treat them any differently.

Honestly, if people behaved in flagrantly non-deterministic ways, there would be no reason to punish them, because the fact that they did one random unreasoning stabbing once is no indication that they’ll do it again. Only the presence of persistent memories and preferences and consistent determined cognitive processes gives one justification to predict future behavior based on past action.

I think this is already confused. Choices aren’t disembodied abstracta floating around in some Platonic realm; all that there ever is are, as you call it, ‘instances’ of choices. So there’s me arriving at the fork in the road, and the person after me, and that after them; each of these is a different instance of choosing, because, if nothing else, it’s always a different person making that choice, with a different history, a different mindset, different goals and desires.

And then, in a deterministic world, if you take all of that together, in each of these cases, the choice disappears: all of the data of me arriving at the fork directly determines that I’ll go left; the data of the person after me will directly determine that they go right; and so on. In each of these cases, the other option might as well not exist, because there is no chance of it being chosen (of course, that’s only partly true: the nonexistence of the other option will generally affect the data about the choice, and thus, possibly influence the outcome).

(I should perhaps point out that I’m somewhat sloppily using ‘deterministic world’ to refer to a world in which there is no free will, for brevity, so as not to cause confusion.)

Again, I don’t think I see any need to incorporate abstract choices into my ontology, so I don’t accept that there is some ‘choice’ apart from each concrete instance of choosing.

Well, everybody involved with startups refers to unexpectedly well-performing ones as ‘unicorns’, but that doesn’t mean that there are any horses with single horns out there.

Really? What’s the level of complexity needed in order for there to ‘definitely’ be choice? I mean, we agree that a stone falling in some gravitational field doesn’t choose, but what about if it starts an avalanche? And in the end, a robot’s behavior isn’t anything but a very complex avalanche (of electrons in circuitry), albeit with precisely set starting conditions.

In the sense that the measures we take in order to modify their behavior are of a purely corrective nature, and not punitive—i.e. we’re treating them in a similar way to how we treat, e.g., avalanches: employ measures designed to ensure that they do not cause harm, for instance. Nobody would hold an avalanche responsible for destroying their house, but it makes good sense to try and mitigate their destructive potential.

Yes; that’s fundamentally the same as what we’re doing with avalanches, etc.: try and prevent them, or at least, ameliorate the harm they cause. In some ways, this is perhaps a better approach to justice than the punitive measures we use now, but again, that is quite independent from whether the world, at bottom, allows for free will. Is, ought, and all that.

Again, that’s not the point I’m making. What I’m saying is merely that whether one considers the world to be deterministic carries no implications towards how we ought to treat criminals (although it may carry implications on what sort of treatment achieves the goals we have, be they rehabilitation, punishment, reward, or apathy, those goals themselves are not affected).

Again, this carries a commitment to a certain moral stance: that punishment is only appropriate if people are likely to repeat their behavior. But one could equally well subscribe to a stance that says punishment is warranted simply due to guilt, and thus, punish even random behavior.

Even here, one could quibble, but we’ve probably bothered Mr Hume enough for now.

I told you you’d get annoyed and insist that instantiated choices are the ones that matter! Called it! :smiley:

It doesn’t change my argument one whit. A choice is something an agent experiences; it is a situation. The fact that the situation is unique with any instance of decision-making doesn’t change that fact one whit.

(For the record: the instantiated, ‘act of choosing’ meaning is definition 1 of the word, and the abstract, ‘the fork is the choice’ meaning is definition 3. Both valid definitions! :smiley:)

The choice doesn’t disappear. Check the definition again; the act of choosing is the choice. The fact that you chose something doesn’t erase the past and eliminate the fact that your brain looked at the options, weighed them against its own preferences, tallied up the values, took a sidetrip to think about dinner, and then finished collating the data and came to a decision. The event occurred, period. The choice was set before you, the choice was made, and you chose something. Under any view of the situation, whether you believe that you used your brain to determine a response or if you believe you used fairy magic to non-determine a response, in all cases the decision is equally made and has exactly the same number of outcomes.

I put forward that you’re talking about something other than actual choice here - something that you feel is lost when you make a decision by determining the result with your mind. Lacking a better word, I’ll describe it: you feel that you’re losing the ability to actualize potential alternate histories in the case of what-ifs. Not that you ever had the ability to actualize potential alternate histories, regardless of your epistemological model; actual reality doesn’t allow it. And of course even deterministic realities are cool with speculation about potential alternate histories; you just can’t actualize them, because at the time of making the choice you made the choice you made.

That’s okay; I reject your definition of “free will” anyway, because I believe it’s incoherent.

That’s cool; as long as you accept that each instance of choosing is unique and occurs only once, then choices are fully compatible with determinism.

Sure - but the robot makers are using the actual common definition of the word choice. Check the definition - there’s nothing at all in it that says it’s restricted to humans and that robots can’t make them. The processing within a robot’s brain - even if it’s just ‘see corner; turn left’ - explicitly matches the definition of the term.

By the definition, no complexity is required for there to be a choice; the definition says nothing about the complexity or the mechanism for making the choice at all. However, for an observer to be ‘definitely’ certain that a choice was made, it has to be clear to the observer that the agent in question was aware that a choice was happening. In the case of an ‘always turn right’ robot, it could have just been trying to turn all the time, grinding against the wall until the wall fell away. Robots 3-5, on the other hand, are demonstrably aware that the intersection is there, and are reacting to it specifically. More so with each successive robot.

Are you kidding? Many, many appliances are scrapped simply because their owner has gotten pissed off at them one too many times and is no longer willing to put up with their crap. This isn’t rational, of course - but neither is it rational to punish humans for punitive reasons. Humans aren’t that rational a bunch, all told.

We are most certainly not punishing avalanches with the intent of scaring other avalanches into not happening. In case you just suffered from a momentary reading comprehension failure, I was specifically referring to the fact that people in a deterministic universe are agents who can respond to and react to information. Because they’re people, and all, and all the reasons why we carry out punishments work exactly the same in a deterministic universe as they do in ours. Which isn’t surprising, since as far as people’s minds are concerned our universe is deterministic - what we know, feel, believe, and think determines how we act.

I will wholly agree that whether or not the universe in general is deterministic has no effect on anything.

If human minds, on the other hand, are not deterministic and don’t make decisions based on reasoned criteria, then everybody will be acting completely randomly and irrationally, and thus most reasons for punishment (other than vengeance) do not apply.

Oh, other reasons to punish may apply; I was just talking about the one that time.

Great! Then why introduce this strange abstract Platonic choice in the first place?

That takes me to a google search whose first entry is something from the German pokemon wiki for me. I presume you weren’t trying to make a point about how to fight anime monsters…

There’s no act of choosing, though. Because there are no options to choose between.

If I throw a ball bearing into a Galton box, and it ends up in one of the slots, would you say it chose that slot? If not, what, besides perhaps complexity, is the salient difference to what you call ‘choice’ in a deterministic world?

I’ve been quite open about what’s lacking regarding choice in a deterministic world: options. Given complete knowledge about a given situation, the outcome of any so-called ‘choice’ is fixed. This isn’t the case on my model (and in reality: we know, for instance, that complete knowledge of all of the facts regarding certain kinds of many-body systems doesn’t determine what a measurement of the systems’ spectral gap will yield; nevertheless, every such system either has such a gap, or not—it’s not an issue of quantum randomness, but merely one of logical independence).

And like everybody else so far, you simply baldly assert this without even an attempt at argument.

Also not a position I hold, so I wonder why you bring that up.

So can an avalanche: the form of the terrain, for instance, constitutes information (clue’s in the name), and that information shapes the path of the avalanche. In the same way does information in the environment, though transmitted via electrochemical signals and not merely through contact (which honestly isn’t much of a difference at the microphysical level), shape the ‘avalanche’ that is a human being’s behavior.

You could imagine a robot, whose informational pathways are realized by balls rolling down tubes, flipping levers, opening up certain pathways and closing others. Everything a human being can do, such a robot can do; but I trust you can see that it’s not really any different from an avalanche: essentially, just a complex set of interacting parts pushing on one another. There’s no fundamental difference between this machine and the Galton box; so why does one choose, and not the other?

To try to hammer in the point that a choice is something that is, at least at the abstract level, independent of the agent. It’s something that the agent reacts to; a choice is a situation. This holds true even if the choice is instigated within the agent’s own thoughts; it’s still something that the agent deals with and reacts to.

Weird; to me it’s the google search for “choice definition”. I went to the search directly because google gives its summary of the definition, gleaned from I know not where, whose definitions were a bit more wordy and clear than the Merriam-Webster definitions.

Nonsense. Balderdash. Poppycock. Utter antifactual garbage. Look up the freaking definition of the word, for a start, and then tell me that when you choose between eating a cherry pie and an apple pie, you didn’t select or make a decision when faced with two or more possibilities.

You could tenuously argue that, having a deterministic mind, having a brain that is aware of and reacts to the fact that you’re violently allergic to cherries, you “didn’t have a choice” due to that. The verbiage there is tenuous because in this specific discussion the colloquial phrasing is a little deceptive (the phrase actually means you had a choice between options but can’t imagine a circumstance which would lead you to choose other than how you did), but to claim that there was no act of choosing is utter and complete bullshit. So much so that I utterly refuse to even pretend that there’s any rational argument in existence that would destroy the meaning of words that way.

I mean, I’m not fond of the English language, but I don’t actively hate it.

Now this, this is an interesting question! You’re asking me to objectively define and quantify sentience! In a forum post! You’re not asking for much at all!

To make things easy on myself, and interesting for you, I will accept that the box ‘chose’ the outcome, based on the definition of the word ‘choice’. I concede that it’s a bit counterintuitive, but the simple fact is that there’s no hard line between the basic physics of things that seem random, and things which very obviously are explicitly recognizing stimuli and deliberately responding to them. The simple fact is that Galton boxes, computers, robots, mosquitos, dogs, people, and dolphins are all at their core made of atoms and molecules which operate based on the laws of physics.

Honestly, just about the only thing necessary to define an ‘agent’ is to have a clear bound on what the agent is, and what it isn’t - we have to know which bits of reality are considered ‘outside’ the agent so we can determine what counts as stimuli and what counts as state. Galton boxes are pretty clearly bounded, so it’s reasonable to examine how they behave in the role of an agent. And lo and behold, their response to ‘stimuli’ (being fed little balls through their hungry little mouths) is not simply random! After bouncing things around inside itself for a while, the box outputs the ball through one of its many, er, output holes, and favors the middle holes consistent with a binomial distribution. Or to put it in terms which will annoy you more, it ‘likes’ the middle holes better. (This kind of phrasing is suuuper common among computer programmers, who tend to work with a lot more decision-making agents than the average person.)
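If you doubt the box’s ‘preferences’, a toy simulation will show them to you. (Everything here is arbitrary on my part - the 10 rows of pins, the 10,000 balls, the scaling of the bar chart.)

```python
import random
from collections import Counter

# Toy Galton box: each ball bounces off 10 rows of pins, going left or right
# with equal probability at each pin. The slot it lands in is just the count
# of rightward bounces, which is why the slots fill up binomially.
def drop_ball(rows=10):
    return sum(random.random() < 0.5 for _ in range(rows))

slots = Counter(drop_ball() for _ in range(10_000))
for slot in sorted(slots):
    print(f"slot {slot:2d}: {'#' * (slots[slot] // 50)}")
```

Run it and the middle slots pile up, exactly the binomial bulge described above.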

I’ve already expressed my intolerance of taking dumps on the English language, so I’ll just reiterate that the ‘so-called’ choices are so called because they actually are choices.

I’ve touched on randomity before, and will again. Randomity gives you nothing. Randomity does not introduce a magical agent that produces an extra point of magical sentience that is somehow separate from and magically independent of your personality, knowledge, beliefs, tastes, and preferences. What randomity gives you is noise in the system. And while there may in fact be some noise in your system, I can say with absolute rock-solid confidence that it is not a significant part of your cognitive process. I can say this with certainty because you are capable of typing grammatically correct sentences, rather than limply spasming on the floor next to your keyboard as the random impulses course through your muscles and mind.

A possibly interesting and relevant note: randomity exists in electronics too. I’m not talking about deliberate random number generation; I’m talking about minor fluctuations in the electric current. It’s something that every device and appliance has to deal with. And the devices that are trying to use the electricity as more than just power? They take steps to account for and mitigate the impact of this randomity. You’ve probably heard that computers think in terms of 0 and 1; this isn’t technically true. At the hardware level they think in terms of <.5 and >.5, while trying their best to keep things as close to 0 and 1 as possible. If the equipment wasn’t designed to deal with minor random fluctuations like this, it would absolutely not work right.
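A crude model of that thresholding, if it helps. (The 0.15 noise level and the eight example bits are just things I picked; real hardware margins differ.)

```python
import random

# Ideal bits are 0.0 or 1.0; the wire adds a little Gaussian noise; the
# hardware recovers each bit by asking only "<.5 or >.5?". Small fluctuations
# are shrugged off entirely, which is the mitigation described above.
ideal = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = [bit + random.gauss(0, 0.15) for bit in ideal]
recovered = [1 if volts > 0.5 else 0 for volts in noisy]
print(recovered == ideal)   # True on almost every run at this noise level
```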

I am dead certain that the brain takes similar steps to deal with random fluctuations (or pseudorandom fluctuations, deriving in unpredictable ways from deterministic physical reactions) and filter them from the system. And again, my certainty comes from the fact we’re not all spasming and flopping around on the floor like dying fish. You’re going to have a hard time talking me out of it.

Here’s the thing, though. I have never in my life seen a definition of “libertarian free will” that didn’t utterly and obviously conflict with observable reality and/or destroy itself by misusing the very words that make it up. That’s why I used the term ‘incoherent’ - they fail at the definition phase, to the degree that they literally don’t make sense.

Feel free to provide a definition of free will that holds up better. I would be very interested to see one - specifically, a coherent definition of free will that resembles human decision making and is incompatible with determinism via some method other than just declaring itself to be so.

Really? You have no problem with the idea that robots can have free will? Color me surprised!

Regarding the avalanche thing, let’s clear that up a bit. The avalanche itself is an event - the result of the mountainside’s ‘choice to collapse’, as it were. So the avalanche wouldn’t be the agent; the mountainside would be. And frankly a mountainside is a lousy agent, for the simple reason that it’s tough to define its boundaries - how deep do we go; do we count the air above, and for how high; do we count the plants and loose rocks sitting on it, and so on. So let’s leave aside the mountainside for a moment.

The robot is better, simply because (presumably) it is self-contained with a clear definition of what is the robot and what isn’t the robot. It’s also clearly a steampunk robot, with tubes and levers, which is awesome. Especially since you said that it’s fully capable of acting like a human, with the ability to walk and talk and make errors on its taxes and so on. So yes, let’s use it as the example.

You, who I am presuming are a human, are not really any different from the robot. There’s no fundamental difference between you (and me) and this robot, save that maybe it’s more awesome than we are. Oh, sure, our tubes and levers and valves are a lot squishier than its are, and we resort to a lot more electrical operations than it apparently does, but at a fundamental level all three of us are physical objects with the capability of reacting to physical stimuli in ways that demonstrate reasoning and intelligent response. Via completely physical methods.

(This is your cue to argue for a soul or something - though fair warning, I will promptly argue that any souls we might* have are logically required to operate either deterministically or randomly - and again, dead fish.)
*we don’t.

Google searches don’t look the same on different systems; they’re influenced by where you’re accessing the internet, and probably a boat-load of tracking data google’s busy ‘not being evil’ with. So to me, the definition google grabbed happened to be from some pokemon wiki.

Right. So, for definiteness, let’s use dictionary.com. The entries there all refer to the verb ‘choose’. And the topmost definition of ‘choose’ is ‘to select from a number of possibilities; pick by preference’.

And them’s the breaks: there is no such thing in a deterministic world. Every so-called ‘choice’ is set fast from the beginning of the universe; a Laplacian demon could tell you what you’ll do at every fork in the road; the universe is like a movie, where there’s no question at all that the plucky young assistant is going to end up with the out-of-work soulful poet as opposed to the money-bag boss. There’s no choice being made in that movie: there is only one option that’s ever going to come to pass. There’s no possibility that the protagonist suddenly realizes that the boss has a heart of gold, and only acts callous and uncaring to protect their feeble heart, upon rewatching. That the poet and the assistant end up together is a certainty from the very first frame; and likewise, in a deterministic universe, that I’ll choose cereal for breakfast tomorrow has been set in stone since the dawn of time.

Alright, then, are you similarly going to accept that the stone I threw chose to fall to the Earth? After all, it could have flown off into space—exactly as much as the ball in the Galton board could have ‘chosen’ a different slot to end up in.

Not according to the definition, which requires for there to be ‘a number of possibilities’. OK, granted, ‘one’ technically is a number of possibilities—but it’s pretty clear that that’s not the intended meaning.

Agreed. Randomness doesn’t open up an avenue towards freedom, or choice, in any way, shape, or form. It merely means that the root cause of some outcome wasn’t determined at the Big Bang, but perhaps by a radium atom decaying ten thousand years ago, or just right now. Doesn’t change anything. I don’t exactly know why you feel the need to bring it up, but I’m glad there’s something we agree on, at least.

Actually, noise might be extremely important to cognitive processes—it serves to bring up new options. Picture something like a genetic algorithm: it’s only due to chance variation that new forms emerge. Noise injection is also used in the training of neural networks, for example.
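A toy genetic algorithm shows what I mean (all the parameters here - the target function, the population size, the mutation width - are arbitrary): strip out the mutation noise, and the population can never propose an option it doesn’t already contain.

```python
import random

# Maximize f(x) = -(x - 3)^2 by selection plus noisy variation. The Gaussian
# 'mutation' term is pure noise, yet it's the only source of genuinely new
# candidates; selection alone just reshuffles the old ones.
def fitness(x):
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(100):
    parents = sorted(population, key=fitness, reverse=True)[:5]  # selection
    population = [p + random.gauss(0, 0.3)                       # mutation
                  for p in random.choices(parents, k=20)]
print(round(max(population, key=fitness), 2))   # ends up near 3.0
```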

See this post.

Why the hell would I? I’m a robot myself! OK, made from bone and blood and squiggly bits, and not from gears and levers, but a robot all the same.

An avalanche is a patterned arrangement of matter; likewise, human beings are patterned arrangements of matter. In a large enough avalanche, your entire life story will play out, with bits of rock and dust standing in for (that is, standing in the same relations to each other as) the cells and organelles, the molecules and atoms and protons and neutrons (if it comes to that) that make up your body.

You haven’t really read this thread, have you?

(Although I feel that I should point out that in my experience, it’s an unwritten law of the internet that whenever somebody tries to settle a metaphysical debate by appealing to a dictionary definition, the likelihood of any further substantive contribution to said debate decreases exponentially with each additional character typed.)

Well, dang, that’s suuuuper useful.

Me likey! I’ll take it.

BZZZT! Super-wrong. Overtly, explicitly so. Explanation forthcoming.

There are two answers to this: “So what?” and “You have to examine the mechanism, not the outcome.”

If you only survey the outcome then you’re not going to see any evidence of either free will or choice under any model, because the past is fixed. (I’m aware that the OP fuzzed about with the idea of an unfixed past, but I’m just going to stick with the axiom that the past is fixed.) It’s sort of like if you’re watching a movie about a poet and their assistant getting together, to pick a random example - watching the movie doesn’t give you any idea why that happened. But if you suddenly noticed in the credits it said based on a true story (and you knew little of modern moviemaking), you’d suddenly start assuming that originally, in real life, the poet and assistant did have free will, and chose to be together. This is because you know that there were actual agents involved, making the decision for themselves, rather than just some author making the whole thing up.

The mechanism of the cognition and the identity of the agents matters.

The “so what?” part comes simply because there’s nothing about a choice being predictable that makes it stop being a choice. The mere presence of multiple options being selected from is sufficient for a choice to occur. For an example, I happen to know my dad hates cherries, and is diabetic, so if my family members were presented with the choice of cherry pie or unsweetened peaches, most of my family might pick one or the other; but since I know my dad’s preferences (which your own definition states are the source of the decision), I can predict his outcome with certainty. And, by your own definition, he still made a choice.

What’s the agent? The stone? The stone is moving like that because it was acted on by internal forces. By your given definition it’s not making choices or exercising any sort of will.

Unless, of course, you’re going to define gravity as “the preference of matter to be near other matter”. This would be…odd. To say the least. But it’s a thought experiment so we can do anything! So if we do this, then we’ve suddenly granted the matter in the stone the ability to sense other matter at a distance, and then the ability to move itself via will alone, and based on its preference for company it will choose to move in the direction of other masses via its own power. Sure, it (and everything else) does this with extreme consistency, but that just makes them like a diabetic who hates cherries. Conceptually it still works; if we define gravity as deliberate action of the matter itself (rather than the work of pixies like it actually is) then the matter within the rock is indeed choosing its path through the sky.

But that’s weird and dumb so I’m just going to say that gravity is an external force and the rock is choosing nothing.

(I’ll note that I never considered the ball in the Galton box to be an agent. The box is the agent; the ball is food.)

Poppycock. There is only one outcome - by your model too - but “possibility” and “outcome” are not the same sequence of letters and do not mean the same thing.

Random noise, of the type that you could reasonably expect to be generated within the brain, would not produce new options. It would come in the form of tiny electrochemical variations, not another whole line on the mental chalkboard saying “suppose we forgot the pie and fruit and got a burger”.

There is the possibility, maybe, that random electrochemical noise could be captured by the brain to deliberately perturb the weighting of existing preferences slightly. If this is occurring, which I am not saying it is, I do not see it as making a material difference. Not only would your preferences between choices need to be extremely closely matched for this to make a difference, but if this is happening then it would not count as free will, because the random particle action would be a source of information from outside of your rational mental processes, which would, being external, be a blow against the ‘freedom’ of it.

Of course it’s also clear that if there is this incorporation of random radiation to be used as the ‘coin flip’ in case of ties, it’s still not a significant factor in human cognition. We rarely have no preference when making decisions.

By points:

Logical Independence: As described, I believe you’re saying that you require the universe, including the inside of the head of the agent, to be super cool with all outcomes. This of course flies in the face of the term “choice” - the definition requires the choice to be made based on preferences. Preferences are part of the agent itself and by definition determine the outcome. Thus, Logical Independence is logically incompatible with choice by definition.

If you exclude the agent’s head from the rule of logical independence then you’re just saying that the possibilities being selected from have to be possible, but that’s not interesting.

Computational Irreducibility: I’m not sure that this means what you think it does - it eludes me entirely how you think this can add anything. In all examples we’re free to consider the agent a black box - we don’t need to know what their preferences are for them to have free will. All that has to happen is for them to have preferences, regardless of how they’re stored (such as a pyramid of pins that favors dropping a ball out towards the middle of the box), and a choice for them to apply the preference to.

Suffice to say, I don’t believe that whether a person has free will hinges on whether we’ve successfully mapped and modeled his brain waves.

Intention: Isn’t this begging the question? You’re saying that things with free will must have will. Well, yes, that’s a given - and if you can isolate the decision-making process from external forces then any will will meet the definition of free will, because it will be will, and it will be free.

I’m not thrilled with how you leap from will to forecasting ability, though. Things can definitely want things without having that. Consider the sliding scale of animals, from dolphin to human to dog to mosquito to bacteria to virus. Forecasting ability drops drastically as you go down the scale. Even so all these things act and react to their environment in ways that simulate intent of some kind - if just a functional intent to reproduce.

Summary: I don’t like this definition. The first part of it is self-contradictory by definition, the second is irrelevant, and the third is axiomatic with a dash of arbitrary limitation beyond the standard definition.

Okay then - in that case I don’t understand what you’re trying to argue. You equated the Galton Box with the robot (and the avalanche); if you are also equating the robot to yourself, then I guess we agree that Galton Boxes have free will.

Sorry, I misunderstood. It seems pretty dodgy to claim that the avalanche has “preferences” stored within its tumble of rock, and I’m not super thrilled with such a poorly defined agent. (At which point do things start being part of the avalanche? At which point do things stop being part of the avalanche? If it splits, is it having babies? If it merges, is it having unbabies?) Also you run into the little problem that the entire impetus for the avalanche is external and it’s not acting under its own power; we’re bumping against that “matter is sentient and gravity is a choice” thing again.

While it’s possible to make at least a game effort to claim that any enclosed box or vessel that discriminatingly reacts to input is a decision-making agent, I think I may have to insist on the box.

At a mechanical level the only possible sources for decision-making are deterministic sources and random sources, for the simple reason that everything is either determined or it’s not. We agree that randomity isn’t a source of free will. You don’t like determinism and mental calculation as a source of free will. As best I can tell, you’re out of options! Excuse me for leaping to the assumption that you were seeking a third option, because otherwise I can’t fathom where you imagine free will springs from. “Incomprehensibly complex processes” is a lousy answer, because comprehensibility is something that can be attained by examination, as has happened repeatedly throughout history. If free will was dependent on that, then there is no such thing.

In my experience, whenever people try to eschew definitions while debating, they’re neither debating nor even communicating.

If your position is such that it can only be argued with words that have no meaning, your position is incoherent and wrong.

The point is that under determinism, the outcome is fixed in advance. Before any agent is even aware of the options, heck, before any agent even exists, there is no question about what you’ll have for breakfast tomorrow. Hence, there are no options to choose between: the world is like one of those old railroaded games—it might look like there are two doors, but there’s only one you can go through.

I agree. But on determinism, there are no such multiple options. Again, at the Big Bang (at the latest), whatever you’ll have for breakfast tomorrow was already determined.

(Presuming you meant ‘external forces’ here) Yes, that’s the point! But of course, that’s as true for you ‘choosing’ whatever you’ll have for breakfast tomorrow: you’re acted upon by the environment, which causes certain changes of state within you, that themselves are determined by the environment in the past. Every action you take has as its ultimate cause the boundary conditions of the universe at the Big Bang. It’s all down to external forces acting on you.

Take a stone tumbling down a hillside. Each time it makes contact with the ground, it’ll ricochet off in some new direction. Each of these times, it gets information about the ground, processes it—by means of various internal state-changes due to stresses, density waves, oscillations etc.—and then takes off into a new direction. The way it tumbles down the hillside is exactly determined by its own form (which itself is determined by external forces in the past) and that of the hillside. By the definition of choice given, it exercises none: there are no options; at each point, the direction it bounces off to is uniquely determined.

You’re like that stone (in a deterministic universe). You obtain information from the environment, via information-bearing vehicles in some way impacting your surface (say, photons hitting your retina, sound waves your ear drums, or the leg of a table your big toe). The information you get there is processed in some way, as dictated by your ‘form’ (the precise connectome of your brain, for instance), which in turn is dictated by external forces in the past. As with the stone, various internal processes occur—neurotransmitters leap synaptic gaps, neuron firing frequencies change, muscles contract, and so on—ultimately leading to a reaction that is entirely determined by the sum of (past and future) external influences acting on you. As with the stone: there is no choice here.

‘Sensing other matter’ is always just being subject to a force exerted upon you by that other matter, be it gravity, or short-range electromagnetic forces when you ‘touch’ something. And just as with the stone, all those forces ever do is change the state of what they influence; and the sum of those state-changes accounts in toto for every action you’ll ever take, which was thus fixed since way before you, as an agent, ever existed.

Then you haven’t understood my model. In brief, say the state of the universe at some time t is U(t), and the laws of physics are L. Then, the outcome of a choice between A and B, on determinism, is fixed by those two pieces of data: {U(t),L} –> A, for example. Thus, the state of the universe (at, say, the Big Bang, but it doesn’t actually matter) and the laws of physics dictate that you’ll have corn flakes for breakfast tomorrow.

On my model, there exists the possibility that {U(t),L} does not imply which of A or B occurs—the choice is logically independent. Hence, {U(t),L} –> A v B. It’s only upon the addition of the—irreducible—behavior X of some agent that, say, option A gets realized: {U(t),L,X} –> A. In this sense, X is necessary for bringing about A.
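To put the contrast in one compact display (my notation, restating the arrows above; read the turnstile as ‘logically determines’):

```latex
% Ordinary determinism: the state of the universe plus the laws fix the outcome.
\{U(t), L\} \vdash A
% My model: U(t) and L leave the alternative open ...
\{U(t), L\} \nvdash A, \qquad \{U(t), L\} \nvdash B, \qquad \{U(t), L\} \vdash A \lor B
% ... and only adding the irreducible agent-process X settles it.
\{U(t), L, X\} \vdash A
```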

So let’s fix t at some point before you wake up tomorrow. On determinism, it’s then already decided what you’ll have for breakfast; the rail is laid, you just have to follow it. On my model, you’ll have to go through your morning routine, brush your teeth, shower, and so on, until you stand before the choice of whether you’ll have corn flakes or ham and eggs; and only in the process of making that choice, in the processing of information, does either of those get realized. Before that point, the state of the universe and the laws of physics are logically insufficient to decide which it’ll be; and it needs you going through the behavior X (or something computationally equivalent) to realize one of the options. Without X, the question ‘A or B?’ will not be settled on my model; on (ordinary) determinism, however, there never is even such a question.

It does, if coupled with a selection process, such as in a genetic algorithm, or indeed biological evolution.

Of course, the agent is part of the universe. But you can even go back to a time before the agent existed, and, on the usual conception of determinism, all will be fixed.

No, determinism is incompatible with choice: again, the choice of what you’ll have for breakfast tomorrow was fixed since before you were ever born, and thus, unless you espouse some sort of reincarnation model, since before you could have had any preferences.

On my model, there’s a point before the choice-making process X got started such that the universe did not pre-determine the outcome, and it’s during this process that preferences develop, which then lead to a choice between alternatives; and moreover, thanks to irreducibility, that process is necessary to realize one of the alternatives.

I’ve tried to make this clear a number of times now. It adds that there’s no way to ‘shortcut’ the process X by which a decision is made. A stone thrown in a gravity field can be computationally short-cutted: once you know the initial conditions of the throw, you can directly calculate where it hits. A computationally irreducible system can at best be simulated step-by-step: that is, you’ll always end up essentially making a ‘copy’ of the system that makes the choice, and have that make the choice.
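The difference is easy to see in a toy example. (Illustrative only: I’m using Python, taking Wolfram’s rule 30 cellular automaton as the stock example of a system conjectured to be computationally irreducible, and holding its width fixed for brevity.)

```python
from math import sin, radians

# Shortcuttable: a thrown stone's range follows from a closed formula;
# no step-by-step simulation is needed.
def projectile_range(v, angle_deg, g=9.81):
    return v ** 2 * sin(2 * radians(angle_deg)) / g

# Not (known to be) shortcuttable: to learn rule 30's state after n steps,
# you run all n steps. Each new cell is left XOR (center OR right).
def rule30(cells, steps):
    for _ in range(steps):
        padded = [0] + cells + [0]
        cells = [padded[i - 1] ^ (padded[i] | padded[i + 1])
                 for i in range(1, len(padded) - 1)]
    return cells

print(projectile_range(20, 45))      # one evaluation, straight to the answer
print(rule30([0, 0, 1, 0, 0], 3))    # three full sweeps, no jumping ahead
```

The first function jumps directly from the initial conditions to the answer; for the second, the only known route to step n is to churn through all n steps, which is just what I mean by not being able to get rid of the process.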

So in my model, you can’t get rid of the agent; whereas on ordinary determinism, you need never consider any agent, and can just as well take the state of the universe from a time eons before the agent was born.

What intention does is basically to separate the possible behaviors X into two classes: those directed towards a goal, and those that aren’t. So, the combination of measuring apparatus + many-body system doesn’t have a goal, and hence, doesn’t have free will (although it does have freedom), while you, having, for instance, the goal of eating something savory for breakfast and choosing ham and eggs, do.

No. Will, wanting to do something, entails the intent to shape the world in a certain form, to bring about a certain state of affairs; thus, the agent must be able to model the consequences of their actions—if I do x, y happens. I want y to happen, hence, I should do x.

No. I’m saying that, on determinism, neither Galton box, nor robot, nor I have free will.

These problems exist for any agent: we continually exchange matter, and more importantly, information, with the environment. I breathe in and breathe out, receive signals, emanate signals. And again, the agent can be completely crossed out, in a deterministic world.

In the previous post, where I said ‘past and future influences’, that should have been ‘past and present influences’; I don’t want to argue for retrocausality.

The long posts in this thread make me not want to hang with it, and probably do the same for others too. I’m not telling you how to post, but I would be more involved if we kept posts to a specific point or two.

I consider myself a determinist when it comes to the Free Will question, but I accept that because of quantum randomness you couldn’t make predictions like that. What comes out of our brains at any moment is pretty much deterministic, maybe a very small amount of quantum weirdness thrown in, but if you’re looking at the overall course of the universe, those little random events are like the Butterfly Effect. If an atom had decayed at a slightly different time, it could have given Hitler’s mom cancer before she conceived him.

Yeah; I generally have trouble letting what I perceive to be a misunderstanding or erroneous argumentation go, which I’m afraid can make my responses in these kinds of threads seem a tad on the obsessively detailed side. I’ll try to keep it in mind, though.

Yes, but I think most of us actually agree that randomness does not add a nexus for freedom. If you randomly choose between steak sandwich or chicken soup, you haven’t made a choice that’s any more free than being destined to choose steak sandwich would have been; so randomness simply doesn’t add anything novel. At best, it moves the (Cauchy-) surface of data relevant for determining a given outcome further into the future; but the conceptual points don’t change.

Right. I’m one who says that we don’t have Free Will, and I don’t think it’s a good idea to redefine Free Will to be something else. Our brains run on physics, and that includes mostly cause-and-effect and some QM, which I lump together under “determinism” for shorthand.

Maybe it’s a non-analytic function!

Did I do that right? It’s been 34 years since I got out of college.
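For what it’s worth, the textbook example of a smooth function that fails to be analytic (assuming that’s the sort of thing meant here) is:

```latex
f(x) =
\begin{cases}
  e^{-1/x^{2}}, & x > 0 \\
  0,            & x \le 0
\end{cases}
\qquad\text{with}\qquad
f^{(n)}(0) = 0 \ \text{for all } n.
```

Every derivative vanishes at the origin, so the Taylor series there is identically zero even though the function isn’t: the local data at a point fails to pin down what the function does further on.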

Well, I’ve made my case that even in a deterministic world (or one with some quantum randomness thrown in), we do have meaningful free will, which I don’t consider a redefinition of the concept: it preserves what I believe is most significant about the idea of free will, namely the notion of agency, of being responsible for one’s choices in the sense that some option was realized because you chose it. On the garden-variety determinism mostly espoused in this thread, no such notion exists: everything is fixed from the beginning of the universe, and whatever happens does so because of the universe’s boundary conditions (with perhaps a sprinkling of randomness).

Sorry, that was a bit of an irrelevant aside on my part—I merely meant a sort of (generalized) notion of a moment in time such that knowing the state of the universe at that moment, its future evolution (at least up to some relevant instance of ‘choice’) is completely determined.

I too am highly vulnerable to “respond-to-everything-ism” - and I’m also extremely verbose! A one-two punch. No wonder nobody wants to hang around me. I’ll try to contain myself a bit here though - helped by the fact I won’t actually quote anything!

Except this:

I’m hanging onto this because it singlehandedly cements the fact that choice is an inherently deterministic process. It is not random; choices are made rationally based on the preferences (that is, the internal state) of the agent. And similarly, determinism can’t possibly eliminate choice, because determinism is integral to choice.

And honestly, I’m not sure why that isn’t QED.
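To spell that out as a toy sketch (names and numbers entirely made up): given the same options and the same internal state, the choice comes out the same every time.

```python
# Deterministic choice: the option the agent ranks highest wins, driven
# entirely by the agent's internal state (its preferences), not by chance.
def choose(options, preferences):
    return max(options, key=lambda option: preferences.get(option, 0))

prefs = {"ham and eggs": 0.9, "corn flakes": 0.4}
print(choose(["ham and eggs", "corn flakes"], prefs))  # always 'ham and eggs'
```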

Half Man Half Wit seems devoted to the idea that, if at some point in the distant past the universe hadn’t yet nailed down enough information to predict all future states from, and only settled matters via a long, slow, irreducible process, this somehow adds something. I personally don’t see how it does. In the short term, clearly human minds operate largely or completely deterministically, because physics. So even by his own argument, if we look at, say, the last four hours, none of us would have free will in that time, because no random events would have significantly impacted our preferences and thus our decisions would be ‘fixed’. (Which is to say our preferences wouldn’t be whipping around willy-nilly, which I don’t see as a controversial idea.)

If free will can withstand being determined for four hours, how will being determined for a few hundred million years make a difference?

I’m not disagreeing with the idea that choice is deterministic, and as I’ve made clear a couple of times now, randomness doesn’t help; the discussion would proceed basically identically if nobody had ever thought up the concept. But the key point is, again, that choice needs options; it needs either/or. On the usual conception of a deterministic universe, there just is no such thing. It’s like there’s two tunnels, and we’re on a train whose rails only enter one: there’s just no meaning to saying the train ‘chose’ that tunnel; the rails dictated where it would go.

To put it into a somewhat sloganized form: on both a standard deterministic and an indeterministic account, one need never talk about the agent making a choice at all to sum up the reasons why a given outcome occurred. On determinism, one need only refer to the boundary conditions of the universe and the laws of physics, and why you’ll have corn flakes for breakfast tomorrow is fully explained; there’s no question left over. On indeterminism, it’s the same, except one may have to refer to the decay of radioactive atoms, or whatever else your randomizer of choice might be.

This isn’t possible on my account (which really doesn’t introduce anything new, such as some mysterious causal power; it just puts what I maintain is the proper analysis to the way we know the world is). Saying ‘because of the boundary conditions of the universe and the laws of physics’ when asked ‘why did A happen?’ is simply false: the question is not answered by that. Whereas, I reiterate, on the standard deterministic account this would be a perfectly sufficient answer, and A’s occurrence is totally explained by it (and mutatis mutandis if we allow for random events). Instead, you must add another determining factor: whatever deliberations, or computations if you will, occurred within the agent’s mind in order to decide what to have for breakfast. Only upon adding this do we have sufficient data to determine the outcome.

So, the bottom line is that my model adds the agent’s choice as a necessary and indispensable factor in determining the ‘why’ of some outcome.

Again, random events have nothing at all to do with it. We might just as well think about a universe in which none ever happen (and it seems that for clarity, perhaps we should). The human mind may operate completely deterministically; what is important is that this operation is an irreducible part of determining a given outcome—which it isn’t, once more, on standard determinism, which only needs—say it with me—the boundary conditions of the universe and the laws of physics to answer every question regarding the ‘why’ of things.

I’m confused - it sounds like you imagine that determinists think that you could walk up to the universe seconds after the big bang, casually glance around a bit, look at the underside of a helium atom, and read off of it “begbert2’s dad doesn’t like cherries.” That is kind of totally not how we see the universe.

Instead, we operate on the assumption that to get from there to here you have to run through all the distance between - or to put it another way, you have to carry out all that “irreducible*” stuff that you consider an integral part of free will. Planets have to form, people have to evolve, etc. And then when you finally get to the present, inside the physical chunk of the universe where his head is located, the deterministic calculations will take place that lead him to send signals down his arm to avoid grabbing the cherries.

That might sound familiar.

The thing that makes us compatibilists is that we stare unflinchingly at the mechanics of it and say, “Wait a second - the universe is (probably mostly) deterministic! And the mental processes are deterministic too! And we still manage to make choices based solely off of our internal preferences at the time, which seems to be compatible with a best-guess definition of free will! I guess determinism and free will are compatible!”

(Compatibilists use a lot of exclamation points, apparently.)

  • I’m a little bothered by the idea that you think the mental processes to make decisions have to be irreducible. I mean, I know my dad doesn’t like cherries, so it’s not too hard to call whether or not he’s going to choose to eat cherries. Isn’t that me reducing things? (See the sketch below.)
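Something like this toy sketch, say (everything in it invented for illustration): a coarse summary of past behavior lets you call the outcome without simulating anyone’s brain.

```python
# Coarse-grained prediction from a behavioral summary; no neuron-level
# simulation required.
known_preferences = {"dad": {"cherries": "dislikes"}}

def predict_snack_choice(person, offered):
    if known_preferences.get(person, {}).get(offered) == "dislikes":
        return "declines"
    return "unknown"  # heuristic is silent; only the real process settles it

print(predict_snack_choice("dad", "cherries"))  # 'declines'
```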