The difference is night and day, since a feeling in itself is good for nothing and, in this case, even the definition has been tied to a feeling.
But let’s elaborate.
We gain confidence in models as we use them to make verified predictions and inferences.
Gravity is used to make countless accurate predictions every day on earth: heck, you yourself have mentioned expecting a marble to accelerate towards the earth when you let go of it. So we can validate the theory here on earth, but it was seeing the same forces acting predictably on planets that was the real confirmation. Without the theory of gravity we’d have no reason to expect the planets to be following the paths they do.
Meanwhile this idea of free will…what inferences can be made from this model? And bear in mind, I am not “anti” free will, I’ll change my mind in an instant if you can point to some demonstration or at least good and non-circular argument for its existence. Just bring it.
Sorry, I was trying to be funny. I meant that it actually does sound like a different take. Not sure yet if it’s a convincing one, but it’s at least better than the circle I seem to be stuck in at the moment.
Nothing dickish about it IMHO. I’ve seen the phrase used fairly often, here and elsewhere, to indicate sincere appreciation. (It’s a Simpsons quote, for anyone who doesn’t know.)
I’m not sure that there are any differences in predictive power, or in the inferences that can be made, between “moronic free will” and “compatibilist free will”. To the extent that I think I understand @begbert2’s position, it’s constructed such that there won’t be any differences. My position is rather that “moronic free will” aligns with my subjective experience of the world, has sufficient predictive power that it allows me to live in the world, and leaves enough space around it for philosophy to exist. “Compatibilist free will” in a deterministic universe claims to align with the “reality” of my subjective experience of the world, adds nothing to the predictive power, and sucks all the oxygen out of the room when it comes to philosophy. By which I mean that if the universe is fully deterministic and our “free will” is merely illusory then there’s really no “reason” for anything. If all the outcomes are predetermined, life is a bit like playing tic-tac-toe. It might be diverting for a moment but ultimately, what’s the point? And even if I were to concede that it’s true, I expect that it’s maladaptive. That is, if you value “truth” over being able to actually live in the world, you are likely to be outcompeted by creatures that hold the reverse set of priorities.
Let’s begin by acknowledging then that this is, at best, an argument of “You too!”
If you really thought there were specific things you could point to to demonstrate that this is a real phenomenon you wouldn’t start your argument like that.
What the hell does that mean? Predictive power doesn’t work like that.
I don’t get to say pixie dust exists because it’s the only thing that makes my life worth living, and I certainly don’t get to call that “predictive power”.
Yeah if you told a bee that it was predisposed towards seeking nectar it would, like, blow up or something. Because we need to believe our decisions are causally disconnected from the universe.
Well, it does for libertarian (“moronic”) free will. Not so much for compatibilist free will. Which is why I found it useful to point out that both the existence of the past and Christian theology argue that the ‘worm analogy’ is operating in force.
Yay! Let’s talk about choice! And what it takes to make a choice!
“Choice”, again, is defined as “an act of selecting or making a decision when faced with two or more possibilities.” The notable thing here is where it says “selecting or making a decision”. This requires an agent who is aware of the possibilities and capable of evaluating and selecting among them. Which is to say, an agent has to be presented with outcomes that are each possible (if we exclude the agent’s decision-making apparatus from the equation), and the agent must be aware that they are possible and be in a position to evaluate the situation and trigger a different reaction based on its evaluation.
This is the common definition of choice/choosing as used by English-speaking humans.
Your “too simple” example wouldn’t even be a choice if you dropped a human, so clearly you’re just being a jerk by proposing it. And when the ball strikes the pin of a pachinko machine, what is supposed to be the decision-making agent here? The ball? What control does the ball have over the situation? How is it supposed to perceive its options, and how is it supposed to consider its choices and make one or the other happen?
Well, let’s be extremely generous and presume that the ball can ‘observe’ concussive impacts against it, in that there’s probably a slight physical compression of the ball when it impacts something solid. And it’s likely that, depending on the precise physical composition of the ball, and the precise texture of its surface at the point of impact, it might rebound in an ever-so-slightly different way than a different ball would if it hit the same pin in the same way at the same velocity. So one could very tenuously argue that during each individual impact the ball executes a tiny amount of individual reaction to the impact and ‘chooses’ to rebound in a way that is ever-so-slightly different than another ball would, or than itself would if it “remade the decision” while in a different state of orientation. So in this slight way you could say that a ball makes a tiny decision via its internal state when reacting to an impact. (And this decision will only have the tiniest of effects on the resulting direction - in most cases it couldn’t possibly affect which side of the pin the ball falls down from.)
But this is of course not what you’re talking about - you talked about the ball choosing its path through the complicated machine. The ball is obviously unaware of the machine as a whole - nine out of ten pachinko balls don’t even have eyes. It can’t possibly be choosing the path as a whole because it is unaware of it - it is only ‘aware’ of individual impacts, so only they are the subjects of the tiny ‘choices’ it makes. It is not aware of entire paths through the machine, and doesn’t have the physical capability to respond to different whole paths and choose them preferentially based on its state. But you already knew that of course. This was just another stupid gotcha - or are you really unaware that determinists can tell the difference between a human being and a pachinko ball and don’t think that they operate the same way at the physical level?
Ok. I think my position is that the arguments for “compatibilist free will” or against free will altogether are unhelpful. That is, even if they are true, they tell us nothing useful. And that they have some more tenuous implications that are undesirable. For example, I still have no idea how you get from there to holding people accountable for their actions other than “it was predetermined that we would hold people accountable so we have no other choice”.
Sorry if I was unclear. “The sun will come up tomorrow morning” is not a very strong or detailed hypothesis but it’s more than good enough to allow me to predict that I will need to wake again same as I did today. And it hasn’t turned out to be wrong yet.
I’m not sure I understand the analogy you’re trying to make. If you were able to induce a belief into a bee that convinced it that storing honey for the winter was a bad idea, that would be maladaptive and it would probably lose the genetic race to bees that do store honey. If a person holds a philosophy that holds them back from participating in the world and procreating (or passing down their belief through other means), that philosophy is unlikely to outcompete other beliefs.
This really seems like question begging. How do we define an “agent”? What does it mean to be “aware”? And particularly, what does it mean to be “capable of evaluating and selecting” if the outcome of the evaluation and selection is predetermined by the state of the universe before the choice?
I’m really not sure why you feel the need to be uncivil in your replies. This is supposed to be Great Debates; please attack the idea, not the poster.
Yes, that’s exactly my point. How is a person faced with the decision between a ham sandwich and a turkey one supposed to consider its choices if the outcome is predetermined by the state of the universe before the choice?
I am unaware of that. How are they different?
Or how about this hypothetical: we drop a marble with a camera in it through a pachinko machine. Then we take that video and implant it as a memory in a human being so that the human believes themselves to have moved through the pachinko machine rather than the marble. The human’s brain post facto rationalizes reasons why they “chose” the path they took through the machine. Now did the marble “choose” its path? There now exists in the world a thought process that explains how every time the marble got to a peg, the choice of left or right was evaluated and decided. But the outcome of each choice happens to have been determined before the rationale. Am I getting closer?
It seems you don’t understand how determinism works. Still. So I will attempt to describe it simply. Again.
Determinism is when things react to things in some kind of causal manner. All systems where things react to things in some kind of causal manner are deterministic, to the degree that they are reacting like this. (Which is to say, the degree to which their actions aren’t wholly uncaused by anything at all - the degree to which they aren’t wholly random.)
If somebody punches you in the face, and you react to that by wanting them to be jailed because they punched you in the face, that’s determinism - the past action they did is having causal impacts on your current choices.
Of course humans are complicated entities that are consciously aware of many things and unconsciously aware of many others. So we usually don’t only make “punish them/don’t punish them” decisions based on a person’s past actions alone. Those actions exist within a history and social framework and personal thoughts, feelings, and morality that each play a factor in determining what we do.
Again, all factors that have causal impact on our decision-making process are determining factors. To the degree that the decision-making system’s results are influenced and dictated by the knowledge and beliefs and opinions and feelings in our minds, to that degree the decision was deterministically made.
The only things - the only things that are not deterministic factors are pure, unvarnished randomity. And if it is not purely random then it’s actually a mix of determinism and randomity - if something only veers left and right but not up and down, then something is determining that it doesn’t veer up and down. If a distribution follows a bell curve of probability rather than a flat probability curve then something is determining that too. In practical terms anything that is actually random also has determining factors limiting and shaping the way it is expressing its randomness.
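If a code-flavored illustration helps, here’s a toy sketch of what I mean - every number and the cartoon ‘bounce’ below are invented purely for illustration, not anybody’s actual model. The point is just that even when each step is random, the shape of that randomness (a bell curve, a cap, left/right only) is determined by the setup.

```python
import random

# Toy sketch: a "random" horizontal nudge whose randomness is still shaped
# by determining factors.  (All the numbers here are invented for illustration.)
def nudge():
    x = random.gauss(0.0, 1.0)        # the pure-randomity part: a bell curve
    return max(-2.0, min(2.0, x))     # determining factors: capped, horizontal only

position = 0.0
for _ in range(1000):
    position += nudge()               # it can veer left or right, never "up"

print(f"final horizontal position: {position:.2f}")
```

The randomity in that sketch is real, but everything about its shape is determined by the setup - which is all I mean when I say that random things still have determining factors.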
So, what separates determinists from everyone else, and deterministic models from other models? Simply that determinists don’t believe that there is actual randomity in the system at all. (Or maybe that what randomity there is is somehow limited such that it has no causative impact on anything.)
But even if the world isn’t wholly deterministic, it is still certainly partially deterministic. And it is still a fact that everything that matters about the human will and decision-making process - our beliefs, opinions, feelings, and the like - operates in a deterministic way. Because all that remains besides deterministic factors is pure randomness, and while pure randomness could impart unpredictability, the degree to which it does so is antithetical to human will and actual decision-making.
For example, suppose you happened to love someone and would by no means want to hurt them under any circumstances. If randomity existed in the decision-making process, in theory it could perturb your decision-making processes enough for you to murder them for no reason, against your own will, and against every conscious choice you have made.
And libertarian free will offers this as a feature.
Under determinism the outcome is NOT predetermined by the state of the universe before the choice. It is determined BY the choice. The process of making the decision is an integral part of getting from point A to point B.
You are arguing that the fact that we get to point B proves that we didn’t do the very thing that got us to point B. It’s incoherent.
You might as well say that when somebody takes an airplane from Detroit to London that the fact that they ended up in London proves that they didn’t take the airplane.
Oh yeah, and you wanted me to define some things. Apologies if I’ve already said all this before:
Aware (of X): Capable of detecting X and reacting to it by changing its state or behaviors based on it. (Note that “aware” as in ‘self-aware’ is a different concept.)
Capable of evaluating and selecting: Evaluation and selection requires a higher level of data processing, where options are identified in the abstract and evaluated via a series of internal state changes based on process and heuristics. Computer programs and humans are capable of this; your average marble is not.
Agent: Something that is capable of evaluating and selecting.
Fine - it was a deliberately absurd hypothetical presented solely as a gotcha, which your own position would also fall prey to, and which thus can’t possibly have been presented as an honest argument.
You’re not getting closer, because you’re conflating the ball and the pachinko machine as a whole. And also your understanding of how balls go through pachinko machines seems to be pretty dicey.
If the human had the experience of the ball, then they would remember a series of sharp impacts, mostly against one ‘side’ or another relative to their direction of movement. They might post-facto rationalize how strongly they rebounded, and the degree to which they gripped the peg and effected their rotation, but for the most part their experience would be analogous to if you tied up a human, dumped them in a raft, and sent them downriver through rapids, with only one big toe sticking out to maybe nudge them one way or the other if the toe happened to be the part of them that hit the rock. Pachinko balls have no arms, after all. Let me repeat that. Pachinko balls have no arms. So when a pachinko ball hits a peg in any way other than dead center to its direction of movement, nothing about its bounciness or friction can change which side of the peg it falls down. Our human pachinko ball would be aware of this lack of control over what was happening to it, and wouldn’t be able to post-rationalize that it had chosen the direction of fall for virtually all of the impacts.
It’s also worth noting that in a first-person perspective as a pachinko ball going through the machine, it wouldn’t even be aware of the path as a whole, so it wouldn’t post-facto conclude it had chosen it. It would just feel a bunch of impacts against it, and then it would find itself in a cup at the bottom. It wouldn’t have any way of even being aware that there were other cups it could end up in. It certainly didn’t choose the one it landed in, in the sense of selecting it as the most preferable of the cups available (which, again, it didn’t even know about.)
Now, if you for some reason wanted to bring the scenario a little closer to actual human decision-making, you could consider the machine as a whole to be the person’s mind. The pins would represent facts and information and mood and opinions within the person, and the ball would represent a ‘train of thought’. The train of thought would bounce off of the ‘pins’ on the way through the machine, though in this scenario the location, friction and bounciness of the pins would vary based on the nature of the part of the mental state they represent. If the person was very angry, for example, the train of thought could be bounced over to the set of pins representing information and thoughts associated with anger. If the person was only slightly angry it could be bounced over to a less-angry set of pins. And so the ball of thought would rebound off of various thoughts and feelings and beliefs on its way through the machine, and eventually land in a ‘cup’ representing the culmination of the decision-making process - the final decision.
In doing its post-facto rationalization about its decision, the machine would actually (since this is a new train of thought) run another separate ball through the machine, this time bouncing off pins representing its beliefs about its own decision-making process, and pins representing specific memories made during the decision-making process. It’s unlikely that the mental review of the past decision will accurately re-determine every pin involved in the previous decision - there are many pins in the machine, some of them less known to the machine than others. So the post-facto rationalization might be close to how it actually decided, or it might be completely wrong.
Again, this would be a way to actually use the pachinko machine as a good model of human thought. But what’s the fun in that, right?
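If anyone wants to poke at the analogy, here’s a toy sketch of that ‘pachinko machine as mind’ version in code. Every name, weight, and the two-option decision are things I made up on the spot for illustration - it’s the shape of the model, not a claim about actual neurons.

```python
import random

random.seed(42)  # fix the "initial state"; with it fixed, the whole run is deterministic

# Each "pin group" is a chunk of mental state that deflects the train of thought.
# (Names and weights invented purely for illustration.)
pin_groups = {
    "anger":    0.7,   # how hard this group pushes the ball toward option A
    "memories": 0.3,
    "ethics":   0.1,
}

def run_train_of_thought(groups):
    """Bounce one 'ball' off each pin group; the cup it lands in is the decision."""
    push = 0.0
    for name, weight in groups.items():
        deflection = weight * random.uniform(0.5, 1.5)  # tiny internal variation per bounce
        push += deflection
        print(f"bounced off {name!r}: deflection {deflection:.2f}")
    return "option A" if push > 1.0 else "option B"

print("decision:", run_train_of_thought(pin_groups))
```

Run it again with the same seed and you get the same decision every time; change the ‘anger’ weight and you can get the other one. Which is kind of the whole point - the decision is determined by the state of the machine, and the machine is you.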
If there were this “free will” thing, then punishing people for their actions (which is what contemporary criminal justice is primarily focused on) would be highly ineffective at best, because, upon completion of the punishment, the actor is then free to repeat the offense. This is, in fact, what we have historically observed, so “the consequences of their actions” do not seem to be of much, shall we say, consequence. Which could, I suppose, be an argument in support of “free will”, although it could also support determinism or compatibilism or some form of the Skinnerian model, or some other model entirely.
Some of the failure of the criminal justice system lies in its implementation. There may be a justice system design that could have a higher success rate (e.g., one which strives to connect the actor more directly with the effects of their actions, which could have broader socioeconomic applicability beyond just criminal justice). It is entirely possible that the “free will” concept is a net negative as a foundational element for justice.
I wanted to revisit this, because there’s detail to be plumbed here and this type of discussion is interesting to me.
A reasonable person would look at me talking about pins that move around and change their physical nature and say, “Whaaaat? That’s nothing like either a pinball machine or the neurons of a human brain! You’re a madman and your mom dresses you funny!”
To which I’d say you’re right! (Except leave my mom out of it; my wardrobe is my own fault). What’s actually going on here is more complicated. To continue speaking in terms of a pachinko machine, what looks like a single pin (and would be viewed in a post-hoc rationalization as a single pin) is actually a collection of smaller pins all grouped together, with each of the smaller pins having a fixed and unchanging location, bounciness and friction. A ball that ‘hits’ the larger pin actually enters this maze of smaller pins and bounces off a bunch of them in series, back and forth until it shoots out of the group in some direction or another. Even a slightly different path into the group would cause it to butterfly-effect around inside, and the cumulative effect of the smaller pin impacts could result in a dramatically different speed and trajectory for the ball coming out. And all of this would be invisible to the mind when it self-reviewed the process - at best it might have a general idea of the overall state of the pin group (“I’m very mad right now”) but the precise workings of it would be unknown to the review.
There’s another thing to keep in mind about the cognitive pinball machine (as if it weren’t complicated enough), and that’s that it’s a massively concurrent system, with multiple balls - many, many balls - being shot through it at once. Many of the balls don’t even leave the system, but keep bouncing around back and forth within the system without end. And these balls can impact one another, and other balls bouncing around in a pin group can change how your ‘train of thought’ ball rebounds off of it. So you might have one ball (or many balls) bouncing around in your “emotion” pin group making it ‘more mad’ in how it redirects other trains of thought that impact it. Which matches how different trains of thought and active emotions can linger and impact thoughts and decisions happening around the same time.
It’s all wildly complicated and nigh-unmappable, which probably explains why we haven’t done a great job of simulating the human pachinko machine yet. And that’s with all of it operating completely deterministically.
I saw a movie where someone chose an item from a menu. The second time I saw the movie, I knew it was predetermined which item he would choose. Nonetheless, he successfully chose an item from the menu the second time.
If you don’t know what the predetermined output is, your brain still goes through the motions of selecting, because that is what brains do. That is what they have to do. They can’t just say “I know this has already been figured out, so just skip to the next part”.
As far as the usefulness of the model - you might be less inclined to punish or blame if you believe that another’s actions are due to genetics, experience/training, or randomness.
But make no mistake, actions have consequences. Those consequences provide updated inputs which influence people’s behavior, even if they have no free will.
We understand a great deal about how the brain functions – not everything of course and there are still massive mysteries to solve – but we can see that it is a complex neural net that is also affected by the presence of various neurotransmitters, the regions of the brain that store memories etc.
Compatibilism should be the very obvious interpretation we should come to about our decision-making process: we can see that it’s a physical, broadly deterministic process but we also see that the brain is the crucial element, it does not receive commands from elsewhere.
However, in our world, there’s this whole baggage of “free will”. An incoherent mess coined before we understood anything of neurology that now infects discussions of decision making. So it is necessary for people like begbert2 to explain Compatibilism as a contrast to that.
“The sun will come up tomorrow morning” is a good example of a readily-testable claim.
Now do the same for free will. That’s what I was asking you.
What a ludicrous framing and understanding on your part. Since there are Compatibilists with children, and since me participating in this thread is a form of “passing down my belief”, I guess that ends that, right?
Bravo for this. This little passage gets to the heart of why I believe in libertarian free will.
I’ve followed along with the discussion Lo! these many posts and been continually frustrated by the duelling definitions. I’ve done this dance with Begbert before and had no desire to repeat the performance but it was worth ploughing through all the muck to get to this bit.
I read a fascinating essay recently by a philosopher who believed in a fully determined universe (decidedly not compatibilism) and who interviewed all the other determinists that she could find to see how they lived their lives. She found that not a single one of them lived their lives as though they believed their own words. When it came to everyday living, they all behaved as though their decisions were meaningful and had consequences.
She decided to give up decision making and to let decisions just happen to her. She has a lovely description of what it is like to wake up and say, in the third person, “I wonder when she’ll decide to get out of bed.” Her experience of living what she considered to be an authentically determined life was very different — according to her — from the lives lived by people who merely talked the determinist talk without walking the determined walk.
I don’t recall if I responded to this, but the simple response here is:
In a deterministic/compatibilist universe, we have the ability to predict based on the assumption that the future is caused by the past. Predicting people’s behavior is limited by the fact that we have limited access to information about their mental state and experiences, but with what information we have we may make predictions with reasonable confidence based on prior behavior.
The “moronic” model has somewhat less predictive power than the deterministic model does, because it just uses the deterministic model’s predictions while also saying it’s distinctly possible that anyone might randomly start stabbing their family while sobbing and screaming that they don’t want to be doing that. (Or however it is libertarianists think the magical part of their free will expresses itself.) While leaving enough space around it for fantasy to be entertained.
I wonder whether the philosopher was interviewing compatibilists, or other people who (like herself) also didn’t understand how deterministic universes work. For in a deterministic universe decisions are indeed meaningful and have consequences. That’s sort of the whole point of a deterministic universe - that the decisions you make are an important part of determining how the future will go.
The “determined walk”, as described here, sounds a lot like “being a moron and lying to yourself a lot.” (Which, to be fair, I think describes a lot of philosophy.) Who does she think is deciding to get her out of bed, if not herself?
Gonna try to respond to a bunch of things here. Apologies in advance if I miss something or don’t get to you.
This seems to me like a massive excluded middle. No one (as far as I am aware) is arguing that the past has no influence on the future. But I took you to be arguing (and I haven’t seen anything to contradict) that the past is determinative of the future. That is, for any given state of the universe at time T, there is one and only one possible state of the universe at time T+1.
How does probability fit into such a model? If I understand you correctly, in a deterministic universe “probability” is just down to a lack of information. That is, we say a coin flip is 50/50 because we don’t know enough about the initial conditions of the flip. But if we knew exactly how hard it would be flicked and at what angle and the wear on the coin and the speed of the cross breeze (etc.) we would know that the flip would definitely land on heads. Is that right?
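Here’s a toy sketch of what I understand you to be saying, with cartoonishly fake physics and numbers I just made up: once the initial conditions are fixed, the “probability” evaporates.

```python
# Toy sketch of "the coin flip is only 50/50 because we lack information."
# The physics is cartoonishly simplified and every number is invented;
# the point is just that fixed inputs give a fixed outcome.
def flip(initial_speed_rps, flight_time_s, starts_heads_up=True):
    half_turns = int(initial_speed_rps * flight_time_s * 2)   # half-rotations completed in flight
    lands_same_side_up = (half_turns % 2 == 0)
    if starts_heads_up:
        return "heads" if lands_same_side_up else "tails"
    return "tails" if lands_same_side_up else "heads"

# Same inputs, same answer, every time.
print(flip(initial_speed_rps=38.5, flight_time_s=0.44))
print(flip(initial_speed_rps=38.5, flight_time_s=0.44))
```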
Therefore, if I could know everything about what’s in my own head and the position and momentum of every particle that will interact with my brain right up to the point where I have to choose turkey or ham, I could, in principle, know which I will choose. Right? So, if I work out that, based on that perfect knowledge, I will definitely choose ham, what happens if I change my mind and choose turkey instead? Am I not free to do so?
There is a real paradox here that I’ve seen elaborated better elsewhere. Sorry for my lame version of it.
And aside from the logical paradox, this doesn’t seem to comport with how we believe the real world works since the advent of quantum mechanics.
Well, that at least does seem to comport with how the real world works. In my experience, people do a lot of random shit. But I’m pretty sure that’s a straw man with respect to free will.
Ok. Now I’m back to square one again. The state of the universe is determined by the choice? Where did the choice come from?
I am absolutely not arguing that. I have no idea what you’re on about.
Ok, cool. Those are basically the same definitions I would use. I think where we’re running into trouble is this: as far as I can tell even the most complex evaluation and selection tool, if its output is knowable in advance, is just a marble falling with extra steps.
On the other hand, if the most you could possibly say, even with absolutely perfect knowledge of the exact state of every particle in the universe before the agent chooses, is “Well, it’s very likely to be X, and it might be Y, and it definitely won’t be Z”, then we’re onto something. That would be neither predictable nor entirely random.
You’re fighting the hypothetical here, which admittedly was not the world’s best hypothetical.
Let’s try a similar one with fewer complications and (I hope) a bit more clarity.
We have a series of volunteers watch us hold up a marble and release it.
We tell the volunteers (falsely) that they are controlling the marble and they can choose whether it falls or hovers by shouting “fall” or “hover” before we release it.
We wait for someone to shout “fall”.
We ask ourselves - did the physical system that encompasses the marble and the human “choose” that the marble would fall?
My answer is “of course not”. The choice wasn’t free because we (the experimenters) put our thumb on the scale. Even though an agent went through a process of evaluation and selection, we knew the answer in advance. The marble/human system was never free to make the marble hover.
I’m not sure that the ineffectiveness of our existing penal system is an argument against free will as much as it’s an argument for the venality and ineptitude of the prison industrial complex.
Right. I think it’s the case that people in general respond better to incentives than punishments. And some people have difficulty in evaluating the consequences of their actions. We could do better. But without a belief in something like free will, why would we even try (other than that we were predestined to try)?
I think I agree with most of that. But I would take it further to say that it isn’t just “nigh-unmappable” but actually in principle unmappable and full of quantum weirdness such that it is neither fully determined nor random.
Did he? I don’t believe the recordings on magnetic tape (or laser disc or what have you) actually have the agency to choose.
I choose to call that a victory.
“The sun will come up tomorrow morning” is a sad and flabby hypothesis. The sun does not in fact “come up”; the Earth turns and the bit that I’m on is placed in the path of its radiation. It’s somewhat testable in that if I only mean tomorrow, March 23, 2021, on a specified calendar and at a specified point on Earth, I can wait and see. But it’s not fully generalizable. Nevertheless, despite all its flaws, it’s perfectly serviceable for my life.
My belief in free will allows me to predict that when faced with choices I will be able to make them and that there is more than one possibility open to me in my life. Let’s take an example where we would both agree that there are no alternative possibilities open to me in my life: I’m falling fast and about to hit the ground with a force that will surely kill me. That doesn’t sound like a happy place to me. If I were to live under the assumption that there was only one possible path through my life, I believe that I would be equally unhappy but for a slightly longer time.
If Compatibilists reproduce more/faster than people holding other philosophies and/or spread their beliefs more effectively, they will outcompete others. If not, they won’t. I don’t see what’s ludicrous about that. All I can say is that for me not believing in free will would be an impediment to passing down my genes and/or memes. If I’m unusual, so be it.
None of that is surprising to me but thank you for clarifying your position.
Given the quoted part above, is meaningful communication possible? Or are we all really just talking to ourselves and any meaning that happens to inhere in some other creature overhearing is just coincidence?
Thank you.
Ooh! I predicted this answer correctly. I might actually be learning something about what you’re trying to say!
Do you ever get itchy from all the straw flying around after you hack away at it like that?