@Riemann has it. I agree with the position that free will, as normally understood, is basically incoherent.
And yet, there is something there, is there not? It certainly feels like we have free will; that we make choices that are at least somewhat independent of external input. How to reconcile that?
My view is that it basically comes down to it being impossible for a computational device (brain or otherwise) to have full introspection into its own behavior. That leaves us unable to reliably identify the origin of our choices, and so it seems like they could have been otherwise.
It’s an illusion, but it’s a really good one, because it’s essentially inevitable. A superintelligent computer will run into the same problem: it still can’t fully simulate itself, and so some of its own decisions will be inexplicable to it.
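To make the self-simulation problem concrete, here’s a toy sketch in Python (the names `predict` and `contrarian_agent` are mine, purely illustrative, and this isn’t meant as a rigorous proof): a predictor whose only way of predicting an agent is to run it can never settle on an answer for an agent that consults that very prediction and then does the opposite.

```python
import sys

def predict(agent):
    """Toy 'predictor': the only way it knows how to predict an agent is to run it."""
    return agent()

def contrarian_agent():
    """An agent that asks for the prediction of its own choice, then defies it."""
    forecast = predict(contrarian_agent)  # simulate myself deciding...
    return not forecast                   # ...and then do the other thing

if __name__ == "__main__":
    sys.setrecursionlimit(200)  # keep the inevitable regress short
    try:
        print(predict(contrarian_agent))
    except RecursionError:
        print("The simulation never bottoms out: the predictor would have to "
              "finish simulating itself before it can finish simulating itself.")
```

It’s roughly the same diagonal move as the halting problem: the regress isn’t a quirk of this particular predictor, it shows up for any system that tries to fully model its own decision-making from the inside.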
True. People vary. I think I’d be more likely to steal (in a circumstance in which I was unlikely to get caught) if I thought I had no choice in the matter.
I can’t disentangle that sentence, or at least am not sure I’m understanding it. Are you saying that it’s possible to not believe in free will but still believe that one has a responsibility to keep social contracts?
Maybe it is, but in that case I’m not sure what’s meant by “responsibility”. If I have no control over my actions, how can I be held responsible for them?
(I can think of multiple reasons to not steal even though I don’t believe in God. But the ones that apply even if I won’t get caught and nobody’s even going to be sure that there was a theft – paper money could have blown down the drain or been incorporated into a bird’s nest – have to do with what sort of person I am. And, if I feel that I have no choice in what sort of person I am – why would it then matter to me?)
Why? It doesn’t exonerate you. There is no “you” that floats helplessly outside of your choices, you are the software that makes your choices.
There are still good people and bad people. Bad people are bad software that makes bad choices. Why would knowledge of the absence of contracausal free will make you any more okay with knowing that you are one of the bad ones?
Without free will, no one would be able to enjoy it. I believe free will and consciousness are inextricably linked. You can’t be aware of yourself if you can’t actually decide how you want to act. That decision is the first spark of self. Your awareness of self is the awareness of the decisions you have made, which are potentially different from the decisions someone else would have made.
For what it’s worth, there are plenty of experimental data showing that we make “conscious” decisions (decisions that seem to involve deliberation) seconds before we are consciously aware of making them.
Although personally I think these results are more interesting for what they tell us about how consciousness works than for debunking free will. The problem with free will is that it’s not coherently defined, so it’s not as though it can be tested; it’s not even wrong.
Suppose somebody’s sliding off a cliff, and I’m trying to hold on to them from behind a railing to keep them from falling; but I’m not strong enough, I can’t hang on, and they fall to their death.
It wasn’t my fault; I just didn’t have strong enough muscles. I couldn’t help it.
Would that make me “one of the bad ones”? Because I had no ability to hold on, because my muscles couldn’t do it, you would think – and expect me to think – that I was a bad person?
If I genuinely couldn’t help being a thief – because I had no choice in the matter – why would that make me “one of the bad ones” for doing something I couldn’t have avoided doing?
Of course not. You didn’t do anything bad. Trying to help someone to the best of your ability but failing is not a bad act.
Whereas a thief is bad because stealing is (usually) a bad act. It harms others.
Brains compute, and choices are computation. Brains that compute bad choices are bad brains. That’s all that “bad” can possibly mean.
I think that because you cannot deny that brains obey the laws of physics perhaps you are trying to rescue your strong intuition of “free” choice by going up a level, and imagining a meta-you that is free in its meta-choice of “who to be”, and that you could only be truly bad if you had made the wrong “free” meta-choice.
But there is simply no such thing as a “free” choice at any level, it is an incoherent idea. You are the brain that does the deterministic computation, not something that floats above the laws of physics in some incoherent way free of causation yet still causing.
Christianity and our justice system are based on this fallacy that we should only be punished for incoherently defined “free” choices, and that retribution for making the wrong “free” choice is justified.
But there are no “free” choices, only choices: computation. We should certainly punish bad people who make bad choices and harm others. But only for deterrence, or to segregate them for the safety of others. It is immoral to punish people solely for retribution, because this “freedom” is an illusion.
Depends on why, and on what we mean by “bad”. You use the example of theft. Let’s say one person is a thief because circumstance has placed him in a position where he must steal or watch his children starve. Another steals because she enjoys the thrill of theft.
Neither of them is truly free. The man is only stealing because a combination of his upbringing and innate biological instincts as old as placental mammals forces him to value his children above devotion to the law. The woman is only stealing because, by a quirk of her nature or her upbringing (or, more likely, a combination of both) she finds herself unable to resist fulfilling her perverse desire to steal. Had a different sperm with a different set of genes been the first to reach the egg in her mother’s womb, this combination of traits would never have arisen.
What does it mean for someone to be “bad” under this framework? I’d say it means they are someone whose “software”, to borrow @Riemann’s analogy, is “programmed” (through genetics, upbringing/experience, or circumstance) in such a way that it is likely to respond to standard situations in ways that are harmful to the individual or society.
What does that mean? Well, someone like our guy stealing to feed his kids is not in what we would consider a normal situation, so the fact that he behaves in a way that is harmful to society is not necessarily an indication of a problem inherent to him. Give him a secure way to feed his kids and the thefts stop.
(Or maybe he is now traumatized by what he has been through and cannot stop stealing. In that case there is an issue that follows him out of his circumstance, and that can be addressed with compassion by treating the root cause.)
Meanwhile our other example is stealing for the thrill of it. That’s clearly not healthy or normal behavior, but again, we should address it with compassion by seeking to understand the root cause. Why does she feel this way? Maybe she suffered a past trauma and this is her way of dealing with it. Maybe her parents never taught her to respect society for one reason or another. Maybe she’s a massive bigot and only steals from establishments owned by one ethnic group. Maybe her genetic makeup caused her brain to be literally incapable of comprehending certain concepts, one of them being property rights.
In any of these cases, we want to get to the root cause, then find a solution. Maybe they need therapy. Maybe they need a job and a community so they can feel bound by a social contract. Maybe they need permanent psychiatric care.
It doesn’t, but that’s not the point I was making. The argument is typically that the illusion of free choice is adaptive, because if we didn’t have it, we would behaviorally deteriorate in some manner. But there are easy scenarios where we behave identically without that illusion, so that just doesn’t hold water.
Nowhere does this purport to prove that subjective states don’t modulate behavior; in fact, if they didn’t, there would be no need for e.g. the feelings of compulsion or external control I mentioned in the scenarios I gave.
Now, there is a broader Darwinian argument that can be raised against the fit between subjective experience and behavior, but that’s a bit more complex, and while it’s interesting, I don’t think it ultimately succeeds. But nothing of that sort is needed for pointing out the unsoundness of the evolutionary argument for the illusion of freedom.
This is getting it exactly backwards. What the examples of settings in which the illusion of freedom is absent without any impact on reproductive fitness show is that evolution has no handle, so to speak, on which to act to produce that illusion. At best, it is one out of a number of possible states yielding identical behavior, between which evolution can’t distinguish, and between which choice hence can only be a matter of chance. But this undercuts the argument that we have the illusion of freedom in order to increase our fitness.
Sure. I’m always motivated to point out flaws in an argument. It’s the only way to get better arguments!
Not sure what you think I’m projecting. As noted upthread, I don’t think that the question of free will is ultimately that impactful; we certainly don’t seem to only be able to enjoy settings in which we have agency, as demonstrated by the thrill of a rollercoaster ride. I was merely giving an illustration of the sort of thing one might have in mind when arguing that the loss of the illusion of freedom lowers reproductive fitness.
So do you think that retributive justice would be morally appropriate if we did have free will? If so, I can understand why you’re so opposed to the mere suggestion. But to me, it’s an abhorrent notion either way.
The point is that there are scenarios where whether doing something makes us happy or miserable simply doesn’t figure in whether we do it—like ‘alien body syndrome’, or constant compulsions, or a feeling of being possessed, and so on. So there’s no reason for the subjective illusion of being free to be preferred to these options. And that being the case, the reason we feel free just might be that we actually are.
We might not be, but the point is just that it doesn’t matter whether we are.
I don’t think anybody assumes this. The point is just that this is part of our internal computation that leads to a certain course of action (one we assume to be less conducive to reproductive fitness), and which, because of that, is not free.
This is an interesting point that I haven’t yet been able to find a robust argument for. In some way, it seems to me that there needs to be an opposition between the will and the world in order for something to appear in conscious experience. If we just went with the flow of things, so to speak, we would never bump into the world, but would be like Leibniz’s windowless monads without pre-established harmony, or Epicurean atoms without the clinamen—moving alongside one another, without ever really interacting.
I don’t think this follows. Just because evolution has arrived at scenarios where the illusion of free will exists, while also arriving at others where it does not, doesn’t mean that evolution cannot distinguish between strategies that result in free will and strategies that don’t. For example, it could be that complex reasoning and planning can be done either with or without the illusion of free will, but that allowing for it is more energy-efficient for the brain than using strategies that disallow it. This would allow both situations to potentially evolve but still let evolution favor one over the other.
This is perverse. When I’m arguing that contracausal free will is an incoherent concept I obviously could not possibly think that it justifies anything. But for most of the world free will is the justification for retributive punishment. It is explicitly the basis for Christian judgement, for God sending people to hell if they make the wrong “free” choice; it is at the heart of our justice system. We punish people because we imagine they could have done otherwise.
If you can convince the rest of the world of that without debunking free will, I will stand by and cheer. But your personal enlightenment is not a good reason to ignore the rest of the world’s explicitly stated justification for retributive justice.
If you accept the obvious fact that subjective mental states can modulate behavior, then how on earth can you simply assume your desired conclusion that “there are easy scenarios where we behave identically” with different mental states?
This is just completely flawed logic. The existence of possible counterfactual settings does not prove that the subjective mental state “illusion of free will” is not adaptive.
My analogy with insulin is apt. We can easily come up with alternative settings, with molecular mechanisms that could have evolved to regulate energy supply that don’t use insulin. But we cannot simply assume that these would all be equally good ways. And even if we could show that there were (say) two other equally good ways to regulate energy supply, and that it was therefore purely a matter of chance which of the three ways was discovered and implemented by evolution, that does not imply that insulin is not adaptive as one of the possible equally good ways to regulate energy supply.
@Half_Man_Half_Wit ETA to be clear: I am certainly not claiming that we have any evidence that the illusion of free will is necessarily adaptive. I think it’s more likely that it’s a spandrel, an epiphenomenon. I think @Dr.Strangelove’s suggestion that it may derive from the fact that a computer cannot simulate itself is a compelling idea.
I’m just pushing back on your unfounded claim that it cannot possibly be adaptive.
The novel Blindsight even posits that it is maladaptive, as humanity finds out when it runs into a being that lacks this illusion and is more efficient as a result.
On the adaptive side, we can speculate that an illusion of “free deliberation” somehow makes us reason better. But there are more mundane ways that the illusion of free will undoubtedly has some effect on fitness in a cooperative species, since the belief that someone who harms you could have done otherwise is surely the primary emotional motivation for revenge. We have much stronger feelings about a human who harms us than about a wild animal. In cold evolutionary terms a strong tendency to want to inflict revenge could increase fitness, while being morally repugnant; or it could simply be maladaptive.
Of course, we also have to remember that in order for natural selection to operate, there must be heritable variation. It may be that the illusion of free will has some effect on fitness relative to a hypothetical state where we lack the illusion. But if the illusion is always present in any conscious being above some level of intelligence (as @Dr.Strangelove suggested might be the case), then the variation would never have existed, and it would not be adaptive.
To be clear, I wasn’t presenting the model in Blindsight as something that I think could actually exist - to be honest I’m a bit unsure about that. I did a lot of reading about philosophy and consciousness after reading that book, so at the time there was quite a bit that went over my head. I think I need to re-read it and see what I make of the alien being in a post-ChatGPT world.
Me too. I didn’t comment on that specifically because it’s been over ten years. I remember I loved the book, but I’ve actually forgotten a lot of the detail of the plot. One of the advantages of deteriorating memory: you get to reread things without knowing what’s going to happen…