Yes, but it can be argued that the flip of a macroscopic coin can be purely deterministic, whereas a quantum fluctuation cannot.
I don’t see how it makes a difference in how you make your choice. So long as you are unable to predict it, it is not deterministic to you.
I agree. It’s not an argument for free will, just an argument against having our will necessarily predetermined.
No. Did I say indeterminacy was free will? I’m pretty sure I didn’t.
And I’m saying it can’t. Digital computers can only approximate randomness and continuous analogue phenomena, regardless of how much power you give them. That means only a very rough simulation would be possible, and that’s not what’s being proposed in the “determine all details of the brain’s state” type of argument.
It’s both.
A nonsensical question unrelated to anything I’ve said. Once again, I haven’t said that indeterminacy is free will, or that merely being an indeterminate system means having free will, because that would be absurd.
As I did say, I think the question of determinacy isn’t really relevant to the question of free will, because I think free will is about agency. I do also believe human actions can’t be fully determined, but that’s a side discussion I responded to.
If you are a many-worlder, and I lean in that direction, then the universe is deterministic. That quantum fluctuation follows the Schrödinger equation exactly. The fact that there are two of you, one following one predetermined path and one following the other, doesn’t mean that those paths are not predetermined.
There are random radioactive decays going off in our bodies and our brains constantly, and at least some of them may have an impact on the eventual outcome of a decision.
Randomness is not what creates free will. Free will is simply when we feel as though we could have made a different decision than the one we did. Whether or not we actually could have is irrelevant to whether or not we think we could have.
I took it that you implied that it was a necessary part. Just seeing where it breaks down and randomness becomes free will.
They can approximate continuous analogue phenomena to whatever level we choose and are able to simulate. If we are actually getting down to simulating the atomic interactions of the brain, then there is no longer any approximation involved. If you need randomness to have free will, a computer doesn’t need to approximate that: it can monitor the decay of a radioactive substance and get as close to randomness as the universe allows.
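To make that concrete, here’s a toy sketch in Python (purely illustrative; the operating system’s entropy pool stands in for the radioactive source):

```python
import os
import random

# A pseudorandom generator is fully determined by its seed:
# re-seeding reproduces exactly the same "random" sequence.
rng = random.Random(42)
first_run = [rng.random() for _ in range(5)]
rng.seed(42)
second_run = [rng.random() for _ in range(5)]
assert first_run == second_run  # deterministic; only an approximation of randomness

# os.urandom() instead draws on the operating system's entropy pool, which is
# fed by physically noisy events; a Geiger counter wired to the machine would
# play the same role, just more dramatically.
print(os.urandom(8).hex())
```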
And I don’t think we need to get down to anywhere near the level of atomic interactions in the brain in order to simulate it to anyone’s satisfaction, including the simulated being’s.
We’ll have to disagree here. I don’t think that there is much room for philosophy in the science of meteorology. Room for more and better data, faster and better computers, sure.
The weather has just as complex interactions with itself and its environment as the human brain does. Just as much randomness affecting its evolution. If such complexity and randomness creates free will in the human brain, why would it not create free will in the weather?
From what you have said about randomness being necessary to simulate free will, I’m not sure that squares with saying that it is not relevant; I’m not sure which side you are on there.
Anyway, what exactly is your definition of agency in your statement? Is it the ability to make a choice without being influenced by external factors?
Then it’s no longer just a digital computer, it’s a digital computer and an analogue randomness generator. So then we leave the realm of the purely computational.
Where did I say anything like that?
Yes. A lack of external coercion.
Can such a thing exist in the human mind?
Do you know what the word “external” means?
Thanks. I blame myself.
What about the “Trolley Problem”? You know, the hypothetical where you are driving an out-of-control trolley and have the choice to switch between two tracks, one with five people on it and the other with only one person.
You have “free will” in the context of one of two options. But your “free will” is still on rails, so to speak.
Although IRL, I don’t believe in a “lack of free will”. At least not in any sense where the future is pre-determined and whatever you end up doing is the result of whatever mechanism determined that that is what you will do (regardless of whether you think you thought of it or not).
I once saw a creative solution to this problem, though I am not sure it would really work: throw the switch after the trolley’s front wheels have gone down one track, forcing the back wheels to start down the other. Ideally this would de-rail the trolley and you would avoid hitting either group of people.
Of course you yourself might end up getting killed in the process, and undoubtedly the story-teller would then reply with something like “congratulations, you just killed a whole kindergarten class of children who were out walking beside the tracks”. Sometimes you just can’t win no matter what you do, causing me to believe that free will just doesn’t exist in those kinds of situations.
I think the trolley problem is taking us off track, so to speak.
As I’ve said, I think the whole concept of “free will” is silly, but even those who believe it defines a meaningful concept would be quite willing to acknowledge that there are hopeless situations.
Indeed, if we’re picking a hopeless situation to illustrate that there’s no free will, it actually concedes the point, as it implies that any situation where there are one or more entirely “good” options would (somehow) be a demonstration of free will.
I don’t see how that follows (that free will doesn’t exist in those kinds of situations). As I understand it, free will doesn’t require that we be free to choose the results of our actions. If an action has unintended or unwanted consequences, that doesn’t mean we weren’t free to choose it.
If an evil genie or a monkey’s paw grants us three wishes, but sees to it that they are granted in malicious ways that we wouldn’t want to happen, that doesn’t mean that we weren’t free to choose which wishes were maliciously granted.
“Free Will” exists but isn’t practical because an organism’s prime directive is survival, and survival necessitates a restricted form of behavior. I have to work, maintain a domicile, eat a sufficient amount of food for required energy, behave in a certain manner to maintain my job and freedom, etc.
So, we have Free Will but cannot exercise it without restraint.
I do not know what “free will” really means, except of course the subjective feeling of “being able to choose”.
So let me propose the following thought experiment. Please comment on whether it would discern the existence of such a thing.
I sit in a chair in front of a display. Every minute it shows two images, one on the left and one on the right. The images can be anything: faces, geometrical drawings, numbers, etc. I must then press a “left” button if I “prefer” the left image and a “right” button otherwise.
However, the room I am sitting in is not a regular room. Invisible to me, its walls contain the latest, greatest, fastest state analysis machine. It will capture my complete internal state down to the last subatomic particle, with the best accuracy quantum mechanics allows, every microsecond.
Every minute the machine will be provided with the images I am about to be presented with, and must quickly decide which image I will prefer. Only after the machine computes its answer am I shown the (same) images to choose from.
It is to be expected that the machine will be right most of the time. Even a close acquaintance might guess my preference better than chance. However, I require perfection! Even if there is only ONE discrepancy, it means that I have Free Will.
There is a conceptual bug I haven’t found a solution to: my internal state changes between the time of the analysis and the time I push the button. But surely the machine could make the necessary extrapolation, couldn’t it?
So, in short, the question of whether such a machine, one that will always predict my answer, could conceivably be built is equivalent to the question of whether I have free will.
The above construct may be seen as trivial, but at least it defines the concept of free will in a more rigorous way, I hope.
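If it helps, the whole setup can be written as a toy loop. Everything below is of course hypothetical; analyse_state, machine_predicts and i_choose merely stand in for the analyser and for me:

```python
import random

def analyse_state():
    """Stand-in for the wall analyser: a snapshot of my internal state."""
    return {"mood": random.random(), "attention": random.random()}

def machine_predicts(state, left_image, right_image):
    """Stand-in for the machine's extrapolation from the captured state."""
    return "left" if state["mood"] < 0.5 else "right"

def i_choose(left_image, right_image):
    """Stand-in for me pressing one of the two buttons."""
    return random.choice(["left", "right"])

discrepancies = 0
for _ in range(1000):                              # one trial per minute
    left_image, right_image = "face", "triangle"   # whatever the display shows
    state = analyse_state()                        # captured BEFORE I see the images
    prediction = machine_predicts(state, left_image, right_image)
    actual = i_choose(left_image, right_image)
    if prediction != actual:
        discrepancies += 1

# My criterion: perfection. A single discrepancy over any number of trials
# would count, on this definition, as evidence of free will.
print("discrepancies:", discrepancies)
```

The question is then simply whether a real machine could force discrepancies to stay at zero forever.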
That depends on a certain interpretation of indeterminism (and by extension, of the role of physical law) that need not be right. Essentially, it stipulates that whenever the universe (presumably, the laws of physics) doesn’t exactly specify the next event to happen, it throws a coin and imposes that outcome upon its inhabitants—call this the ‘prescriptivist’ stance.
But now consider a game of chess: at any point, there will be a probability distribution over the next move made by a player, influenced by factors such as their skill. Certain moves will be excluded by the laws (the rules of chess), but they don’t uniquely determine the next move. But if the player has free will, then this setup alone does not constitute any impingement on their freedom—they are completely free to take any of the legal moves. The probability distribution is merely descriptive of what they are likely to choose; the indeterminism of chess simply means that there are options to choose between. So a combination of laws and indeterminism is perfectly compatible with free will (provided that’s a sensible notion in the first place, which I’ll get to).
To make this clearer, consider a simple toy universe with one entity which can freely choose between two options. That universe’s history can be written as one, perhaps infinitely long, bit-string. As long as its choices aren’t completely random—and nothing about the notion of free will demands this: if I’m free at all, I’m free to always choose cornflakes over the peanut butter sandwich for breakfast—then this bit-string will be compressible to some degree. That is, its information can be represented by means of a (typically short) algorithm, and a random seed. But that’s of course exactly the way the world appears to us: as a law, embellished by occasional moments of randomness.
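As a quick illustration of the compressibility point (just a toy sketch in Python; the 95% cornflakes figure is made up):

```python
import random
import zlib

random.seed(0)              # only so the example is reproducible
N = 10_000                  # ten thousand breakfasts

# A free but habitual chooser: cornflakes ('0') about 95% of the time.
habitual = ''.join('1' if random.random() < 0.05 else '0' for _ in range(N))

# A chooser who flips a fair coin every morning.
coin_flipper = ''.join(random.choice('01') for _ in range(N))

# The habitual history carries less information, so it compresses much better:
# it is closer to "a short law plus a few random exceptions".
print(len(zlib.compress(habitual.encode())))      # noticeably smaller than...
print(len(zlib.compress(coin_flipper.encode())))  # ...this one
```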
So nothing about this can be in conflict with free will—in particular, claims that ‘science tells us there is no free will’ are entirely barking up the wrong tree: science tells us nothing at all about what makes events happen, it merely gives a descriptive account of their happening. The rest is metaphysics.
But then, that’s of course not the real argument against free will. Ultimately, it’s claimed, the notion itself is incoherent from the start—which, if true, of course makes any appeal to science, to determinism or chance, facile. After all, it’s supposed to be both definite—there is one, and only one, choice made—and yet, undetermined—as otherwise, whatever determines it by proxy determines the choice’s outcome—without being arbitrary—as a choice made without cause isn’t a choice at all. So, the ‘free’ and the ‘will’ parts of ‘free will’ just seem to pull in opposite directions.
But this misses the option that the will determines itself. If the reason why we willed A over B is just the will, then there’s no external determination, and no arbitrariness. Of course, this immediately collapses into circularity—as Schopenhauer put it, ‘a man can do as he wills, but not will as he wills’. We can choose whatever we want, but can’t choose our wants. Otherwise, we face an infinite regress: if my will0 makes me choose cornflakes over peanut butter, then some will1 must determine my will0 to be cornflake-wanting rather than peanut butter-wanting, and some will2 determine my will1, and so on, either to infinity and beyond, or until we meet a magical regression-breaker.
But then, what of it? Any account of how things happen ultimately faces the same problem. If things happen according to an unbroken causal chain, then either we have to continue this chain infinitely into the past, or we have to invoke some ‘first cause’ to begin it. If things happen as per the dictates of natural law, what dictates them, in turn? Randomness, similarly, hinges on ‘infinitary’ notions, although it’s not as obvious: mathematically, producing a truly (algorithmically) random string requires solving an undecidable problem (the halting problem), which could only be done by a machine performing infinitely many steps of computation.
In the end, science leaves how events occur a black box; we habitually paper over this lacuna with notions such as ‘causality’ or ‘randomness’ or ‘natural law’, but none of those notions is any more well-founded, conceptually, than that of ‘free will’. So at the very least, the matter can’t be settled on these grounds. In my opinion, what this tells us is just the natural boundary of our conceptual tools; that bumping up against this boundary tells us something deep about nature (e.g. that there is no free will) is then just human hubris, a confusion of the maps we make with the territory we survey, and even in a sense a certain sort of power of the former over the latter.
Two things:
- No such “prime directive” exists. If any did, it would be reproduction, not personal survival, that was prime.
- Even if it did, humans very obviously override that directive all the time, from outright suicide to the slow suicide of self-abusive behaviours.
It can’t, and I don’t think “there’s no free will” is the right conclusion even if it could.
Firstly, on the practical thing: we’d need to somehow measure the states of countless molecules in the brain without interfering with their position or velocity. And, since such a complex electrochemical system is almost certainly chaotic, we’d need to know that data with incredible precision for the state machine not to drift immediately.
It actually makes predicting which balls will fall out of a lottery machine look trivial in comparison, and it’s not physically possible under our current understanding of physics.
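Just to give a feel for how quickly that drift happens, here’s the standard toy demonstration in Python (a logistic map, nothing like a brain, but it makes the point about chaotic sensitivity):

```python
# Two copies of the chaotic logistic map x -> 4x(1-x), whose starting
# states differ by only one part in a trillion.
x, y = 0.4, 0.4 + 1e-12

for step in range(1, 61):
    x = 4 * x * (1 - x)
    y = 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.9f}")

# Within roughly forty to fifty steps the two trajectories bear no resemblance
# to each other. A brain-state analyser faces the same amplification of error,
# only with vastly more variables and far less measurement precision.
```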
But secondly, what if we do succeed? Well, what we’ve essentially made is a copy of “you”.
We’ve made a brain complete with your preferences, your memories, everything about you up to that precise moment. And then presented that brain with a situation identical to the one you are being presented with.
For me personally, I have no problem with the idea that if someone had the godlike powers to do all that then…it’s a fair cop, my choice of coffee or tea will be the same as Identical Mijin’s. It doesn’t bother me any more than hearing that an entity with the ability to time-travel looked ahead and saw what I would choose.
Well that would be my position.
In terms of resurrecting the concept, I don’t see that that’s helpful.
It’s like if we redefined “vitalism” to just mean the breakdown of ATP or something. We’d be using the word to mean something that wasn’t originally intended by it, and not what most people understand it to mean. Which could only cause confusion.
Or rather, put it this way: if people understood and acknowledged the problem with the standard framing of free will, namely that it seems to require something both causally linked to and unlinked from the universe, then I’d have no problem with advancing beyond that to a new idea of what free will could be.
But we’re just not there yet, and at this point I doubt this particular question will ever advance usefully (though it may one day be relegated to an “angels on the head of a pin” question that people don’t think about any more, after we have advanced AI, a more detailed understanding of the human decision-making process, etc.)