Sorry, your analogy is a complete fail. Give it up. You think you picked a Bugatti, but you’ve picked a booger instead.
Taking my current empiricist position, no, I am not. I am considering that what is being called “choice” is somehow actually more of an “attribute” … a dimorphic trait. Being in the group with the trait to pick just one box is strongly correlated with the outcome of highest reward.
But even if there is a sleight of hand at work, like a magic trick … I would still prefer to be in that group.
And again, box B was full of one million dollars long before they decided to leave box A. Everyone takes box B; you are either a millionaire (or not) from the moment you decide to take part in the experiment; you just don’t know it until you open box B. There is a correlation but not causation.
The evidence indicates action #1 (buying a Bugatti Veyron) produces result A (being worth $10 million) and action #2 (not buying a Bugatti Veyron) produces result B (being worth less than $10 million).
We know the million dollars is in box B before box A is left behind. We don’t know all those Bugatti buyers were worth over $10 million before they bought it; maybe one of them turned their Bugatti into a billion-dollar TikTok influencer career.
Your choice is controlled and constrained by prior events, your state of mind, etc.
Those things can change what is in the box, because the computer can look at them: they’ve already happened.
Your choice does not change what is in the box. But you cannot make the choice to take two boxes without prior events making that predictable.
Even if we assume that the universe is not deterministic and that quantum events or libertarian free will mess up long-term predictions, this is like saying that microscopic imperfections in the surface of a billiard table impact the path of the ball. They absolutely do. You can measure that impact. But the fact that this impact exists does not prevent skilled billiard players from predicting where the ball will go with very high accuracy. Most of the time, those microscopic imperfections either don’t matter or average out over the length of the table, to where the ball still ends up where the player expected.
David Hume demonstrated in A Treatise of Human Nature and An Enquiry Concerning Human Understanding that it’s basically impossible to prove a causal link in any part of life, because you can never quite get at the Necessary Connection. All we have anywhere is very strong correlation.
You can’t say buying a Bugatti Veyron produced the result of having ten million dollars unless you can show evidence that people acquired ten million dollars after buying a Bugatti Veyron.
But this Bugatti Veyron issue is beside the point. Are you going to answer the question I’ve repeatedly asked? What makes you different from all the other people who were in the same situation and took two boxes? Why will you get different results than all of them did?
On the subject of the “accuracy” of this computer: 1,000 correct evaluations don’t mean very much. Even a very accurate model is going to fail on a long enough timeline when you’ve assigned it to a complex enough real-world problem. Part of my job is improving an email system that detects threats. It’s very accurate, but it still does not catch everything. Its false positive and false negative rates are very low, though. If you pumped 1,000 random email messages through it, it would probably score 100% on that set. But once you’ve put 1 million messages through it, you’d start to see your actual false positive and false negative rates.
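To put rough numbers on that (the error rate below is made up for illustration; it’s not from any real mail filter), here’s the arithmetic:

```python
# Back-of-the-envelope sketch with a made-up error rate: a predictor
# that is wrong 1 time in 10,000 will usually look perfect over 1,000
# trials, but its mistakes show up once you run a million through it.
error_rate = 1e-4  # hypothetical combined false positive/negative rate

for n in (1_000, 1_000_000):
    p_all_correct = (1 - error_rate) ** n  # chance of zero mistakes in n trials
    expected_errors = error_rate * n       # average number of mistakes in n trials
    print(f"{n:>9,} trials: P(all correct) = {p_all_correct:.3f}, "
          f"expected errors = {expected_errors:.1f}")

# Output:
#     1,000 trials: P(all correct) = 0.905, expected errors = 0.1
# 1,000,000 trials: P(all correct) = 0.000, expected errors = 100.0
```

So a thousand-for-a-thousand record is exactly what you’d expect from a merely very good predictor; it isn’t proof of a perfect one.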
So it’s not that I think I’m special. It’s that I know computers, and I don’t think that computer is special. It’s eventually going to be wrong, and maybe I’ll score as a false positive as a one-boxer. My decision now doesn’t make any difference anyway; grab both.
And you’d be correct that your decision in the moment did not make any difference, because, in this thought experiment, you are as programmed as any computer, and the computer was able to predict what your output would be even before you ran your lines of wet meat code. Therefore there was nothing in the opaque box.
Again, it’s easier for me to think of it as a trait, like smelling asparagus pee.
You think so? Remember, the computer as described so far has an equal chance of generating a false positive or a false negative. If it goes on long enough, I am fairly certain that eventually there will be a one-boxer who gets nothing.
Also, I explained this scenario to my wife and asked her if she thought I would grab both boxes. Even after I told her I thought two boxes was the correct strategy, she said I was too mercurial to predict in that moment. Sometimes I make fun of the lottery, and sometimes I go and spontaneously buy a ticket. Then I probably don’t even check the results.
Plus, I don’t need a thousand bucks. I technically don’t need a million, either.
Correct: so far, all we know is that its false positive and false negative prediction rates are at most very, very small. And yes, if at the last second a stray neutrino hit a critical synapse at just the right time and you chose one box, there would still be nothing in the box.
The action can’t possibly produce that result unless it can change what’s in the boxes.
Those are both basically ‘no free will’ arguments.
If I have no choice as to how many boxes I take, then there’s no sense in telling me how many I should take. Or indeed in posing this sort of puzzle at all.
No, they aren’t. The computer can be only 75% accurate (on average, with no statistically significant correlation between the number of boxes picked and the chance of an accurate prediction, etc) and it still makes sense to pick the closed box.
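A quick expected-value sketch of why, assuming the usual payoffs for this puzzle ($1,000 always in the open box; $1,000,000 put in the closed box only when the computer predicts you’ll take just the closed box):

```python
# Expected value at 75% accuracy, assuming the standard Newcomb payoffs:
# the open box always holds $1,000; the closed box holds $1,000,000
# only if the computer predicted you would take the closed box alone.
accuracy = 0.75  # same accuracy for either kind of chooser, per the post above

ev_one_box = accuracy * 1_000_000                # paid only on a correct prediction
ev_two_box = 1_000 + (1 - accuracy) * 1_000_000  # $1,000 plus the misprediction jackpot

print(f"one box:   ${ev_one_box:,.0f}")   # one box:   $750,000
print(f"two boxes: ${ev_two_box:,.0f}")   # two boxes: $251,000
```

On these payoffs, taking only the closed box comes out ahead whenever the accuracy beats about 50.05%, so 75% clears the bar comfortably.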
Just because your decision is predictable doesn’t mean that you don’t have free will. If I had a list of your last 300 meals, and the last 300 meals of a few thousand people, I could very likely use an algorithm to predict reliably whether you’ll pick Chinese food or Mexican when presented with both options. If I predicted right, that doesn’t mean you don’t have free will.
The essence of the hypothetical is “how many boxes do you choose?” Not, “How precise do you think a computer might be?”
No matter how precise a computer’s prediction, once you enter the room the dough in the boxes is fixed. If the computer has precognition (which I think the hypothetical makes plain), you choose one box. Why? Because the computer literally “saw” what you would do.
If the computer does not have precognition (which is ridiculous, given a thousand correct predictions in a row), you choose two. Because nothing you do at that point will change what’s in the boxes, so you might as well take both. Doesn’t matter if the computer without precognition has 99% accuracy. Just doesn’t matter.
That’s why I’m a two-boxer in what I believe is an impossible scenario—a computer with infallible precognition.
Speaking only of my take - it neither accepts nor denies “Free Will.”
I am offering a group of two-year-olds a cup of ice cream or a cup of Swedish salt licorice, and I correctly predict they will choose to eat the ice cream. Does that mean they had no Free Will?
I know in advance who has the variant of the TAS2R38 gene that correlates with extreme bitterness sensitivity, and therefore aversion, and correctly predict that that 25% will choose the vanilla ice cream over the dark, dark chocolate gelato for dessert. Did either group necessarily not have Free Will?
The program has, somehow, admittedly unbelievably, identified who has the attributes that go along with choosing one or both boxes in the precise circumstance of this game. You are making your choice. As much Free Will as ever. If ever. It just knows what it will be. Your “tell” showed before you entered the room.
Yup. And I’ve said in this thread (post 201) that if the computer’s using precognition, then it’s going not by anything in your history but by what you’re actually going to do, so in that case you take only the opaque box.
I just went back and looked at the OP, and the OP doesn’t actually specify whether the computer’s using precognition or previous history. I had remembered it as specifying previous history, possibly because there have been people arguing in this thread that the problem is people not believing that a computer could know enough about them to predict their reaction from past history.

That sort of argument is totally irrelevant if the computer’s using precognition, because in that case it doesn’t need to know a damn thing about you; it only needs to know what is going to happen in that room. And if it does know, while it’s filling the boxes, what you’re going to do, based on its being able to see the future, then you need to take one box: because it’s that future act that caused it, in the past, to fill the opaque box.

Of course, that makes a much worse mess of everything we think we know about how the universe works than a computer that only shows me ads for things I might buy. (While not having them blink or bounce, because one of the things current computers obviously don’t know about me is that the moment an ad starts moving I look away from it.)
And no matter how much it knows about you, if it’s not using precognition, what you do in that room can’t change what’s in the boxes; so in that case there’s no harm, and a possible advantage, in taking both.
And in that case, you take two, because what you do can’t change what’s in the boxes. The tell it’s going by has already happened.