Are you a One-Boxer or a Two-Boxer?

Heck, suppose that right before you make your choice, I tell you the predictor picked B. Then clearly you should switch to A & B, right? (Assume I know the prediction and am 100% honest.)

Suppose I tell you he picked A & B. Again, I’m not lying. You’d still want to switch, right?

So no matter what I tell you, you should switch. So why did you need to wait for me to tell you?

Why would it preclude you from having free will?

If there is a train running down the tracks, even a lousy predictor can predict that I won’t try driving my car across the tracks, and can even let me know of the prediction. That I don’t drive my car across the tracks to spite the prediction doesn’t preclude free will at all.

It seems you don’t believe such a predictor can exist in the first place, so you try to treat the predictor as a telepath rather than a prognosticator.

Because you don’t know what he predicted. Whatever logic you jump through to switch or not switch your choice, you don’t know what his prediction was; and according to the premise in the OP he will always* guess correctly what your FINAL choice will be. So if you switch to A+B - that’s what he will have predicted, and you get $1000. If you don’t switch and stay with B, that’s what he will have predicted, and you get the million.
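
To make those two outcomes concrete, here’s a minimal sketch (Python, purely illustrative; the $1,000 / $1,000,000 amounts are the standard ones from the setup):

```python
# Payoffs under the OP's premise that the predictor always matches your FINAL choice.
# Box A (transparent) holds $1,000; box B (opaque) holds $1,000,000 only if
# the prediction was "B alone".

def payoff(final_choice):
    prediction = final_choice               # the premise: prediction == final choice
    box_b = 1_000_000 if prediction == "B" else 0
    box_a = 1_000
    return box_b + (box_a if final_choice == "A&B" else 0)

print(payoff("B"))    # -> 1000000
print(payoff("A&B"))  # -> 1000
```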

Yes, the causality is fucked, but we’re talking about a basically magical capacity to predict the future here. The Predictor’s choice of how to fill the opaque box is based on information from the future, effectively.

So, instead of an infallible predictor, we now violate the terms of the paradox and have a fallible predictor.

Beyond that, you’ve merely abstracted to a third fellow. Now the predictor is fallible (which still violates the notion that he is a predictor in the first place) and we now have an infallible middleman.

So, you would choose A&B in any case. What free will are you complaining about? If you’re always going to switch no matter what, you have no free will anyway - just the illusion of free will.

And if you keep changing the conditions and circumstances of the puzzle, in what way is your solution applicable to the original paradox at all?

I think this is the key difference between the one-boxers and the two-boxers. As far as I can tell the two-boxers think that once the prediction’s been made then future events can’t change it, which would be the case if we were dealing with a mortal predictor, whereas one-boxers think that a hypothetical infallible Predictor as specified in the initial scenario must have such perfect knowledge of the future that it’s functionally equivalent to being able to go back and change his choice, or he wouldn’t be a hypothetical infallible Predictor.

Yeah, more or less I think. The way I see it an infallible predictor is functionally equivalent to reverse causality.

If he can go back and change the contents of the box in response to the recipient’s choice, he isn’t making a prediction at all – the situation is equivalent to a mechanical device that lets you enter your choice via a switch and either 1) removes the $1000 transparent box and drops $1,000,000 into the (originally empty) opaque box, or 2) leaves both boxes for you to take as they are. In that case, selecting option 1 is obviously preferred, and does not involve any sort of logical convolutions.

As far as I understand, the reason the Predictor is a Predictor as opposed to a predictor is that he can know the chooser’s choice with perfect accuracy ahead of time and hence decide what to put in the boxes based on something that hasn’t happened yet. I can’t see how that isn’t functionally equivalent to time travel or your device. The existence of such a Predictor is inconsistent with either linear causality or the chooser’s free will, but I, and I presume the other one-boxers, are interpreting the situation as “assume a perfect Predictor, and whatever deviations from reality are necessary to make a perfect Predictor possible.”

If you believe that causality violation is involved, then yes, you should clearly pick box B, because changing to A & B would change the predictor’s prediction. However, my interpretation of the question is that the predictor is exceptionally good at predicting, but without being influenced by future events.

He is nearly infallible, but if I still have free will (which I do), then I still have the power to change to the opposite answer. If we believe he is 100% infallible, then either we’ve denied that I have free will (which you claim we haven’t), or we assume my decision can cause a change in his prediction, even though the prediction happened first. If it’s the latter (i.e., if you’re assuming causality violation), then I agree that you should stick with B. But that’s not my understanding of the question.

If you’re not assuming causality violation, then I don’t see how the existence of someone who tells me the answer affects the predictor’s fallibility. If he knows all, then he knew someone would tell me the answer. Which makes no difference anyway, because regardless of what they told me, I should switch to A & B. That’s my point. It’s not knowing what he chose that matters, it’s knowing he’s locked into a choice. Once I know he’s locked into a choice, then regardless of what he chose, I should switch.

As far as I can tell, the one-boxers are assuming he’s never really locked into a choice, either because they believe future events can influence the past (and if that’s the case, I agree with them), or because they’re misled by the claim that he’s nearly infallible into believing his choice is never locked in.

Like I said, I deny that my addition of “someone who tells you what he chose” (let’s call him the revealer) makes the predictor any more fallible. But sure, it’s an addition. My point, though, is to illustrate that regardless of what the revealer reveals, you should switch to A & B. Which proves you should have switched anyway, even if there were no revealer. That’s how it’s applicable to the original paradox.

My assumptions (about the original paradox) are:
(1) At some point you know that the predictor’s prediction has been made.
(2) The predictor can’t violate causality. Nothing you do after the prediction can cause the prediction to change.
(3) You have free will at all times. The fact that the predictor made a prediction does not mean it is no longer possible for you to choose the other choice.

Another two-boxer here. I can’t explain it better than tim314’s posts. The question boils down to "Would you like $X or $X+1000? I can’t tell you what X is, but I can tell you that X is a fixed amount, and cannot now be changed." To such a question the only appropriate response is that you would like the $X+1000.
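
Here’s a rough sketch of that dominance reasoning (illustrative only, and it deliberately treats X as already fixed, which is exactly the assumption the one-boxers dispute):

```python
# Dominance argument: treat the opaque box's contents X as already fixed.
# Whichever value X takes, taking both boxes pays X + 1000, which beats X.

for x in (0, 1_000_000):          # the two possible fixed contents of the opaque box
    one_box = x
    two_box = x + 1_000
    assert two_box > one_box
    print(f"X = {x}: one box -> {one_box}, both boxes -> {two_box}")
```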

It seems to me that the reason two-boxers are two-boxers is that they are rejecting the premise of a virtually infallible predictor, and think it can be fooled by changing your mind after making a one-box decision.

(4) The predictor predicted that you would feint one way and then switch to two boxes.

Do you think that my three assumptions above are incompatible with a virtually infallible predictor? Here they are again:

I’m not saying that I can beat the predictor. He may have anticipated what I would do, and chosen A & B. If he did, I should choose A & B and take the $1000.

He may also have anticipated that you were going to choose B, and chosen B. If he did, once his prediction is locked in, you should switch to A & B and take the $1,001,000.

Either way, once the prediction is locked in, you should choose A & B. Without denying my assumptions above, how do you get around this?

I’m not denying this possibility. Regardless of whether this is the case, once his prediction is made, I should switch to A & B. $1000 is still better than nothing.

Of course, before his prediction is made I should steel myself to choose B, swear up and down that I intend to choose B, etc., etc. But after the prediction is made, going with A & B is always better.

Arguably a still better solution would be to hire someone to kill me if I pick A & B, thus guaranteeing I will pick only B. The predictor would be smart enough to know this, and thus would predict B. But this requires me to act before the prediction is made, which is cheating as I understand the question.

After the prediction is made, I can’t do anything to affect whether I get the $1,000,000. I can only control whether I get the $1,000.

Not at all. Even if the predictor is infallible, the logic remains: 1) X+1000 is greater than X, and 2) X cannot be changed once the prediction is made. This is a solid reason to take both boxes. I would expect the predictor to predict that I would reason this way and leave the other box empty.

Because he’s a perfect predictor. When he made the choice doesn’t matter. He knows the box(es) that you will take at the moment of you taking them. You could flip between B and A&B 500 times. He won’t care about any of them but the final choice.

Look, you want a cool million. The predictor wants you to have a cool million. If we all agree that it’s the logical thing to do, don’t ruin it for the rest of us by leaving the possibility open that you’ll reach for that paltry additional $1000 (paltry by the standards of someone who just walked away with $1 million). It just confuses us and brings in that element of doubt for the predictor. Here, have a million dollars and that’s that.

More seriously, why do you think it’s impossible for the predictor to know that you’ll feint one way and then go another? The predictor must (somehow, magically maybe) know your personality and how you will behave. In fact, it’s so good at that that it’s right 100% minus epsilon of the time. So, people who think they will maximize their earnings by thinking they will choose B, but secretly or subconsciously plan on choosing A+B, will get $1000. But the rest of us are happy with the $1 million; the predictor knows we’re happy enough with that and won’t gamble it all by harboring plans to switch, and we get our $1 million.
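
For what it’s worth, here’s a back-of-the-envelope sketch of the expected payoffs given a predictor who is right with probability 1 minus epsilon (the 0.999 accuracy below is just an assumed example, not something from the OP):

```python
# Expected payoff for each final choice, if the predictor is right with probability p.
# One-boxing: $1,000,000 when he's right, $0 when he's wrong.
# Two-boxing: $1,000 when he's right, $1,001,000 when he's wrong.

def expected(choice, p):
    if choice == "B":
        return p * 1_000_000 + (1 - p) * 0
    return p * 1_000 + (1 - p) * 1_001_000   # "A&B"

p = 0.999                     # assumed accuracy, purely for illustration
print(expected("B", p))       # -> 999000.0
print(expected("A&B", p))     # -> 2000.0
```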

I just realized, this is the St. Crispin’s Day paradox, just wrapped in a different form.

The apparent syllogism runs:
Either A or B.
If A, then C.
If B, then C.
Therefore, C.
In the St. Crispin’s case, A is “We will win the battle”, B is “We will lose the battle”, and C is “It is better for us to be few than for us to be many”. In the boxes case, A is “The Predictor put a megabuck in the hidden box”, B is “The Predictor left the hidden box empty”, and C is “It is better to choose both boxes than to choose only the hidden box”.

But in both cases, despite the syllogism appearing to be valid, the conclusion is false: It really would have been better for Henry to have more men, and it really is better to take just box B. This is because, in both cases, the decision you’re seeking insight on will influence which of A or B will be true: Henry having more men would make it more likely for him to win the battle, and you picking to open only the hidden box will make it more likely for the hidden box to contain the million.

To be clear, I don’t think it’s impossible for the predictor to know this. He may very well know that I am going to “change my mind”.

The point is, once the prediction is made, there is absolutely nothing I can do to change whether I get the $1,000,000. (Subject to my assumptions about causality and such that I listed above.) Switching to A & B won’t cost me $1,000,000 at that point. Sticking with B won’t win me $1,000,000 at that point. The prediction has been made, and the $1,000,000 is under the box or it’s not.

At that point, after the prediction is made, the only thing I can affect is whether I get an extra $1000 or not.

If someone says “I have placed $1000 in box A. I may have placed $1,000,000 in box B, based on how likely I think you are to not also look in box A,” it doesn’t matter at that point how good a predictor they are of my behavior. The money is there or it’s not. At that point, I lose nothing by looking in both boxes.

Unless you believe my actions can change the past.