Are you a One-Boxer or a Two-Boxer?

The predictor is perfect. He’s not predicting what box you’ll come into the studio thinking of choosing; he’s predicting the box(es) you actually choose at the critical moment. If you decide, even at the last second, to choose both boxes, then the perfect predictor’s prediction will have always been that you’ll choose both boxes.

By overthinking it, you net $1,000 rather than $1,000,000.

If you take only box B, one of the following must be true:

  1. The Predictor is really accurate, in which case s/he knew in advance that you would take only box B, and you get $1,000,000.

  2. The Predictor is inaccurate and thought you would take both, in which case you get $0.

We’ve been told the Predictor is (almost) always accurate …

Sure, but at some point the prediction is made. At that point, whether or not there’s a million under box B, I should take both boxes. If the predictor is perfect, then I will never take both boxes after he has put the million under one. But taking both at that point doesn’t hurt me. If he’s perfect, he will only successfully reward those who needlessly turned down $1000. Turning down $1000 doesn’t cause the $1,000,000 to appear at that point.

Consider the equivalent problem without the boxes. The predictor gives $1,001,000 to the people he thinks will give back $1000. Does giving back $1000 benefit them in any way at that point?

Again, the paradox comes in because in real life a Predictor isn’t really possible.

In this hypothetical, if you are the sort of person who reasons thus, there will never be $1 million in box B. If you are the sort to take only box B, there is always $1 million in box B.

Once the contents of the boxes are set, the fate of the $1,000,000 is already determined. At that point, I am only determining what happens with the $1,000.

See my question to Inner Stickler. What if the money ($1000 or $1,001,000) were in your hands, not under the boxes? Would you give back $1000 at that point?

For you, and for anyone who looks like they’d think like you, the million isn’t there. He saw you coming a mile off. He’s got your number. BTW, he just loves guys like you, you make his whole day.

No free will required. As Antibob pointed out above, you’re better off being the kind of person who picks one box even if he’s not perfect, even if he’s only right most of the time.
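To put rough numbers on that (the only assumption here is the predictor’s accuracy p; the payouts are the ones from the problem): one-boxing has the higher expected payoff whenever p is above about 0.5005, i.e. anything even slightly better than a coin flip.

```python
# Expected payoff of each strategy if the predictor is right with
# probability p (p = 1.0 is the perfect-predictor case).
def expected_value(p):
    one_box = p * 1_000_000                      # right: $1M in B / wrong: B is empty
    two_box = p * 1_000 + (1 - p) * 1_001_000    # right: B is empty / wrong: $1M in B
    return one_box, two_box

for p in (1.0, 0.99, 0.9, 0.6, 0.51, 0.5):
    one_box, two_box = expected_value(p)
    print(f"p = {p:.2f}   one-box: ${one_box:>12,.0f}   two-box: ${two_box:>12,.0f}")
```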

In reality, I doubt anyone would be better than a coin toss.

Seriously: if you saw the game played a number of times, and everyone who picked two boxes walked out with $1K, and everyone who picked one box walked out with a cool million, which would you pick? Two boxes? REALLY?

The problem is you’re not buying the premise (that the predictor can spot you and know that you’re just smart enough to pick two boxes but not smart enough to pick one, or stupid enough to pick one).

That isn’t really the same problem, because it lacks uncertainty. You cannot treat money you don’t know is there the same as money you do know is there.

Only in that I’m the type of guy who would rather simply take $1 million than try to game the system and get $1.001 million. If the predictor really knows the contestants, as it must to have that kind of track record, it knows this about me.

Bad analogy alert: I’m sitting at a railroad crossing, and the gates come down. Anyone who knows me, even if they aren’t perfect predictors, could make a good living betting I’m not going to purposely drive in front of the train for the slight possibility of getting to my destination one second earlier. I’m just not going to do it – I’ll sit and wait for the train to pass, even if I’m running late.

I still have the free will to cross the tracks, but I’m not going to do it; it’s not worth the risk. Neither will I risk trying to game the predictor and pick both boxes. If the predictor knows this about me, I’ll walk away with my cool $1 million.

I’m going to argue that the uncertainty doesn’t matter, because you do the same thing whether he hands you $1000 or $1,001,000, namely, you keep it all.

The only difference is you believe that nothing you do can make that $1,000,000 disappear once you have it in your hands. Whereas, even once you know the predictor has put whatever money in the boxes that he’s going to put, you still seem to think that it could change based on your subsequent actions.

Again, it’s only a risk up until the contents of the boxes are set. After that, there’s no risk.

I’m not saying anyone will beat the system. I’m saying there was a point where those who got $1,000,000 could have gotten $1,000 more and they needlessly passed it up. Maybe the predictor is perfect at knowing who will do this – that doesn’t mean it was the right decision.

By your logic, you will never have a million dollars in box B.

You’re wrong that I’m rejecting the premise. I am acknowledging the predictor would know what I was up to, and I’d only get $1000. But at that point, after he decided not to give me $1,000,000 but before I made my choice, there was nothing I could do anymore to get that $1,000,000.

Likewise, after he decided to give you $1,000,000, there was nothing you could do anymore to lose it. As surely as if you had it in your hands. So you should have taken the $1000 too, but you didn’t.

So then I should definitely take box A as well.

If you do have $1,000,000 in box B, at that point you should also take box A as well.

Consider the point at which the contents of the boxes have been set.
Consider a person who, at that point, has $1,000,000 in box B.
At that point, what should he do? Not what will he do, what should he do?

If the predictor is perfect, he will take only box B. But he should take both boxes.
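Laid out state by state (a trivial sketch, just to make the dominance explicit):

```python
# Dominance, state by state, once the boxes are set:
# whatever box B holds, adding box A is worth exactly $1,000 more.
for box_b in (0, 1_000_000):
    print(f"Box B holds ${box_b:>9,}:  only B -> ${box_b:>9,}   A+B -> ${box_b + 1_000:>9,}")
```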

And this is what makes this problem interesting. I claim there is no such point, and that you’ve added it to the original problem.

It is now season thirteen of the inexplicably long-running “Give me some Money” game show. Because the game is quick, roughly 10,000 people have played it, inexplicably splitting about half and half between one box and two boxes.

In all of those games, people who chose two boxes have left with $1000, and those who chose one box have left with $1M. There have been no exceptions.

This is the state that’s proposed by the original question. What should your choice be? I’d take just B in a heartbeat.
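If it helps to see it as data, here’s a quick simulation of that track record. The 99.9% predictor accuracy is just my stand-in for “no exceptions,” and the half-and-half split of players matches the setup above.

```python
import random

# Simulate 10,000 plays of the game show with a near-perfect predictor.
random.seed(0)
payouts = {"one-box": [], "two-box": []}
for _ in range(10_000):
    strategy = random.choice(["one-box", "two-box"])
    if random.random() < 0.999:
        prediction = strategy                                   # predictor gets it right
    else:
        prediction = "two-box" if strategy == "one-box" else "one-box"
    box_b = 1_000_000 if prediction == "one-box" else 0         # B is filled based on the prediction
    payouts[strategy].append(box_b if strategy == "one-box" else box_b + 1_000)

for strategy, results in payouts.items():
    print(f"{strategy}: {len(results)} players, average ${sum(results) / len(results):,.0f}")
```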

First off, assuming no trickery, this situation cannot exist under the current laws of our universe. The “two-boxer” logic is sound, given that there exists some time after the boxes have been set that allows a “switch.” But that switch never happens, so one of our assumptions is wrong.

The easiest one to drop is (time-)directional causality. The amount placed in the box is caused by the final choice; i.e., the “effect” precedes the “cause.” This is the condition the OP was describing as “Magic or whatever.” If we’re going to allow the problem as stated (i.e., that the predictor DOES always get it right, or so close to always as to make no difference over thousands of trials), then sticking to the “but A + B is always better than just B” logic is just going to cost you money. You’re using a definition of causality that doesn’t apply to the problem. The one-box answer is correct.

(Incidentally, this has nothing to do with free will. You can still choose to take $1000 or a $1M as you like.)

Alternatively, you may be overly attached to the laws of your universe. In which case the answer is: “no such scenario can be built in our universe,” which is an equally valid answer.

The two-box answer is never correct, because the conditions that allow it could never have arisen given the problem as stated.

The premises of the problem specifically state that causality and free will are not violated. You are choosing 2 boxes because you reject those premises. Ergo, what I said before about 2 boxers rejecting the premises.

I am not rejecting either of those premises. In the post you quoted, I was saying that what RitterSport seemed to be saying in defense of 1 box appeared to me to be rejecting those premises.

TimeWinder, if we reject causality, then I agree with the one box solution. But I’m not convinced we need to reject causality.

If there is a point at which the contents of the boxes have been set, and nevertheless the chooser still has free will to choose either B or A & B (this is how I interpret the problem), then at that point the right answer for every player is always to choose A & B.

However, if the predictor hasn’t made any mistake, the players who had $1,000,000 in box B will always choose B alone. This is the wrong decision, because at that point they could have had $1,001,000. But they will inevitably fail to realize this.

The predictor is effectively determining who will make the wrong decision about the $1000, and rewarding them in advance with $1,000,000. The people, like me, who realize that there is no harm in taking the $1000 once the boxes are filled, walk away with just $1000. The people who fail to realize that there is no harm in taking the $1,000 walk away with $1,000,000, and so they are convinced they made the right decision. But every single one of them was guaranteed $1,000,000 at the point they made the decision, and had an option for a risk free $1,001,000. They made the wrong decision.

To put it concisely, once the boxes are filled, you are only deciding if you want an extra $1000 on top of what is in box B. There is zero harm at that point in taking both boxes.

But the predictor has preemptively given $1,000,000 to the people who he thinks will make the wrong decision, and pass up on the extra $1000.

So everyone should take the extra thousand when it gets to that point, but the ones who have already been slated for $1,000,000 won’t do it.

No, RitterSport was accepting the premises. So am I. I accept that such a setup cannot be built in our universe, because it would violate causality or free will. The premise of the problem is ‘ignore those; some magic makes it possible, despite the fact that it can’t happen in our universe.’ Fine, where does that lead? Taking only one box gives the greater return.