Are you a One-Boxer or a Two-Boxer?

These two, now that I’m done leg-pulling. The Predictor has been talked up as much better than 1000:1 - not 1000:1 against, but better than 1000:1 on - and at those odds I might as well punt the $1000 I’ve never had against the chance of $1,000,000. There is, as presented, no realistic chance I will lose, still less that I can outwit the Predictor.

Let’s say he’s 90% accurate. Are you really willing to bet a million dollars for a thousand dollar payoff when the odds are 10-1 against you?

It seems to me that this is a thought experiment about what it means to have an infallible predictor. If instead of the Predictor you have Joe Schmo who flips a coin or uses human intuition, then your decision after the prediction’s been made has no effect on what money goes in the boxes, and double-boxing is the optimal choice. Having a perfectly infallible predictor seems to imply that your decision can affect the past decision about what money to put in the boxes, hence making single-boxing the optimal choice. The idea mentioned elsewhere that the predictor is somehow near-infallible without invalidating the idea that the future cannot affect the past makes my head hurt. Perhaps considering how fallible the predictor would need to be for double-boxing to make sense would be more useful in understanding the causality chain we’ve created by assuming a Predictor?

Naively, if he’s 90% accurate, I’d say that:
Single box = 90% chance of $1M, 10% chance of zip - average payoff $900k
Double box = 90% chance of $1k, 10% chance of $1.1M - average payoff $110.9k, but I’ve got a feeling I’m missing a step.

For a problem with a finite state space like this one, there is no difference between almost sure and sure, just as there is no difference between 0.999… and 1.0. The place where almost sure becomes important is that an uncountable union of events with probability 0 can have positive probability (a countable union cannot).

As I see it, the only argument in favor of the two-box solution would have to be that such perfect prediction is not compatible with a universe where I have free will to make a choice independent of the prediction. But that fights the hypothetical.

I don’t understand how this is controversial. What’s the two-boxer argument?

I don’t understand how this is controversial, either. If you take both boxes, you get $1000 more than if you take just the one box. It’s too late for your decision to take both boxes to change the contents of the opaque box, so you can’t possibly lose anything by doing so.

Of course. If the odds are 10-1 against there being a million dollars in the other box, that’s all the more reason to take the sure $1000, since the decision to do so doesn’t change those odds (the boxes are already set up).

My favorite punch pattern. First a right hook, then a left jab. Others think it should be right jab, left hook. Controversy ensues!

You’re not missing a step, but your numbers are a bit off.

It’s a 10% chance of $1.001M, not $1.1M. So the average payoff is $101,000 (i.e. you always get $1000, and you get $1M 10% of the time).
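
For what it’s worth, here’s that arithmetic as a quick Python sanity check - just a sketch, using the thread’s prize amounts and the 90% accuracy figure:

    # Box A holds a visible $1,000; box B holds $1,000,000 or nothing.
    p = 0.9  # probability the predictor is correct

    # Single boxing: box B is full exactly when the prediction was right.
    ev_single = p * 1_000_000 + (1 - p) * 0

    # Double boxing: you always get the $1,000, and box B is full
    # exactly when the prediction was wrong.
    ev_double = p * 1_000 + (1 - p) * 1_001_000

    print(f"single box: ${ev_single:,.0f}")  # single box: $900,000
    print(f"double box: ${ev_double:,.0f}")  # double box: $101,000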

Let the prize amounts stand. The predictor is correct with probability p.

Then here’s the analysis:

Single box = p chance of $1M, (1-p) chance of $0. Expected payout = p * 1,000,000
Double box = a guaranteed $1k, plus a (1-p) chance of the $1M. Expected payout = p * 1,000 + (1-p) * 1,001,000 = 1,000 + (1-p) * 1,000,000

When are these two the same?

    p * 1,000,000 = 1,000 + (1-p) * 1,000,000

-> p = 1,001,000 / 2,000,000 = 50.05%

If the discrepancy is larger, e.g. $1 billion vs. $1000, the break-even value of p gets asymptotically closer to 50%, which only makes sense.
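
Here’s a small Python sketch of that break-even calculation (the break_even_accuracy name and the extra prize values are mine, purely for illustration):

    def break_even_accuracy(hidden, visible=1_000):
        """Predictor accuracy at which single and double boxing have
        equal expected payout.

        Single box:  EV = p * hidden
        Double box:  EV = visible + (1 - p) * hidden
        Setting these equal gives p = (hidden + visible) / (2 * hidden).
        A result above 1.0 means single boxing can never break even.
        """
        return (hidden + visible) / (2 * hidden)

    for hidden in (1_000_000, 1_000_000_000):
        print(f"${hidden:>13,}: break-even p = {break_even_accuracy(hidden):.6%}")
    # $    1,000,000: break-even p = 50.050000%
    # $1,000,000,000: break-even p = 50.000050%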

As for the broader problem, just stating that we should accept that the predictor is “almost surely infallible” defeats the purpose of the paradox, as other posters have noted. Just how the predictor gets to be that good is the real question.

But given these particular prize numbers, the predictor doesn’t even have to be particularly good. Just slightly better than a coin flip. If so, single boxing makes sense.

But if the prize amounts are relatively closer together, how much does it even matter? Instead of $1M, say the amount you can win is $10. Then single boxing can never make sense: no predictor, however accurate, justifies giving up a guaranteed $1000 for at most $10.

Even if the other box contained either $1000 or nothing, the predictor would have to be perfect for single boxing just to break even, since double boxing always collects the visible $1000 on top of whatever is in the other box.
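
Plugging those smaller prizes into the hypothetical break_even_accuracy sketch from above makes the same point:

    print(f"{break_even_accuracy(10):.2%}")     # 5050.00% - above 100%, so
                                                # single boxing never breaks even
    print(f"{break_even_accuracy(1_000):.2%}")  # 100.00% - only a perfect
                                                # predictor merely ties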

So the key again becomes just how the predictor manages to be right. If the mechanism for prediction is fallible, it has to be fallible to the point of being essentially no better than a coin flip before double boxing wins.

But that’s good, in a way. If the prediction is worse than a coin flip, the predictor can eventually figure this out and just switch to the opposite of its raw prediction, pushing its accuracy back over 50%.

Or maybe there’s some kind of feedback that prevents this sort of thing. Then you’re back to a coin-flip situation, and in the absolute worst case it barely matters which strategy you use. So single box in that case, too.

I pick A & B, because regardless of what the predictor has chosen, picking A & B results in me making more money. Even if you were planning to pick B, after the predictor makes his choice, you’re better off switching to A & B. The predictor can’t go back and change his choice.

One might argue that you were better off being the sort of person who would never pick A & B, because then the predictor would make a prediction of B. Essentially, the predictor is rewarding people who are predisposed to not make the rational decision. But even if you are such a person, you’d still be better off doing the rational thing after the predictor made his choice.

That said, if this were real, you’d be better off claiming on internet message boards that you’d pick B. Because the predictor could be reading them!

That depends on what the prediction is. If the predictor always expects people to take both boxes and therefore always leaves the opaque box empty, and is right often enough to satisfy the payoff criterion (because enough people agree with the logic “it’s too late for him to change the box contents, therefore $1000 plus X is better than just X”), it’s better to take both boxes.

You’re better off picking just B. A rational person ought to take a near-guaranteed million over a near-guaranteed thousand. Anyone who thinks otherwise is just rejecting the premise that the Predictor is very good at predictions, but there’s really no reason to reject that premise. Free will is in no way incompatible with predictability.

Can I see the math? Your argument appears to be not that it’s better but that it’s no worse.

If so, I’ll agree with that under the assumption the predictor is fallible.

But if the predictor is right just often enough, it devolves into a game where you only ever win $1000 by double boxing.

If the predictor is truly omniscient (or even merely distinguishable from flipping a coin), you win more, on average, by single boxing.

So again, the paradox comes down to how the predictor works. Can knowledge about a prediction of the future change the future (i.e. the prediction)? In the worst case, it comes down to a coin flip, and single boxing and double boxing produce nearly equivalent expected returns. In the best case, single boxing is a clear winner.

So long as the predictor is even slightly better than a coin flip, double boxing is not going to produce a better expected return. And even if the predictor is worse than a coin flip, the predictor’s strategy should be to output the opposite of its raw prediction (being consistently wrong on a coin flip is probabilistically equivalent to being consistently right on one). I can understand the psychological side of it, though. But I try to go by the numbers rather than my messed-up monkey intuition.
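
Here’s a quick Monte Carlo sketch of that inversion trick (the 30% raw accuracy and the trial count are made-up numbers, just for illustration):

    import random

    def single_box_average(accuracy, trials=100_000, invert=False):
        """Average payout for a single boxer against a predictor with the
        given raw accuracy. With invert=True, the predictor fills the
        boxes according to the opposite of its raw prediction."""
        total = 0
        for _ in range(trials):
            correct = random.random() < accuracy
            if invert:
                correct = not correct
            # The single boxer gets $1M when the (possibly inverted)
            # prediction comes out right, and nothing otherwise.
            total += 1_000_000 if correct else 0
        return total / trials

    print(single_box_average(0.3))               # ~ $300,000
    print(single_box_average(0.3, invert=True))  # ~ $700,000: a consistently
                                                 # wrong predictor, inverted,
                                                 # becomes a good one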

OK, the predictor just made his prediction (which is still concealed from you). Now I’m giving you the opportunity to change your mind. Tell me why you shouldn’t switch to A & B at this point.

If he picked B, you should switch to A & B.
If he picked A & B, you should switch to A & B.

I’ve just had a thought. If I were charged with filling the boxes with my own money, absent the Predictor’s superhuman powers and inhuman motivations, I’d leave box B empty and try to convince people that I was the Predictor with the powers and motivations described, so that people would pick just box B. If everyone saw through my charade, they’d rationally pick both boxes, and the observed results - chooser picks both boxes, box B is empty - would be identical to the situation where the Predictor predicts the chooser picking both boxes.

Is the predictor an actual predictor or a mind reader?

If a mind reader, you are correct. If not a mind reader and an actual prognosticator, you are incorrect.

You pick B, the predictor picks A+B, knowing you will switch to A+B. And you win $1000.

You pick B, the predictor picks B, knowing you will stick with B. You win $1000000.

If the predictor is actually any darned good at prediction, you shouldn’t switch.

It seems to me that you’re still talking, though, about whether or not I should decide to switch in advance.

Suppose the predictor has already picked B. Why shouldn’t I switch to A & B?

Because you can’t switch after you make your choice without the predictor predicting your switch. That is, if the predictor actually can predict what you’re going to do.

If he can’t predict that you’ll first pick B and then switch to A & B, then what kind of predictor is he?

Then the predictor is lousy at his job. I thought the point was the predictor was nearly infallible.

If the predictor has already picked B and you switch to A&B, the primary premise of the thought experiment is already violated - the predictor isn’t any good at prediction.

The counterpoint is that the predictor IS good at his job and knows (through his mystical prediction powers) you will switch to A&B ahead of time. The predictor picking B at all should not be a possibility, assuming he’s any good at his job.

Regardless of whether or not the predictor is nearly infallible, at some point he will have made a prediction.

We can split this prediction into two cases:

  1. He picked A & B.
  2. He picked B.

I am asking anyone to present an argument why, at that point, in either of those cases, I shouldn’t switch to A & B.

If you are saying that a nearly infallible predictor precludes me from switching to A & B in case 2, then it seems to me that you are denying that I have free will.