Are you a One-Boxer or a Two-Boxer?

Do you think that after the boxes have been filled, your decision can still affect the contents of the boxes? If so, you reject the premise that the future can’t influence the past.

I maintain that after the boxes have been filled, it is always better to take A & B, since at that point it can no longer affect whether you get the $1,000,000.

Certainly, it is better prior to the boxes being filled to be the sort of person who will fail to realize that A & B is better. But after the boxes are filled, that ceases to be true.

To restate it:

The fact that the predictor is infallible means that in the cases where he put $1,000,000 in box B, the chooser won’t pick A & B.

It does not mean that in the cases where he put $1,000,000 in box B, the chooser should not pick A & B. In fact, everyone should pick A & B once the contents of box B are set.

1-boxers don’t recognize that there is a point where the contents of box B are truly set, and yet there is still a choice to be made. Because of the “infallible predictor” we have been told what choice will be made at that point. But that’s not the same as what choice should be made at that point.

The folks with $1,000,000 under box B should take A & B at that point. They won’t, but they should.

You keep typing that but as far as I can tell it’s irrelevant. Yes, the people who have a million bucks should take both boxes to maximize their winnings but it just so happens that everyone who takes both boxes turns out not to be one of the people who have a million bucks in box B.

The way I see it, you have two choices: either pick B or pick both. Forget about the predictor and when the money gets put in and all that. Statistically, if you pick B you will get very nearly $1,000,000, and if you pick both you will get very nearly $1,000. You can’t change the odds without invalidating the premise of the problem. So functionally it doesn’t matter when the money is put in and when the decision is made. There is almost no chance that picking both will give you more money than picking B alone, so picking B is the logical choice.
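
To put numbers on that, here’s a minimal simulation sketch (the 99.9% accuracy figure and the trial count are assumptions for illustration; the $1,000 and $1,000,000 prizes are from the problem):

```python
import random

# Assumed parameters: a near-perfect predictor and enough trials to
# make the averages stable. The prizes match the problem statement.
ACCURACY = 0.999
TRIALS = 100_000

def play(one_box: bool) -> int:
    """Winnings for one round: the boxes are filled from the prediction."""
    predicted_one_box = one_box if random.random() < ACCURACY else not one_box
    box_b = 1_000_000 if predicted_one_box else 0  # filled before you choose
    box_a = 1_000
    return box_b if one_box else box_a + box_b

for label, one_box in (("pick B only", True), ("pick both", False)):
    avg = sum(play(one_box) for _ in range(TRIALS)) / TRIALS
    print(f"{label}: average winnings ${avg:,.0f}")
```

With those assumed numbers, picking B averages about $999,000 a round and picking both about $2,000 – the “very nearly” figures above.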

The problem with this reasoning is that the predictor has irrevocably made his move, putting either $1,000,000 or $0 in box B. In either case, taking both boxes gets you $1,000 more than taking only box B.

But there is no such time: If there were, you’d expect about half the 2-boxers to leave with a lot more money. If a few thousand experiments fail to produce any instances where someone obtains both the million and the thousand, it’s strong evidence that the predictor is either cheating or inverting causality.
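
One way to put a number on that evidence (the trial count here is an assumption): if the predictor were merely guessing, about half the two-boxers should have found the million, so a perfect record becomes vanishingly improbable:

```python
# If the predictor were merely guessing, each two-boxer would have a 50%
# chance of finding the million in box B. N is an assumed count of
# two-box trials within the "few thousand experiments".
N = 1_000
print(f"P(zero two-boxers ever find the million) = {0.5 ** N:.3e}")
```

That probability is roughly 10^-301 – far past the point where “he’s cheating or inverting causality” becomes the sane conclusion.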

You can disagree, but that leads to a logical contradiction: for taking the two boxes to be a better answer, the predictor has to be wrong. The predictor not being wrong is an axiom here.

As written, the contents of the box are always dependent on an event that hasn’t happened yet. Which means your “after the contents of the boxes have been set” time never exists. And if you choose A&B at the end, you lose.

If that’s the case, then either causality is reversed, and you should take the B box, or the whole situation is impossible (the case in our world), in which case the choice can’t come up at all. Again: if we take the premise as written, the two-box solution is always the wrong choice. It violates intuition (and real-world logic) only because it’s a scenario dependent on circumstances that can’t arise. You’re being asked to take an impossibility (the magic predictor) as a given.

Take the whole one box/two box thing out of it, and consider the closest case that actually exists: You’re given a choice between two options, and a few thousand people have done it before you. Everyone who took option “Q” has won much more than everyone who took option “W”. Do you take Q or W?

Exactly. The people who have a million bucks in box B should take both boxes to maximize their winnings. But the predictor has only put people in that category who are extremely unlikely to do this. (The predictor has the power to sort everyone into either “extremely unlikely to take both boxes”, or “extremely likely to take both boxes”.)

The fact that they don’t defy the predictor and pick both boxes doesn’t cause them to get the $1,000,000. That would be backwards-in-time causality. Once the contents of the boxes are set, they would get the $1,000,000 no matter what they do. So they should take both boxes, but they won’t, because the predictor chose them well.

What they will do, based on the predictor’s infallibility, is different than what they should do.

The 1-boxers are thinking they’re getting the million bucks because they picked one box. Really they got the million bucks because they were predisposed to pick one box, but once the boxes were set and the million was guaranteed, they should have picked both boxes. They won’t, but they should have.

No, because the predictor is super good at predicting what people will do. At least as I interpret the scenario, the predictor is sufficiently omniscient and the world is sufficiently deterministic that the predictor has only a teeny tiny bit of uncertainty about what anyone will do. So it splits the whole world into two categories: “super likely to take both boxes” or “super likely to take one box”. At that point, it fills each person’s boxes.

Once we get to that point, it is in every person’s best interest to take both boxes (regardless of which of the two categories they’re in). But the people in the one-box category virtually always end up taking one box, and the people in the two-box category virtually always end up taking two boxes.

You get the $1,000,000 or not based on what the predictor determines you’re predisposed to do, but your choice itself only determines whether you get the $1,000. But the people who were predisposed to make the wrong choice (passing on the $1,000 when there’s no harm in taking it) were pre-sorted into the category where they’re rewarded with $1,000,000. So it looks like their choice was the right one – until you realize that the choice itself couldn’t affect whether they got the $1,000,000.
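
That claim is easy to check once the boxes are fixed – here’s the arithmetic as a tiny sketch (nothing assumed beyond the stated prizes):

```python
# Once the boxes are filled, box B holds either $1,000,000 or $0,
# and that can no longer change. Compare the two choices in each case.
BOX_A = 1_000

for box_b in (1_000_000, 0):
    take_b_only = box_b
    take_both = BOX_A + box_b
    print(f"box B holds ${box_b:,}: B only -> ${take_b_only:,}, "
          f"both -> ${take_both:,} (both is +${take_both - take_b_only:,})")
```

Whichever row you’re in, taking both boxes is exactly $1,000 better; the choice itself never touches the million.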

At least, that’s how I see it, assuming both the predictor’s accuracy and the fact that the future can’t influence the past. If we reject that second assumption, then that absolutely changes the answer.

Heh heh heh.

After having really thought about this, I am a one boxer. But I admit that the logic of the two boxer is essentially unassailable.

I cannot get over the fact that, in retrospect, anyone who chose box B and walked away with a million dollars SHOULD have chosen both boxes for that extra thousand. But of course, anyone who chooses both boxes will only ever walk away with $1000.

Empirically, the best choice is to just pick box B. I even think theoretically this is the best choice. But the logic behind picking both A & B is charming. Ultimately, though, you’re never ever going to walk away with $1,001,000 or $0; those are not actual options.

So, I’m a one boxer.

My claim is that a predictor that omniscient is exactly equivalent to reverse causality.

Or, if you prefer:
At some point in the game, you have to make an absolute, irrevocable choice (probably when you actually take the box(es)). At that point, the predictor makes his prediction and loads the boxes appropriately.

In this case it’d be obvious that you take only box B, but only because you understand the source of the predictor’s omniscience (he’s loading after the fact). My argument is that given a sufficiently omniscient predictor, these two games (loading before but with perfect prediction of the end result, and loading after with actual knowledge of it) are identical.

Let me get this straight. The goal of the game is to get more money. All of the people who make the ‘wrong’ decision, according to you, walk away with more money than the people who make the ‘right’ decision, according to you.

Shouldn’t that indicate that you’re wrong about something?

Let me propose a slightly altered scenario for tim314 and the other two-boxers.

It’s the scenario in the OP, basically - game show, choice of one box or both, you’re told that there’s an essentially infallible predictor, etc. It’s been done ten thousand times and the predictor is never wrong.

Here’s the twist. As a rational being in a universe built on presumably linear time, you just can’t accept that there’s a faultless predictor of the future. So you figure out the only other possible explanation for that perfect record: the hosts are cheating. If you pick B, you find the million that’s sitting there. If you pick both, some trigger is fired and the million dollars vanishes from the box thanks to a hidden panel.

So naturally, you’d choose B, right?

I think, based on a follow-up skim of the Wikipedia article, that Tim’s conception of the nature of the Predictor - not literally able to see the future, as that would require reverse causality, but close enough to always be right - is the one that Newcomb had in mind, and the one that makes the scenario paradoxical. It looks like a fragile paradox: if you assume too much strength for the Predictor’s superhuman Predictions, or allow economically rational people to force themselves into picking the one box (legal contracts stating that if they pick two boxes all their money will be donated to the Retirement Home for Sadistic Thought Experiment Characters), it collapses.

Perhaps, but I don’t think it’s significant. That is, even if we allow him to add this constraint, it doesn’t affect the results of the analysis on what the best strategy is.

Bingo! Me too.

No, they chose the correct strategy, which is to not be greedy and stick to their guns, based on the definition of the game.

Based on what definition of the word “should”? Because anyone who’s the type of person who’d do that walks out with $1K.

Right!

Wrong! You keep using that word, “should”. I do not think you know what it means!

OK, let’s fix that. Let’s say the predictor is only 90% correct: he’s just a really good guesser at whether you’re the cocky type who thinks he can outsmart the game, versus the conservative type who is happy with the cool $1M and doesn’t care about the additional $1K.

Let’s say the past record shows very clearly that he’s right 90% of the time, with no correlation between correctness and which answer was predicted (that is, he isn’t 90% correct simply because he always guesses B and 90% of players choose B).

What’s the best strategy for this game?

Tim’s argument still applies. But I’m still going to pick B, and I disagree with him that this isn’t what I should do.

This sidesteps the causality issue, but doesn’t really affect the results. If you’re not happy with 90%, then try 60%. My response is still the same: I’m not going to try to fool the guy into thinking I’m a one-boxer when I’m actually a two-boxer.
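
For reference, here’s a quick sketch of the expected values in those fallible-predictor variants (assuming his errors are symmetric, per the “no correlation” stipulation above):

```python
# Expected winnings against a predictor with accuracy p, assuming
# symmetric errors: he is wrong about one-boxers and two-boxers alike.
def ev_one_box(p: float) -> float:
    return p * 1_000_000                # right p of the time: B holds the million

def ev_two_box(p: float) -> float:
    return 1_000 + (1 - p) * 1_000_000  # wrong (1 - p) of the time: B is full anyway

for p in (0.9, 0.6, 0.5005):
    print(f"p = {p}: B only ${ev_one_box(p):,.0f} vs both ${ev_two_box(p):,.0f}")
```

At 90% that’s $900,000 against $101,000, and even at 60% it’s $600,000 against $401,000; the two strategies only break even when the predictor’s accuracy drops to 50.05%.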

No, that’s what’s fooling people. After the boxes are filled, your choice only affects whether you get an extra $1000. All the people who got $1,000,000 messed up this choice and missed out on the extra $1000.

The predictor preemptively rewards the people who he thinks are going to mess up and miss the $1000. But messing up and missing the $1000 isn’t what caused you to get that reward. (That would be time-reversed causality.) So you can’t credit that choice with getting you the $1,000,000.

If you think “I got $1,000,000 because I didn’t take both boxes”, you’re thinking about it wrong. You could have taken both boxes after the $1,000,000 was placed in box B, at no cost to you. The predictor just knew you wouldn’t. Being the sort of person who would pass up on $1,000 got you a million. Passing up on the $1,000 got you nothing.

To phrase it another way: Unless the predictor makes a mistake, no one who has the $1,000,000 under box B will take both boxes. But everyone who has the $1,000,000 under box B still should take both boxes.

The normal definition of should. I’m talking about what they should do once the boxes have been filled.

The distinction between what you “should” do and what you “will” do is critical here.

The predictor has effectively given a $1,001,000 to the people it knows will leave $1000 of it on the table. That doesn’t mean that, at that point, they still have any reason to leave $1000 on the table. At that point, once the boxes are filled, they have no reason to do that, because the predictor can’t take the money back. They should take all $1,001,000, but they will leave $1000 behind.

I want to reiterate this point: I do not think that anyone will outsmart the game and get $1,001,000. I am acknowledging that everyone will get either $1000 or $1,000,000 (assuming the predictor doesn’t make any mistakes).

But everyone who got $1,000,000 had the opportunity to get $1000 more. They were just very carefully preselected to be people who wouldn’t take advantage of that opportunity.

The predictor only gave the opportunity to get $1,001,000 to the people he knew would pass up that opportunity. That doesn’t mean their choice was right. Passing up on $1,000 when the $1,000,000 was already in the box gained them nothing.

I disagree with you: their choice was right.

Their choice was to be the kind of person who gets $1M, rather than to be the kind of person who gets $1K.

Or, put differently but same meaning: their choice was to use the strategy that nets them $1M rather than nets them $1K.

Seriously, what would you do?

The problem I have with the two-box answer is that it gives sub-optimal performance and only makes sense if you know the rules of the game.

Let’s say some Afghan woman tunes into the game show and can’t follow the explanation of the rules (because the show is in English). She would observe that every time somebody picks one box they walk out with a million bucks, and every time somebody picks two boxes they walk out with a thousand.

If she found herself on the game show (still not understanding the rules), she’d clearly pick the one box.

It’s only once you’re told the rules of the game that two-boxing becomes a reasonable-sounding option – and despite sounding reasonable, it still decreases your winnings, even though the “the boxes are already set” logic is really appealing.

It’s an odd case where knowing the rules can lead to worse performance than simply observing the trials.