Are you a One-Boxer or a Two-Boxer?

. . . This isn’t a porn thread, is it? :frowning:

Hang on –

I was merely clarifying that my decision would depend on the nature of the Predictor. The method of its predicting is extremely relevant. In fact, the scenario states that I am convinced of the Predictor’s ability. That implies that I have some knowledge about how the Predictor operates, because otherwise I would not be convinced. If the Predictor’s miraculous abilities are dumb luck, then I’m a two-boxer, because future performance is independent of previous performance. If they are the result of a brain-scan, I’m a one-boxer, because I don’t think I can truly intend to do an action, but willingly do its opposite, without an external agent changing the terms of the situation. (That is, determinism is not a prescription for what we should do, but rather a description of what we do do.) Wizardry: logic goes out the window.

Perhaps the existence of such a Predictor is not merely impossible, but in fact illogical, due to the combination of quantum indeterminacy and the butterfly effect. In such a world, you might as well ask me how I would go about drawing round squares while forbidding me the option to deny their existence, because you’ve given me contradictory assumptions.

Quantum indeterminacy and the butterfly effect aren’t necessary. In a world without these, you still couldn’t draw a round circle, and the predictor still has the same issues.

Um, by which I mean “couldn’t draw a round square”. Sorry.

Suppose there exists a shape called a roundsquare, which is both perfectly round and perfectly square. For the sake of the hypothetical, you can’t deny that this shape exists. To do so would be to deny the hypothetical, and that’s not allowed. Give me detailed instructions on how to draw a round square.

There’s no “correct” or logical way to approach the above scenario, because the assumptions are contradictory. Likewise for the perfect Predictor.

A perfect Predictor needs an input in order to make its prediction. That input is some combination of the state of my brain and the state of my surroundings. Call this combined state S. Which box I choose also depends on S. Because both the prediction and my choice depend on S, they are highly correlated for some time after the Predictor makes the prediction. However, because S can only be known to finite precision, the prediction and my choice will become less and less correlated as time progresses. In other words, the accuracy of the prediction decreases as time goes on.

How quickly does the prediction lose accuracy? It doesn’t matter. We’re constraining the prediction to an accuracy of “almost certain”, and so the prediction loses this level of accuracy after an infinitesimal amount of time. The perfect Predictor is not merely improbable, but impossible. The assumptions of the hypothetical are contradictory, and so logic doesn’t work.
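A toy illustration of the finite-precision point (a sketch only: the logistic map is a stand-in for any chaotic system, not a model of brains or the actual Predictor). Two copies of the system that start a hair apart diverge until the “prediction” carries essentially no information:

```python
# Sketch: finite-precision "prediction" of a chaotic system.
# The logistic map x -> 4x(1-x) is chaotic: a measurement error of
# 1e-12 roughly doubles each step, so the predicted trajectory
# decorrelates from the true one after a few dozen steps.

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

true_state = 0.3
predicted = 0.3 + 1e-12   # the Predictor's measurement of S, off by a hair
max_error = 0.0

for step in range(60):
    true_state = logistic(true_state)
    predicted = logistic(predicted)
    max_error = max(max_error, abs(true_state - predicted))

# By the end, the separation has grown from 1e-12 to order one:
# the "prediction" is no better than a guess at the state.
print(max_error)
```

The only escape is measuring S to infinite precision, which is exactly what the “finite precision” objection denies.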

Yes, but if that’s your conclusion, the two-boxers are still illogical. :wink:

I disagree with the analogy. This hypo doesn’t require you to explain how the Predictor does it. All you must do is assume that he is the Perfect Predictor.

The prediction and your choice are one and the same. There is no “finite precision.” The precision is infinite (or very close to it). The accuracy of the prediction never changes; it is perfect.

How is it contradictory? I understand that such a thing is impossible in our universe, but it isn’t contradictory: The guy knows what you will end up choosing.

The Predictor is not choosing what you will choose when he finally sets the boxes, he is predicting your final choice. So you may be thinking, “I’ll choose only B” and the Predictor puts $0 in the box. You, thinking that the predictor is reading your mind, think you have a million bucks, so you pull the old switcheroo and choose both boxes and are shocked to find the million dollars not there.

He saw you coming. That’s what makes him the perfect predictor.

What’s so hard about drawing a round square? Pick four points on the Equator, at longitudes 0 degrees, ±90 degrees, and 180 degrees. Draw geodesics between adjacent points. Presto, a perfect square that’s also a perfect circle.

And likewise, what’s so hard about getting a success rate that’s greater than 50%? I could do that: I’ll just search the subject’s internet history to see if they’ve ever taken part in a discussion like this one, and if they haven’t, I’ll flip a coin.
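The base-rate trick above is easy to sketch in a simulation. (The 30% “findable preference” fraction is an invented number, and it assumes people actually follow their stated preference; any nonzero fraction of predictable subjects pushes accuracy above 50%.)

```python
# Sketch: beating 50% with a mundane strategy, per the post above.
# For subjects with a findable stated preference, predict it exactly;
# for everyone else, flip a coin.
import random

random.seed(1)

known_fraction = 0.3   # assumption: 30% have a discoverable preference
rounds = 100_000
correct = 0

for _ in range(rounds):
    if random.random() < known_fraction:
        correct += 1                       # stated preference: predicted exactly
    else:
        correct += random.random() < 0.5   # coin flip: right half the time

accuracy = correct / rounds
# Expected accuracy: 0.3 * 1.0 + 0.7 * 0.5 = 0.65
print(accuracy)
```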

I would totally watch this if it were a reality TV show.

At least if they guaranteed a two-boxer once per episode, so I could mock them on the internet the next day.

I flip a coin: heads I pick one box and tails I pick two. Let’s see the Perfect Predictor guess that.
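If the player genuinely randomizes, any guess made before the flip is independent of the outcome, so no predictor can beat chance on the coin itself. A quick simulation (the predictor’s per-round guessing rule here is an arbitrary stand-in; any rule that can’t see the future coin does equally badly):

```python
# Sketch: against a fair coin, prediction accuracy is pinned at 50%.
# The guess and the coin are independent random variables, so the
# fraction of correct calls converges to 1/2 regardless of strategy.
import random

random.seed(0)

rounds = 100_000
correct = 0

for _ in range(rounds):
    guess = random.choice(["one-box", "two-box"])               # predictor's call
    choice = "one-box" if random.random() < 0.5 else "two-box"  # player's coin flip
    correct += (guess == choice)

accuracy = correct / rounds
print(accuracy)  # hovers near 0.5
```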

Unfortunately for you, the predictor obviously saw this coming, and he doesn’t like cheaters, as per the OP:

Uh-oh, we’re at the giant font stage…
Anyway, picking B shows trust. Picking A+B shows greed.

This is the heart of the matter. At first reading, I didn’t understand the problem. Then I had to read some answers and reread the problem: “you know the rules” and “there is a Predictor”.

The two-boxer strategy argues that, regardless of what prediction the Predictor has made, taking both boxes yields more money. If you believe there is no Predictor, then two-boxing is the best strategy. The one-boxer strategy argues that taking only B is correct; otherwise the Predictor has made an incorrect prediction.

This is at heart the same as the argument between people who have faith in God and those who do not believe there is a God.

Imagine walking into the game show with everything the same, except that this “nearly perfect Predictor” is someone everyone talks about, and some have even said they’ve experienced, but for whom you’ve had no concrete evidence. You walk on stage and choose, ultimately, on whether you believe the Predictor exists. There’s only one right answer and it hinges solely on that belief.

[sub]I’m an Agnostic, which makes me a procrastinator, since I can’t make up my mind which way I believe, and can see it both ways ;)[/sub]

This is the part I don’t get.

If the Predictor really isn’t any better than a coin flip, then mathematically, the results are pretty similar whether or not you two box. So, why argue against one-boxing? It’s like arguing that “heads” is better than “tails”. It doesn’t make sense.

If you accept such a Predictor can exist, one-boxing is a clear mathematical winner.

Two-boxing is only mathematically a winning strategy if (a) such a Predictor exists and (b) you can somehow outwit the Predictor.

But that argument makes even less sense to me. It relies on accepting an unlikely premise in the first place (the existence of the Predictor) and then adding the further absurdity of tricking this magical Predictor.

I can see the argument that the Predictor can’t exist in the real world, so either strategy is equally valid. That’s a valid conclusion. But given that premise, I can’t see how anybody can justify the notion that one strategy should be employed over the other, as our two-box proponents do.
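For what it’s worth, the “clear mathematical winner” claim above can be checked with a quick expected-value sketch (assuming the standard amounts of $1,000 in box A and $1,000,000 in box B, and treating the Predictor’s accuracy p as a free parameter):

```python
# Expected value of each strategy against a predictor with accuracy p.
# Standard Newcomb amounts: $1,000 in box A, $1,000,000 possibly in box B.

def one_box_ev(p: float) -> float:
    # With probability p the Predictor foresaw one-boxing and filled B.
    return p * 1_000_000

def two_box_ev(p: float) -> float:
    # You always get A's $1,000; B is filled only when the Predictor
    # (wrongly, with probability 1 - p) expected you to one-box.
    return 1_000 + (1 - p) * 1_000_000

print(one_box_ev(0.5), two_box_ev(0.5))    # coin-flip predictor: 500000.0 501000.0
print(one_box_ev(0.99), two_box_ev(0.99))  # near-perfect: 990000.0 11000.0
```

At p = 0.5 (a coin-flip “Predictor”) two-boxing is ahead by exactly the $1,000 in box A; one-boxing overtakes it as soon as p exceeds 0.5005, and dominates completely as p approaches 1.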

I’ve tried with all of my might to understand why anyone would be a 2-boxer and I still don’t get it.

I guess I get the whole “if I pull a switcheroo at the last minute, there is some probability that I’ll get two boxes with money inside” scenario. But no matter how you slice it, that necessitates a Predictor who is capable of being outfoxed. This doesn’t make any sense given the hypothetical’s premise.

As long as it’s possible to verify how previous trials have played out, this is false. Faith has nothing to do with it.

For those who still don’t see why double-boxing makes sense:

Let’s say that the network, in order to improve the ratings a bit, decides to shake up its formula. Up to now, Box B has been covered with a tablecloth when the contestant is brought on. From now on, the show dispenses with that. Both boxes are now transparent from the get-go.

You’re on the show and walk on stage. Two scenarios are possible:

  1. You see box A with $1000, and an empty box B. In this scenario, double-boxing clearly makes sense.

  2. You see box A with $1000, and box B with a cool million. Would you still single-box in this scenario? The money isn’t going to magically disappear if you just grab both boxes.

The thing is, these are also the only two possible situations you can be in when box B is covered. In both cases double-boxing is the best strategy. Why should the tablecloth make a difference?

(If you still go ahead and single-box in the above scenario 2, would you at least not feel a bit like an idiot while doing it? I sure would. And I’m still a single-boxer.)

And please, people, don’t turn this into a religious debate. That’s for the thread on Pascal’s Wager.

And that’s the crux of the matter. As written, yes, the money is going to magically disappear if you just grab both boxes. For it not to, the predictor would have to be wrong for virtually every player, which is a contradiction of the specified conditions.

Yes, this is reverse causality, and it’s why so many people keep coming back to that. No matter how you word it, for the predictor to be real, you have to either have reverse causality or something that’s 100% equivalent to it.

Maybe more clearly: reverse causality is forced because the 2-boxer logic doesn’t work, empirically.

This whole argument is like arguing about whether a unicorn can be both all green and all red at the same time. You have to presuppose an impossibility to even start arguing.

Hmm… I’m unconvinced. At such a fine level, the universe is stochastic and therefore unpredictable. The price for demanding perfection is delving down to such a resolution.

It’s contradictory because

  1. The universe is unpredictable.
  2. An entity that can predict the universe exists.

Three responses:

  1. I’m not “thinking that the predictor is reading [my] mind”. Maybe the predictor read (past tense) my mind, but continuous mind reading is not a prediction. Similarly, watching the movement of the stock market in real time on the trading room floor is not predicting the market.

  2. People keep bringing up this switcheroo stuff. My concerns are still valid, even if I play the game with no intentions to pull a fast one.

  3. As an aside, if I two-box and discover the opaque box is empty, I’m going to think the Predictor is an asshole. He tried to get me to one-box, and I would’ve walked away with nothing!

In this modified scenario, one of two things would happen:

  1. Box B would always be empty because no human could pass up the temptation to take both boxes; the Predictor sees this and always leaves B empty. OR

  2. One day someone figures out the game and only picks Box B even if he sees both boxes filled and wins the money.

“Aha! No free will and reverse causality!” you say. No, you are free to pick both boxes; it’s just that if you were actually going to, the money wouldn’t be there.

But (under the rules we must assume) nobody can ever walk away with two boxes full of cash or else the Predictor is not perfect: something the hypo tells us is not so.

I think the example with God is a good one. If we have a hypo where you must assume the existence of God, it is not acceptable for an atheist to posit that God isn’t logical, he can be outsmarted, etc. (Same thing if the hypo said assume no God exists. A Christian can’t start quoting the Bible). The hypo sets the rules.