Are you a One-Boxer or a Two-Boxer?

Not quite.

As has been shown repeatedly throughout this thread, a perfect predictor isn’t even necessary.

A predictor that’s merely 80% accurate is good enough. In that case, what’s mathematically the best strategy on average? Still one-boxing.
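Here’s a minimal sketch of that arithmetic, assuming the usual payoffs (a visible $1,000 in box A, and $1,000,000 in box B iff the Predictor foresaw one-boxing):

```python
# Expected winnings under a predictor of accuracy p.
# Assumed payoffs: box A always holds $1,000; box B holds
# $1,000,000 only if the Predictor foresaw a one-box choice.

def expected_value(choice: str, p: float) -> float:
    if choice == "one-box":
        # With probability p the Predictor foresaw it and filled box B.
        return p * 1_000_000
    # Two-boxing: with probability p the Predictor foresaw it and
    # emptied box B; otherwise you collect the million as well.
    return p * 1_000 + (1 - p) * 1_001_000

for choice in ("one-box", "two-box"):
    print(choice, expected_value(choice, 0.8))
# one-box: 800000.0, two-box: about 201000 -- one-boxing wins on average
```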

Ok, says the two-boxer, let’s say we know the predictor is 80% accurate, so we try to fake the predictor out. Well, if the predictor is any good, your reaction should be built into the prediction. Not all the time but 80% of the time.

Ok, says the two-boxer, then how about if we sneak a peek after the predictor has made his choice and then flip choices? Ok, but how does that relate to the original problem? Extra information is extra, i.e. we’re no longer within the original terms of the paradox.

It’s a paradox of psychology or philosophy rather than a mathematical paradox. As a mathematical problem, it’s pretty darned boring and a maximizing solution presents itself pretty easily.

sigh

No two-boxer here has EVER mentioned “faking” the predictor out. That’s not the argument. The argument for two-boxing has never EVER been that they can one-up the predictor. Even if he’s only 80% accurate.

I agree that EMPIRICALLY and even in one train of logic, it is better to be a one-boxer. I am a one-boxer for sure!!!

But the two-boxer logic is NOT wrong. It does NOT fight the hypothetical. And no matter how many times you say it just ignores the perfect predictor, it doesn’t.

Mr. Shine’s comment about people coming by, looking at the boxes, and saying “oh, this person is definitely only going to choose B, because there’s a million in there, but they SHOULD choose A&B” is perfect. Now, of course, you’re going to come back and say “but if they did pick A&B, there never would have been money in B in the first place,” to which I say: yes, you’re just repeating the one-boxer argument over and over again, and not refuting the two-box argument whatsoever.

I think most of the one-boxers here *do* agree that the two-box argument holds up logically (and it does).

I also think that most of those arguing the two-boxer position would pick one box anyway (because they should).

Think about it this way: If the two-boxer argument were wrong, it would mean that we *did* live in a universe where a perfect predictor was possible. The argument that we don’t only works if you agree that two-boxing makes logical sense.

If you can have a perfect predictor, then something has to give: Logic, free will or causality.

If you want to keep all of those, then you can’t have the perfect predictor.

The hypo states that we have a perfect predictor. To come up with “logic” that says otherwise is the very definition of fighting the hypothetical. I contend that the two-boxer logic is mere repetition that the Predictor can somehow be juked and jived at the very end, or that somehow he *should* be juked and jived at the final microsecond before the choice is made. The perfect predictor can see if you will try to fool him at the last second.

So, even if you think: “I’m only going to pick one box, so I know that there is a million here, so I *should* just grab the other thousand,” then it was the chooser who was wrong. The chooser was NOT only going to pick one box (as he thought). He was going to pick two. The Predictor saw his game a mile away. That is why he is the PERFECT PREDICTOR. To say otherwise simply denies the existence of a perfect predictor. The hypo says he is one. We must stay within the constraints of the riddle.

Further, his prediction is one of the chooser’s FINAL and irrevocable choice. Not a prediction of what he will decide 10 seconds before choosing, or an hour, or a second. So this talk of the case being “set” and the prediction being “made” before the final choice is nonsense. The prediction is what the chooser will ultimately choose.

If we “what if”* it to death, then any problem has infinitely many answers.
*If we had a time machine, if the boxes were both transparent, if you were given the cash, etc.

I disagree. It doesn’t hold up. It only holds up logically in our universe because no predictor, no matter how good, can read minds. It’s possible to fool a mere mortal.

But the hypo states that we must assume that this guy is infallible, and if he is, then he will see you coming a mile away with the whole “best go ahead and take the extra thousand” thoughts.

The two-boxer argument isn’t that you *will* maximize your winnings by taking both boxes. Empirically, you won’t. It’s just that logically, you should.

That’s just what I’m saying. In a universe where a predictor is possible, logic, as we know it, seems to break.

Why? How does a person KNOW that he has a million in the hidden box? Because the predictor has predicted that he will only choose one box? Why would he think that after deciding to choose two boxes?

[QUOTE=Martian Bigfoot]
That’s just what I’m saying. In a universe where a predictor is possible, logic, as we know it, seems to break.
[/QUOTE]

I don’t think it does. It just makes the outcome known to the chooser based upon his choice: $1k for two boxes and $1M for one box.

I mean, it makes our previously known rules of the universe suspect when a game show host is infallible, but it doesn’t change logic.

Which, again, means that the Predictor’s prediction is effectively based on information from the future (your choice of one or two boxes).

Then causality goes out the window. Fine. You’re in a universe with retrocausality.

The paradox as originally stated, though, specifies that retrocausality isn’t involved, and the Predictor’s prediction isn’t caused by your eventual choice.

In that case, either free will or logic has to go.

If you keep your free will (you can still change your mind until the last second), then logic has to go, because at the last second the money in the boxes has already been set, and the logical way to maximize your outcome at that point is to take both boxes.

(But if you do that, empirically, you will end up with the worse outcome.)
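To put the two halves of that tension side by side, here’s a minimal sketch (same assumed payoffs as above): state-by-state, two-boxing is always $1,000 ahead, which is exactly the “logic” that fails empirically.

```python
# The dominance argument: whatever box B already contains,
# taking both boxes yields exactly $1,000 more in that state.
for box_b in (0, 1_000_000):
    one_box = box_b
    two_box = box_b + 1_000
    print(f"box B = {box_b:>9}: one-box {one_box:>9}, two-box {two_box:>9}")
# Two-boxing wins in every row -- yet against an accurate Predictor,
# the box-B-empty row is the one a two-boxer actually ends up in.
```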

It’s a bit like the project management triangle: “Fast, cheap, good - pick any two.”

We offer: A perfect Predictor, free will, forwards-only causality, and logic that works as expected.

Pick any three. You can’t set up a scenario where you have all four.

I only quoted this because I think it is the relevant point. The prediction isn’t “caused” by my choice.

By the definition of the omniscient predictor, he KNOWS my eventual choice; I didn’t cause him to make the prediction because of my last minute juke and jive; he knew I would do that. No retrocausality is there, nor has my free will been compromised. I am perfectly free to choose both boxes…but the predictor would have known that I would do that and I get my $1k.

The puzzle is only illogical in the sense that there is no such thing, in our universe, as a mind reader, or someone who could do better than a 50/50 job of guessing a last-minute switch. The only problem is, as has been said, denying the omniscience of the Predictor.

If he looked into a crystal ball and saw you make your choice in the future, then yes, that is your action causing his prediction, in the same sense that if I look out the window and see a cat, the cat being there is what causes me to say “there’s a cat”.

If he did a brain scan on you and figured out in advance what you would do, and he is always right, then you have no free will, in any commonly understood sense of free will (maybe you still have the illusion of free will, but your actions are, in fact, determined and can’t be changed).

Denying the omniscience of the Predictor is one of four “solutions” to the problem. But, as I stated before, at least one of the four “ingredients” has to go for you to make a functioning “prediction sandwich”.

I’m not sure what it would mean if we were in a universe with a Predictor, free will and forwards-only causality, but where logic didn’t give meaningful answers (that is, a universe where the reasoning employed to maximize payout just fails to maximize payout, and that’s simply the way it goes). I can’t really fully picture living in an illogical universe, or say for sure if there could be such a thing.

Another thing: I think the problem can be seen more clearly by simplifying a couple of points in the original problem. Instead of saying the predictor is “almost always” right, just assume that he is always right (otherwise he’s just a dude who’s decent at making predictions, which is a less interesting problem; I don’t think anyone has denied the existence of stockbrokers or weathermen). Also, instead of asking which box you want to pick, make the problem how to maximize your profit. Otherwise, you can say that you don’t need to use logic to pick a box. It’s wanting to walk away with as much money as possible that really causes a paradox.

Hang on, wait, I do know what it would be like to live in a universe where logic doesn’t work. It would be a universe with miracles and wizards (or gods).

To be clear, in the strict and limited sense that I’m using “logic” above, the statement “I’d pick box B because the Predictor is a wizard” doesn’t count as logic.

Although there’s nothing wrong with that statement otherwise. It’s the very reason I’m personally picking box B and walking out with a million, in such a universe.

Sure it does. In a universe where you are told (and must assume for the hypo) that there are wizards (insofar as wizards are perfect predictors), it is eminently logical to believe that the perfect predictor will be perfect.

Yes. Look, we actually agree, I just don’t think I’m using the word “logic” properly (but it’s the best one I could find). I’m just saying that the logical reasoning underlying the two-boxer argument (the contents of the boxes have been set, so taking both boxes always maximizes profit) simply doesn’t produce the right result anymore, even though the argument itself makes perfect logical sense.

In such a universe, where “a wizard did it” can override “this follows logically from the premises” as a way to make the right decisions, things like science, decision-making diagrams, and using the two-boxer argument to maximize profits, go out the window.

But you have a great point: I used logic to come to this very conclusion, and pick box B. So what does that mean? That’s where my brain starts to hurt.

NO IT DOES NOT! Sorry to scream. :slight_smile:

The boxes have been set, with perfect predictive power of whether or not you will take 1 or 2 boxes. So taking both does not maximize profit. If you take both, one will be empty: the Predictor saw that you would take both.

If anyone doesn’t believe that, then they are denying the predictive power of the Predictor and fighting the hypo.

Saying that the value in the unknown box is “X” is misleading. If you assume the Predictor can predict, then you know X. X is $1 million or $0. It only depends on your choice. It doesn’t affect free will. You can freely choose to sit on your paltry $1K.

Your choice doesn’t affect the past. The Predictor (by the very definition) knows what you will do.

*How* does he know? He can only know by

  1. your choice causing his prediction by way of retrocausality,
  2. your choice being determined in advance, and you not having free will, or
  3. just knowing, even though he has no logical way to do so, because he’s a wizard.

If there’s some fourth way he could know, I want to hear it.

To a one-boxer, no-paradoxer (like jtgain here), the following hypothetical is exactly the same:

No predictor. Now we have the Producer. Here’s how the game show works. You write down on a piece of paper what your choice will be, either A&B or B only. If you pick A&B, the Producer empties box B, and if you pick B, the Producer fills box B with $1M. You are then invited back to the show some time later, at which point the Producer shows everyone what choice you made, and you claim your prize.

To a one-boxer, no-paradoxer, this hypothetical is EXACTLY EQUIVALENT to the one with the Predictor.
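A minimal sketch of the shared payoff table (granting, as the one-boxer does, that box B’s contents always track the final choice; the payoffs are assumed as above):

```python
# If box B's contents always match the actual final choice, the
# Predictor game and the Producer game pay out identically.

def payoff(choice: str) -> int:
    box_b = 1_000_000 if choice == "B only" else 0  # Predictor or Producer
    box_a = 1_000 if choice == "A&B" else 0         # the visible box
    return box_a + box_b

print(payoff("B only"))  # 1000000
print(payoff("A&B"))     # 1000
```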

They are fighting the hypothetical.

The Predictor need not violate causality. One can observe a system and make predictions about its future state perfectly fine, in our Universe. Depending on the system and on the analytic tools available, one can sometimes make very good predictions of the future. With truly astounding (though still theoretically possible) analytic tools, one could even predict the future state of a human mind.

The problem with the two-box argument is that it assumes that there is some time after the boxes are filled but before the decision is made. If there were such a time, then one would be well-advised to change one’s mind at that time. But there isn’t. By the time the host of the show has given you access to the boxes, you’ve already started down the path of thought that will lead you inevitably to one decision or the other. The decision is, in effect, already made. And the Predictor used that already-made decision to fill the boxes.