To be even more clear: I am not trying to game the system to ensure I get $1,001,000. I am saying “There is a point, after the prediction but prior to me looking in the boxes, where whether or not I get the $1,000,000 is a settled question. At that point, all I can do is decide if I want an extra $1000 on top of whatever is in box B.”
The latter is only true before the prediction has been made. After the prediction of whether you will pick box B is made, whether you get the million is locked in. You can’t change it with your subsequent actions.
In contrast, there is never a point where the outcome of the battle is locked in, prior to fighting the battle.
Either the Predictor predicted that you would change your mind and locked in that prediction, or he's not virtually infallible. You'll only ever get $1,000, never $1,001,000. Any other result violates the virtually infallible part of the hypothetical.
If he predicted that I’d change my mind, then it’s a good thing I did, because otherwise I’d get $0. If he didn’t predict that I’d change my mind, it’s a good thing I did, because I get $1,001,000 instead of $1,000,000. Once the prediction is made, I should stop letting that influence my decision, because A & B is always better.
Here’s an equivalent restatement of the problem:
The predictor says “I have put $1000 in box A. I may or may not have put $1,000,000 in box B, based on X”.
Regardless of X, you can know for sure that at that point there is more money in boxes A and B together than box B alone. You are being misled by the claim of “a virtually infallible prediction” to believe that the contents of B can change. They can’t change at that point, and X + $1000 is always greater than X.
Not if X is Zero, which is what it will be if you choose both boxes.
In my statement above, X is a placeholder for the reasoning the predictor uses. In the original problem, X = "a virtually infallible prediction of what you will choose…"
My point is, it doesn’t matter. At some point, the boxes contain whatever they’re going to contain. It’s impossible at that point for one box to contain more than that box plus another box.
It sounds like you are betting against the hypothetical - that is, that you can outsmart the Predictor.
0=0
0+1000=1000
1000>0
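The dominance argument in that arithmetic can be sketched in a few lines of Python: once the boxes are sealed, box B holds one of two fixed amounts, and for either one, taking both boxes pays exactly $1,000 more than taking B alone.

```python
# Dominance argument: whatever box B holds once the boxes are sealed,
# taking A+B nets exactly $1,000 more than taking B alone.
for box_b in (0, 1_000_000):          # the two possible contents of box B
    b_only = box_b                    # payout for taking only box B
    both = box_b + 1_000              # payout for taking both boxes
    assert both == b_only + 1_000
    print(f"B holds ${box_b:,}: B only = ${b_only:,}, A+B = ${both:,}")
```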
However you posit the predictor getting the success rate that it has (by checking your phone metadata and e-mail traffic, contacting your family, whatever), the fact remains that it has a success rate of 100-epsilon percent.
So, whatever the non-causality-violating method it has, when I show up it thinks “That RitterSport seems like a nice simple guy. He will just choose B.”
When you show up, it thinks “That tim314 is too clever by half. He looks like the type to think B is the right choice, then waffle a bunch of times, finally choosing A+B.”
Basically, I think the 1-boxers are implicitly denying causality even if they think they aren't. You're saying "You shouldn't open box A (even after both boxes are filled with whatever they're going to be filled with), because if you do, then box B will be empty." If you believe the future can't affect the past, then once the boxes are filled you shouldn't worry about the impact your choice has on the contents of box B, because it can have no impact.
If you’re the kind of person who thinks this way, you’ll end up with $1K. Go ahead and think that way, though! It makes the game more fun for the rest of us.
See the above. You’re dead right in your logic, but your logic nets you $1K. I walk out with $M. Enjoy!
Bingo.
Bingo.
Let’s look at the last 10 contestants, and see how they did!
5 were the type who figured X + 1000 > X and picked two boxes. How much did they leave with? $1K each
5 were the type who figured that picking B only would net them $1M. They all walked out with $1M.
Which category do you want to be in? There are no other categories, based on the premises. So, there are the guys who get $1M and the guys who don’t. I know which I’d choose.
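The "last 10 contestants" tally scales up in a quick simulation. Here's a hedged sketch, assuming (since the original doesn't specify one) a predictor accuracy of 99.9%: one-boxers average close to $1M per game, while two-boxers average close to $2K.

```python
import random

def play(one_boxer: bool, accuracy: float, rng: random.Random) -> int:
    """One round: the predictor guesses the player's choice with the
    given accuracy (an assumed parameter), then fills box B accordingly."""
    predicted_one_box = one_boxer if rng.random() < accuracy else not one_boxer
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if one_boxer else box_b + 1_000

rng = random.Random(42)
trials = 10_000
one_box_total = sum(play(True, 0.999, rng) for _ in range(trials))
two_box_total = sum(play(False, 0.999, rng) for _ in range(trials))
print(f"one-boxers average ${one_box_total / trials:,.0f} per game")
print(f"two-boxers average ${two_box_total / trials:,.0f} per game")
```

With 10,000 trials per strategy, the one-boxers come out far ahead, matching the "last 10 contestants" observation above.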
OK, fine. I accept that as a stipulation of the problem. I am saying, there is a point where he has made that decision, and filled the boxes.
At that point, the $1,000,000 is off the table for me. So I should clearly also open box A, and get the $1000.
As for you, the $1,000,000 is in box B. So at that point, if you open both, you get $1,001,000, which is even better.
You are either saying:
- RitterSport can’t choose to open both at that point. This denies your free will.
- If RitterSport chooses to open both at that point, the $1,000,000 ceases to be there. That denies causality.
If you reason this through, your point is fighting the hypothetical. Of course the hypothetical is absurd, but what the hey.
Of course the Predictor can’t change what’s in the boxes, but we know he uses some magic or juju to be very nearly infallible in his, her or its predictions as to what you will do.
If you are the kind of person who reasons as you do, the Predictor will know of it and all you can ever get is $1000.
Put it this way: if you pick only box B and it is empty, the Predictor is fallible.
There are 2 ways of looking at it:
- You **can't ever, ever, ever** outsmart the predictor. This is identical to a complete lack of free will, and yes in this case you should take only box B. But have you really decided anything?
- Nobody has ever outsmarted the predictor. You personally believe his chance of "reading" you is below 100%, possibly as high as 99.999999999%. This defaults to the assumption "there are now 2 boxes here, I'm taking box B regardless, I cannot now change the contents of box B, would I like an extra grand?"
If he opens both, then there was never a million in it in the first place.
All you are really saying is that the Predictor can’t really be the Predictor. Which is true, but we are taking it as a fact that it is.
At some point, the million is there, or it’s not.
Does he still have the power to make a choice at that point? If so, the choice can’t make the million go away.
Loved your math, thanks! Made perfect sense. But I didn’t follow this last bit.
If the “predictor” uses a coin flip, I’m picking both boxes, regardless of the (nonnegative) prize values. Am I wrong?
I’ve never said “The predictor will be wrong”. At some point, the predictor has made his choice. At that point, the better choice for me is to take both boxes. You can say, “Well, you won’t ever make that choice if the $1,000,000 is in Box B.” But that’s a statement of what I will do, not what I should do.
If the $1,000,000 is in box B, I still should take both boxes. If the predictor is infallible, then for some reason I won’t take both boxes. But that’s the wrong choice.
If the $1,000,000 is not in box B, I should also take both boxes. And if the predictor is infallible, then I will.
As much as anybody does in this universe. Free will may not exist anyway.
Not really. By taking that extra box, you are potentially going from $1M to $1000 only.
You are banking on the 0.0000000…1% chance the predictor is wrong and still leaves $1M in box B.
There are four cases here:
Case 1: Predictor correct, you choose B - Net $1M
Case 2: Predictor correct, you choose A+B - Net $1000
Case 3: Predictor incorrect, you choose B - Net $0
Case 4: Predictor incorrect, you choose A+B - Net $1,001,000
You are focusing only on cases 3 and 4. The other side focuses only on cases 1 and 2.
But in trying to argue for the “extra” $1000, you are now comparing cases 1 and 4.
And that’s the crux of the paradox. You have to make some assumption about the predictor. If you don’t like the idea of an infallible predictor, you are limited to cases 3 and 4. If you imagine such a predictor exists, you are limited to cases 1 and 2.
The arguments that are the most fallacious in this thread are the ones trying to mix the cases, e.g. comparing case 1 to 4.
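The four cases above can be folded into a straightforward expected-value calculation, with the predictor's accuracy as a free parameter p (an assumed stand-in for "virtually infallible"; nothing in the thread fixes its value). The break-even point is p = 0.5005: above it, one-boxing wins in expectation; at p = 0.5 (a coin-flip predictor), two-boxing wins.

```python
def expected_value(one_box: bool, p: float) -> float:
    """Expected payout given predictor accuracy p, using cases 1-4 above.
    One-boxing: case 1 ($1M) with prob p, case 3 ($0) with prob 1-p.
    Two-boxing: case 2 ($1K) with prob p, case 4 ($1,001,000) with prob 1-p."""
    if one_box:
        return p * 1_000_000
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.5005, 0.9, 0.999):
    print(f"p={p}: one-box EV=${expected_value(True, p):,.1f}, "
          f"two-box EV=${expected_value(False, p):,.1f}")
```

Setting the two expressions equal gives p × $1M = $1,001,000 − p × $1M, i.e. p = 0.5005, which is why both the coin-flip intuition (take both boxes) and the near-infallible intuition (take only B) are internally consistent.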