Tim, I doubt this will change your mind, but consider the following. Suppose the Predictor is asked who will win the next Super Bowl. He/she/it says XYZ over whomever by a score of 24 to 17. Has the Predictor caused this result or only anticipated it? The former would be hard to justify. The game, after all, will be played on the field. Knowing the result in advance doesn’t cause the outcome. So, too, with Newcomb’s paradox. As others have said, it seems to me you’re fighting the hypothetical.
If you acknowledge that no one ever gets $1,001,000, then there is a 0% chance for anyone to get $1,001,000, which means, by definition, no one had the opportunity to get $1,001,000. You’ve contradicted yourself.
If you’re predisposed to pick B, you should stop analyzing it and pick B, because otherwise you’ll end up analyzing yourself out of a million bucks.
If you’re predisposed to pick A+B, you should wear an “I Love B” t-shirt and talk about B a lot and give the alien every possible impression you’re going to pick B, and then pick B.
Rather than framing this as violating causality, what about doing the opposite and assuming that it uses causality?
The Predictor is a machine consisting of some sort of super-MRI scanner and a very powerful computer. It has scanned your entire body including your nervous system and biochemistry to such a level that it can run simulations and predict your every reaction in a given situation. This happens in a controlled environment so that there are no surprises that would change the outcome that the simulations have determined.
Then, outside of the room, the money is placed in the boxes based on what the simulation has determined you will do. Then the boxes are brought into the room and the situation is explained to you (including the nature of the predictor and the scan that was done) and statistics of past experiments are shown and explained to you in a convincing manner.
All of this, both the introduction of the boxes and the explanations, is done by automated equipment in a manner exactly as was assumed by the simulations. It’s been shown by numerous repetitions that following this procedure results in correct predictions more than 99.999 percent of the time.
So we’re running a simulation based on your neuronal and biochemical state at the time of the scan; and how that state will evolve based on known external stimuli. The machine has already run the software that’s running in your head (it’s faster than you) and knows the outcome.
Knowing all of that, and knowing that the Predictor has taken your possession of that knowledge into account, what do you do?
It’s an interesting thought experiment, but the more I think of it, I agree with the people who say it’s something of a cheat. The Predictor is effectively a facilitator of reverse causality. Saying, “Oh, no, it’s at least in the realm of possibility that he’ll be wrong, though the odds are astronomically long and it hasn’t occurred yet” doesn’t really change that. It’s a bit of a non sequitur, really (and not just because it’s a fantasy hypothetical). It’s similar to saying, “Suppose there’s an entity that displays all the attributes of omnipotence. But, um, he’s not really omnipotent. By definition. But he still possesses all the essential attributes of omnipotence.” Uh, okay. That could create quite a tortured paradox, I suppose.
Suppose the Predictor could literally see into the future, and sets it up based on what he knows you will do–and, further, suppose you know that’s the case. You’d be a one-boxer, of course. It would get you a million bucks. Well, this hypothetical, for all intents and purposes, operates the same way. The Predictor is that accurate. Adding, “Oh, not so fast! Maybe this time he’ll be wrong” is, again, a cheat. The Predictor is supernatural; calling him natural doesn’t change that.
So, tim’s syllogism is absolutely right (strictly interpreting the details in this circumstance), but it defies the rest of the hypothetical and he’d be nuts to be a two-boxer. ISTM, again, that this is essentially a non sequitur and a cheat. I’ll tell you later if I change my mind, which has occurred multiple times already.
The two box answer assumes that you can somehow trick the Predictor by doing a feint at the end. The Predictor knows you will do this. He knows all. If you pick two boxes, whether by plan at first or on a lark at the last second, the Predictor sees that you will do this and there is no $1 million.
I don’t see why this is so difficult.
Do we have actual stats on the alien’s predictive abilities? If the results are along the lines of:
Of the people who pick B alone, 99% get $1 million, 1% get nothing.
Of the people who pick A+B, 99% get $1000, 1% get $1,001,000.
This isn’t a hard choice unless you make it hard.
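Taking those hypothetical 99%/1% figures at face value (they are the numbers quoted just above, not established stats), the expected payouts work out to a lopsided comparison. A quick sketch, tallied per 100 players to keep the arithmetic exact:

```python
# Per 100 players, under the hypothetical 99-right / 1-wrong stats above:
one_box_total = 99 * 1_000_000 + 1 * 0      # 99 get $1M, 1 gets nothing
two_box_total = 99 * 1_000 + 1 * 1_001_000  # 99 get $1000, 1 gets $1,001,000

print(one_box_total // 100)  # average per one-boxer: 990000
print(two_box_total // 100)  # average per two-boxer: 11000
```

On those assumed figures, one-boxing averages $990,000 per player versus $11,000 for two-boxing: a 90-to-1 edge.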
Indeed. If the Predictor is mortal, and has an accuracy rate of greater than 50%, you are better off only choosing one box.
Others have mentioned it, but the $1k is so low in comparison to the $1M that you can effectively consider it zero. Who in that situation with a guaranteed $1M would risk it for the near zero possibility of getting an extra $1,000?
I don’t even see the paradox. The whole “X vs. X+1000” argument assumes equal probabilities. It ignores the fact that the value of X changes (by an extreme amount) depending on your choice.
Is the Predictor correct 99.999…% of the time, or 100%? They are…(woo-hoo! here it comes…) not the same.
Sure. But as I showed earlier, the expected value means that, on average, the predictor doesn’t have to be much more accurate than 50% for the single box to have a better outcome.
It has nothing to do with “equal probabilities”. This is a non sequitur. All that matters is that the sequence of events, in chronological order, is:
[ul]
[li]Money is put in the box (or not)[/li]
[li]Box is sealed. No further tampering is possible.[/li]
[li]You pick up the sealed box.[/li]
[li]You decide if you want to also take a free $1000.[/li]
[/ul]
I’m confused about the Predictor’s role. What is the motivation of the Predictor? I see people here commenting that the Predictor is trying to minimize the payout, but I cannot make this out from the OP.
Because if the only motive the Predictor has is to be correct in its prediction, regardless of what the payout is going to be, you want to be a one-boxer millionaire, plain and simple, because choosing both boxes will result in an empty box B. This is only true when assuming the Predictor is always right and doesn’t care about the payout.
The Predictor’s motivation is not important. If his motivation were to minimize the payout, he would simply ALWAYS predict everyone to be a two-boxer. In essence, the Predictor would simply not exist, and box B would always be empty. This is not the case, obviously. The Predictor’s only motivation is to always be right, that’s it.
A whole lot o’ one-boxing going on here. IRL I am a one-boxer too but I feel like two-boxing is getting short shrift here so I will play the two-boxer’s advocate.
Understand that the alien is not magic, and he is not always right. He is incredibly accurate, but not perfect. He does not see the future, he only sees the present, and uses his intimate and detailed understanding of physics and human psychology to make his prediction.
Under these assumptions, you should take both boxes. For the contents of the boxes are already determined. And since they are already determined, it is impossible for you to lose anything by taking both boxes.
Think of it this way. Why bury the boxes? Let’s just have them right out in the open. You even get to inspect the contents. The alien says the same thing–if he predicted you’d take one, he put a million dollars in it, and if he predicted you’d take two, he put nothing in the first box. So you look inside the boxes–and there’s a million in one, and a thousand in the other. He predicted you’d only take one box! So. Do you take just one box?
Of course not. But why should your being able to see the contents of the boxes change your decision? In both cases, the contents are already determined. It doesn’t matter whether you can see them or not–what’s in 'em is what’s in 'em.
Perfect, unassailable logic. It’s charming, really. But it denies the reality that if you pick up that “free $1000”, that’s all you’re ever going to get, ever.
I get that it’s a paradox. I really do. The people who are saying that the “two boxers” are dumb, or are fighting the hypothetical, or whatever, are mistaken. It’s a real paradox and I cannot give you a solution. I don’t think you are fighting the hypothetical at all, and I don’t see a flaw in your logic. But, you’re still wrong. And I’m still a one boxer.
By the rules outlined in the OP, if you will ultimately make the choice to take your “free” $1000, the Predictor will have seen that and not placed $1M in the other box. If you make the choice to only open the 2nd box, the Predictor will know that as well and almost certainly have placed $1M in it.
It seems as if your side is ignoring the stipulation that the Predictor is all-knowing. If you are going to take the “free” $1k at the end, he can predict that you will do it and not fill the other box with $1M.
The Predictor is not all-knowing. He’s just merely almost certainly correct in his predictions. In the original phrasing of the paradox, he need not have “seen the future” or anything like that. He’s just very very good at making correct predictions.
Again, go back to the example that both boxes are transparent. If both boxes are transparent, the choice is easy right? If both boxes are transparent, there’s never going to be any money in box B, and you should always choose both boxes, unless you like walking away with no money at all. And if for some crazy reason box B did have a million in it, you’d still take both boxes, and the predictor will have been wrong. But we know that’s never going to happen. But either way, it was definitely better to always choose both boxes, when both are transparent.
So why does making one box opaque change anything? Still always better to choose both boxes.
Repeating over and over that if you choose both boxes, you’re only ever going to get $1000 if you do that, never $1,001,000 is fine. Two-boxers understand that.
Hence, I’m still a one boxer, but I definitely understand and love the two boxer argument.
The OP states that the Predictor is so close to being 100% accurate so as to say that he is 100% accurate. If you take both boxes, there will be no $1M. That is as near to certainty as can be achieved.
The two-box argument assumes a 50/50 chance of the money being in the 2nd box, so you might as well take both. But what is never mentioned is that the very facts that set up this situation tell you, SCREAM at you:
If you take both boxes, the Predictor will have known you would do that, and will have left the 2nd box empty. There is causation there. Your intent to take the two boxes (whether decided at first or at the last microsecond) will cause the million dollars to not be in the one box. Don’t fight the hypo that the Predictor has this type of magic skill.
Nobody is saying that.
Two boxers are just greedy and smug, thinking they can outwit the Predictor for a whopping 1% extra payout. Makes no sense to be a 2 boxer.