You are using the expected utility model, which does strongly support taking B2 only.
Dominance means that a strategy is always the best, no matter what the “opponent” has done.
In this case, the Being could have either put the money in the box, or not.
If he did, then you get an extra $1,000 by taking both boxes.
If not, then at least you get $1,000 instead of nothing.
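The dominance argument above can be sketched as a quick check. This is a minimal illustration, not part of the original problem statement; the dollar amounts are the ones given in the thread.

```python
# A minimal sketch of the dominance argument. The "state" is whether
# the Being put the $1,000,000 in box 2; box 1 always holds $1,000.
def payoff(take_both: bool, money_in_b2: bool) -> int:
    b1 = 1_000
    b2 = 1_000_000 if money_in_b2 else 0
    return b1 + b2 if take_both else b2

# In either state, taking both boxes yields exactly $1,000 more.
for money_in_b2 in (True, False):
    assert payoff(True, money_in_b2) == payoff(False, money_in_b2) + 1_000
```

This is just the dominance table in code form: whatever the Being has already done, two-boxing beats one-boxing by the same $1,000.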
It would be if our opponent had rolled a die to decide what to do. But he didn’t. He had an almost certain idea of what we were going to do. He has a proven track record in the same situation. If we do something, chances are he’s already predicted it. Clearly we need to play the odds and go with the action that gives maximum payoff if he predicted it.
Let’s put this another way. Instead of the opponent predicting what we do, we have to decide on a choice, write it down on a piece of paper and give it to the opponent. He does whatever he does based on the rules for the problem, and then we have to perform the choice we wrote down. No-one in their right mind is going to argue anything other than that B2 is the correct choice in this scenario.
And that is the situation we’re faced with in the original problem as well: an opponent who knows what we’re going to do and acts based on that.
But he’s not here. He’s already gone home. At the moment taking both boxes guarantees that you’ll get $1000 more than taking box 2 only. Of course, you’ll do it glumly, fully expecting the next box to be empty but it doesn’t change the fact that whichever decision the Being has already made, you’ll still get more by grabbing both.
You responded to my modified version of the problem, but your answer makes no sense in relation to that version (once the opponent has made his choice, we have no choice but are bound by the choice we made earlier), so I’m assuming you’re talking about the original version. In that version, the same thing applies: his decision was based on our decision. That gives us a very clear best strategy.
You do agree, I hope, that in my modified version the correct choice is B2? If so, what is the meaningful difference between my version and the original?
He had already made his predictions before your actions, and everyone else’s, in all the test cases, and he was always right. Why does it matter that he’s gone home this time, when he’d also gone home every other time and been right every other time?
Why do you think there’s any reason to doubt his ability to predict what you will do in his absence?
I don’t. I think he’s going to accurately give me an empty box, just like he’d accurately give you $1,000,000. That doesn’t change the fact that when it comes down to making the decision I’ll be $1000 better off by taking both boxes, regardless of what’s in box 2 and so it is the only logical decision. Hence the ‘glumness’ with which I do it.
Priceguy: As long as he can’t see what I’m writing, the scenario is functionally the same.
Alright… But that you’d, personally, rather go with the only infinitesimally less risky yet exponentially less profitable path has very little to do with whether the situation is a paradox or not. All it seems to point to is that you’re someone who takes the “keeping my money in a chest buried in my back yard” approach to life.
Being that the predictor is seemingly omniscient, he would know about Newcomb’s Paradox and what type of person you are (because apparently he has understood all your decisions so far). So what this problem all boils down to is: if you are a Dominance-type person, you will get $1,000 guaranteed. If you are a person who subscribes to the expected utility model, you will get $1 million guaranteed (based on the predictor’s correct-choice rate).
ETA: if you are a person who uses the expected utility model but who, after the predictor has left, is swayed by the Dominance principle, then perhaps you are more likely to get the $1.001 million, but then why take the chance?
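The expected-utility side of the comparison can be worked out explicitly. The problem never states the predictor's accuracy, so the value 99/100 below is purely an illustrative assumption (using exact fractions to avoid floating-point noise):

```python
from fractions import Fraction

# Expected payoff under a predictor with accuracy p.
# p = 99/100 is an illustrative assumption; the problem gives no exact figure.
def expected_value(take_both: bool, p: Fraction) -> Fraction:
    if take_both:
        # With probability p the Being foresaw two-boxing, so box 2 is empty.
        return p * 1_000 + (1 - p) * 1_001_000
    # With probability p the Being foresaw one-boxing, so box 2 is full.
    return p * 1_000_000

p = Fraction(99, 100)
print(expected_value(False, p))  # one-boxing: 990000
print(expected_value(True, p))   # two-boxing: 11000
```

Setting the two expectations equal shows one-boxing wins in expectation whenever p > 1001/2000, i.e. whenever the predictor is even slightly better than chance, which is why the two models clash so sharply here.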
To my recollection, this is exactly what the scenario was originally designed to illustrate–that these two models for decision making, each seemingly completely plausible and sensible on its own, give conflicting results.
I myself am a one-boxer, but I think the two-boxers are receiving short shrift in this thread. So for this post, I take their side.
No matter what prediction the alien has made based on his understanding of physics and biology, the fact is, there’s some amount of money in front of you now and it’s either contained in one box or distributed between the two boxes. The way to guarantee you will get the maximum amount of money possible is to take both boxes. Leaving one box behind just means you’ve given yourself a chance to miss out on part of the money.
Either the alien predicted I would take one box, or it predicted I would take both boxes. If the alien predicted that I would take both boxes, then by taking both boxes, I get the maximum amount of money available–$1000. Meanwhile, if the alien predicted that I would take only one box, then by taking both boxes, I get the maximum amount of money available–$1,001,000. So no matter what the alien predicted, by taking both boxes, I get as much money as it is possible for me to get.
The money’s there in the boxes now. It’s not as though, by second guessing myself and trying to figure out what’s in the boxes based on my predictions of the alien’s predictions and so on, I can somehow reach back in time and change the amount of money the alien put in the second box. The money is, now, either there or not.
That’s not correct. His decision is based on his prediction of our decision. There is no causal link between my decision and his prediction of my decision–that would require backwards causation.
The key thing to keep in mind when trying to think like a two-boxer is this: My actions, here and now, can have no effect whatsoever on the alien’s prediction, and hence no effect whatsoever on the contents of the boxes.
If I believe in God, obviously I would choose B2, and expect that He has given me as much money as He wants me to have. Since God is the only perfect predictor I believe in, and He matters to me a whole lot more than money, I won’t really care what is in the box.
Taking both boxes seems silly, to me. If you get a million, why do you need another thousand? If you get nothing, why do you need an extra box to carry it around?
This “paradox” arises only if there is backward causality. Since there is not, it is not a paradox but a confusion over what is possible and what is not.
In reality, you’re right. But the problem presupposes the existence of accurate prediction. We have a predictor with a proven track record. He has done this stunt before. He has been right every single time. The only difference between this problem and the one I outlined is that there is still a little tiny chance that the predictor is wrong this particular time.
The problem, not me, says this guy can do what he can do. It doesn’t say whether there is backwards causation or how the hell he does it, but the problem makes us accept that he can. If he can’t, you’re right: go for both boxes. If he can, there is some mechanism (which the problem tells us is irrelevant) by which his prediction hinges on our decision.
But that is wrong. In some way, the specifics of which we’re told to ignore, my actions do have an effect on the prediction. Exactly as if we’d written our choice down beforehand and then had to stick by it after the predictor read it and arranged the boxes accordingly.
I just re-read the problem as stated in the OP, and in fact, the problem is ambiguous. It allows you to think of the predictor either as magically able to foresee what you will do, or else as able to predict what you will do based on your present state through scientific means. These are two different problems, though. I think that you’re saying this as well.
If the predictor could literally see what I am going to do, and make his prediction based on that, then the two-boxers don’t have a case, I think. But as classically formulated, the problem posits an alien being with very very good predictive capabilities, and his predictions are explicitly not based on any direct knowledge of what you actually will do. They are predictions in the every-day sense: Taking what’s happening now, and using that to figure out what will happen next. In this case, I think the two-boxer has a case, no matter how accurate the predictions are supposed to be.
I’ve heard a slightly different version than the one presented in the OP. The version I heard has box 1 and box 2, and the setup is that either box 1 contains $1000 and box 2 contains $1,000,000, if you pick just box 1; OR both boxes are empty if you try to pick both. In other words you can get $1000 and pass up $1,000,000 OR pick both boxes only to find them both empty (if the prescience of the superior being is correct).