Newcomb's Paradox

One of the peculiarities about this problem is that almost everyone thinks the answer is trivially obvious.

They just don’t agree.

Frylock and Alan Smithee have already made the points I was going to make in response to other comments. I agree that I did not make it clear enough that we are not assuming a perfect predictor, only one whose track record justifies our confidence in his ability. Also, the problem does center on how the expected utility and dominance arguments are both so compelling, yet reach opposite conclusions. I am still unable to explain why I would take only the one box, when I find no fault in the dominance model.

There is no dominant strategy in this game. In fact, if you define the goal as outsmarting the predictor, you automatically lose.

As for the transparent boxes version, all that does is demonstrate that you cannot have both a perfectly omniscient predictor and a perfectly rational box chooser. An irrational chooser might pick an empty box because he thought he saw a million bucks in it, or because he is a billionaire and doesn’t need the money but likes the box, or to be a smartass, or who knows why.

So what’s the paradox again?

Taking both boxes is dominant, because no matter what the predictor did a week ago, you get $1,000 more by taking both boxes.
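To spell out the arithmetic, here’s a quick sketch of the dominance check (Python; the standard $1,000 / $1,000,000 stakes are an assumption on my part, since the thread hasn’t pinned down exact amounts):

```python
# Payoff table for Newcomb's problem, indexed by (prediction, your choice).
# Assumed stakes: $1,000 always sits in box 1, and box 2 holds $1,000,000
# only if the predictor foresaw a one-box choice.
payoffs = {
    ("predicted_one_box", "one_box"): 1_000_000,
    ("predicted_one_box", "both_boxes"): 1_001_000,
    ("predicted_two_box", "one_box"): 0,
    ("predicted_two_box", "both_boxes"): 1_000,
}

# "Both boxes" dominates iff it pays strictly more against every prediction
# the predictor could already have made a week ago.
dominates = all(
    payoffs[(p, "both_boxes")] > payoffs[(p, "one_box")]
    for p in ("predicted_one_box", "predicted_two_box")
)
print(dominates)  # True: against either fixed prediction, both boxes pays $1,000 more
```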

But the goal is not to outsmart the predictor. In fact, those who accept the dominance argument fully expect to only get $1,000. It is with a heavy heart that they would choose both boxes.

The transparent boxes version merely demonstrates the logical impossibility of a prediction remaining true when you tell the subject what prediction you made. It loses the essence of the problem. In the original problem, the actual prediction does not affect your choice. Your suspicion about the prediction might affect your choice, but that is perfectly reasonable.

If the predictor always knows what choice you will make, then choosing both boxes is clearly not always the choice with maximum return. It is merely the choice that guarantees some return.

I’d go for the box containing just the 1m, because next to 1m an extra 1k is negligible. If somehow this alien lost his ability to predict, I’d only be out an amount I could conceivably earn in a few days. I’d take that bet when the worst case is a negligible loss.
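To put rough numbers on that trade-off, here’s a sketch of the expected values, assuming the standard stakes and a predictor who is right with probability p (both figures are assumptions, not anything fixed above):

```python
# Expected winnings for each choice when the predictor is right with
# probability p. Assumed stakes: $1,000 always in box 1, $1,000,000 in
# box 2 only when a one-box choice was predicted.
def ev_one_box(p: float) -> float:
    return p * 1_000_000                      # right -> box 2 is full

def ev_both_boxes(p: float) -> float:
    return p * 1_000 + (1 - p) * 1_001_000    # right -> box 2 is empty

for p in (0.5, 0.5005, 0.9, 0.99):
    print(p, ev_one_box(p), ev_both_boxes(p))
# The break-even point is p = 0.5005: one-boxing wins in expectation
# against anything meaningfully better than a coin flip.
```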

That makes that choice even more difficult to understand. All they have to do is pick one box, and they’re almost certainly up a million. Given the predictor’s proven track record in exactly the same situation, taking both boxes makes no sense.

Well, logic isn’t hard to understand. But it only gives you a correct answer if your facts are correct and your logic is valid.

I chose the answer that I felt was right. My faith tells me that there is no perfect predictor save God alone. That is not based on logic. It’s odd to be told here, on this board, that my faith is logical. That hasn’t happened before.

However, I did not give an answer that was true. In truth, I would make no choice, nor take any boxes. Those reasons are not based on logic, either. But, it turns out that logic would give the same answer. Experiments on motivation are not carried out to benefit the test subject. This event, as described, is much more likely to be such a test, or simply a scam with an unidentified target. I wouldn’t be there. I would have had no reason to believe that anyone would give me money, nor that someone exists who can perfectly predict people’s actions.

So, the logical answer is not the true answer. The truth is that I wouldn’t participate.

Tris

No, there’s a fundamental difference. In your coffee/tea paradox, you hear what the predictor says before you make your choice, so you can amend your choice based on his prediction. In the money box “paradox” you have to make your choice without knowing what the predictor did, so you can’t base your choice on his prediction.

An actual equivalent would be if the predictor said, “I filled this thermos with coffee or tea. I predicted which drink you’d pick from those two choices and I filled it with that one. Now tell me what your choice is and then I’ll open the thermos and prove I predicted correctly. And I knew you were going to try to fool me when I made my prediction and I took your attempts to fool me into account when I filled the thermos.” You can’t simply contradict his prediction because you don’t know what it is. You can try to outthink him by picking the drink you think he wouldn’t have predicted but to do this you need to predict his choice just as he predicted yours.

Now take the thermos paradox a step further. Suppose the predictor tells you that he filled the thermos with Kool-Aid. According to Wikipedia, there are currently 105 different flavors of Kool-Aid. The predictor says that his game is that you can guess any one of those flavors and then he’ll open up the thermos and show you that he filled it with that flavor. And he’ll bet you any amount of money you want to wager that he predicted your guess. If it’s any one of the other 104 flavors then he’ll pay you that amount of money. The only stipulation is that you must make the choice yourself by some non-random means.

But he tells you he’s a genius for predicting what kind of Kool-Aid people will guess. He says he’s played this game millions of times and never lost a bet.

Assuming that you can establish that everything he’s said is correct and on the level (he has no psychic or divine or supernatural or sci-fi powers and there really is only one type of Kool-Aid in the thermos and it’s already been filled) would you make a bet with him and, if so, for how much? Rationally, the odds are heavily in your favor. There are 105 choices you can pick and he’s only got one winning choice. But he’s won this game millions of times already.
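To make that tension concrete, here’s a back-of-the-envelope sketch (the 1,000,000-game figure is just a stand-in for his claimed “millions of times”):

```python
# Naive view: you have 104 winning flavors out of 105, he has one.
stake = 100.0
naive_p_win = 104 / 105
print(stake * naive_p_win - stake * (1 - naive_p_win))  # ~ +98.10: looks great

# But his record is (say) 1,000,000 wins and 0 losses. A crude estimate of
# his chance of winning the next game (Laplace's rule of succession) says
# your expected value is actually about -100 on a 100 stake:
games, his_wins = 1_000_000, 1_000_000
p_he_wins_next = (his_wins + 1) / (games + 2)
print(stake * (1 - p_he_wins_next) - stake * p_he_wins_next)  # ~ -100.0
```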

I agree. ForumBot’s friend’s version isn’t really a paradox. Or, at least, it only shows the logical impossibility of a very restricted kind of predictor. What it rules out is a perfect predictor that is forced to express its predictions in ways that will prevent its own predictions from coming true. (This is expressed in item 5 of ForumBot’s post: “5. If the predictor predicts, with 100% accuracy, that agent X will select one box (and one box alone), it places $1M in the blue box.”)

The core of ForumBot’s “paradox” is captured by the following simplification: A perfect predictor predicts whether it itself will utter “0” or “1”. If it predicts that it will utter “0”, it expresses this by uttering “1”. If it predicts that it will utter “1”, it expresses this by uttering “0”.

Such a predictor is obviously impossible, but that isn’t a profound observation. It certainly doesn’t rule out the possibility of an omniscient god, since we cannot assume that such a god will be forced to express its predictions in prescribed ways.
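If it helps, the impossibility can be made concrete in a couple of lines (purely illustrative): the constraint maps every prediction to its own negation, so there is no self-consistent utterance for the predictor to make.

```python
# The constrained predictor must utter the opposite of whatever it
# predicts it will utter.
def forced_utterance(predicted_utterance: int) -> int:
    return 1 - predicted_utterance  # predict "0" -> say "1", and vice versa

# A perfect self-predictor needs utterance == forced_utterance(utterance),
# i.e. a fixed point. There isn't one:
print([u for u in (0, 1) if forced_utterance(u) == u])  # [] -- no fixed point
```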

Very good objections, everyone! I’m not sufficiently trained in logic to defend these claims myself, though most sound like good ones, so I’ll run them by my friend and a couple of professors to see what they say. I know that I disagree with a few of them, but I’m not confident that my answers are any more correct, so I’ll keep quiet until I do know.

There are three options, so you have ‘0’, ‘1’, and ‘2’: choose box 1 (0), box 2 (1), or both boxes (2). The predictor can predict among these options because they are the only ones available to the chooser.

I wasn’t trying to formalize all the elements of your version. I was proposing a simpler version that dispenses with some of the details. I don’t think that any of the dropped details contribute to what generates the contradiction.

In your friend’s version, the predictor is forced to express its prediction in a way that guarantees that the prediction won’t come true. If the predictor predicts that agent X will take only one box, the story forces the predictor to act upon that prediction (i.e., to express it) by putting money in box 2, guaranteeing that agent X will take both boxes, and so invalidate the prediction. If the predictor predicts that X will take both boxes, the story forces the predictor to express that prediction in a way that will guarantee that agent X will take only one box, again invalidating the prediction.

If you remove the condition that the predictor is forced to act upon its predictions in a certain constrained way, the contradiction disappears. That is what my simplification was intended to capture. Moreover, the pre-specified constraint in your friend’s version makes the predictor a poor model for a god. No god that I know of is supposed to be constrained to act upon its predictions in such a self-invalidating way.

Excellently put. Kudos.

I’ve asserted no such thing – I just mentioned (a paraphrased version of) a well-known paradox.
One escape from such a paradox is to say that the hypothetical situation could never happen, and I’ve nothing to say about that.

You, on the other hand, asserted that in the hypothetical the universe would somehow preserve causality. And I wondered how you were so sure about this.

So it isn’t actually valid for you to try to turn my words back at me.


As for the paradox of the OP, I think it’s pretty clear that if the predictor has no causal connection to the future, then you should take both boxes. No matter what you do, you can’t influence her prediction, and you’re better off in both cases if you take both boxes.
On the other hand, if the predictor somehow has a connection to the future, then it gets ugly. But as I was originally trying to say, it gets ugly purely because we have a causal loop and not really because of the rules of the game or the accuracy of the predictor.

A third case is if you’re playing the game multiple times. In this case, it obviously makes sense to take only one box some, if not all, of the time.
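A quick simulation of the repeated case bears this out (a sketch only: the 90% accuracy figure, the standard stakes, and the assumption that the predictor’s errors are independent of your strategy are all mine):

```python
import random

random.seed(1)

def average_winnings(p_one_box: float, accuracy: float, rounds: int) -> float:
    """Average payoff per round for a mixed strategy that one-boxes
    with probability p_one_box, against a predictor of given accuracy."""
    total = 0
    for _ in range(rounds):
        one_box = random.random() < p_one_box
        correct = random.random() < accuracy
        predicted_one_box = one_box if correct else not one_box
        box2 = 1_000_000 if predicted_one_box else 0
        total += box2 if one_box else box2 + 1_000
    return total / rounds

for p in (0.0, 0.5, 1.0):  # always two-box, mixed, always one-box
    print(p, average_winnings(p, accuracy=0.9, rounds=100_000))
# With a 90%-accurate predictor, always one-boxing averages ~$900,000 a
# round; always two-boxing averages only ~$101,000.
```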

Actually I’ve realised that I’m oversimplifying here. I guess it comes down to exactly what the original paradox means by a perfect predictor.

  1. A perfect predictor can directly see into the future.
    This straightaway opens the door for paradox, and it’s not at all surprising that a paradox is possible within the specific rules of this game.

  2. A perfect predictor uses skill and judgement to make predictions and has always been right in the past.
    Take both boxes. You can’t influence her prediction.

  3. A perfect predictor has complete knowledge of the present, and the universe is deterministic.
    In this case, just take box 2. It means that the prediction was kinda self-referential, but there’s no contradiction or possibility of failure.
    A different game to Newcomb’s might have more of a problem with this self-reference…hmm I may yet get a paradox named after me… :wink:


That’s not dominance. It’s the same as the prisoner’s dilemma: the dominant choice would probably be made with regret.

It wasn’t me, it was Frylock, but never mind. You posited a perfect predictor, and Frylock pointed out that if such a thing exists, then there is also something that keeps the predictor from contradicting himself. Without it, there can be no perfect predictor. By the way, this is exactly the same thing as saying “the hypothetical situation could never happen”.

I think you’re overreaching here. An absolute perfect predictor might be a logical paradox but that’s not what we’re talking about here. The OP is talking about a limited predictor - one who has demonstrated the ability to predict other people’s reactions within a specific frame.