Newcomb's Paradox

No, I’m saying that the means he uses are irrelevant. The problem puts us in a world where accurate predictions exist. That’s what we have to work with.

If he’s that good, he’ll know we’re going to take both boxes (he knew all the other times), so it’s still better to totally and fully intend to take the second box, and just to make sure, we’ll actually do it. But now we’re drifting into that “intend to take the poison tomorrow” problem.

That’s even less of a problem than this one. Box 1 is the right choice.

EDIT: Provided the predictor is perfect, of course.

What do you mean by “when it comes down to making the decision”? It looks like you already have made the decision, before you’ve even met this very impressive entity. You’re not making his job of prediction very difficult, when you tell him the answer in advance.

When it comes time to make the decision, I can choose between getting a grand and a megabuck. That’s all there is to it. All that the (near) perfect predictor adds to the problem is that its existence implies that I make my decision earlier than it looks like I make it.

I would pick box 2.

But I would also place some side bets on the outcome. :wink:

Pick box 2, get a million bucks.

What’s the issue again?

A rational actor would want to maximize his gain and minimize loss. This is why it is very important for the boxes to carry a $1 cost of selecting. The problem isn’t “is there a way for the predictor to be right”, but in this instance, can the predictor be wrong?

The OP has poorly worded the paradox and omitted several critical details. Here is the summation given by my friend Corey, who is pursuing his PhD in Logic and assures me it is the most powerful argument against omniscience. He personally modified it to show the concept of God, in particular, to be logically incoherent, but the general gist is omniscience.

People have a hard time accepting the possibility that somebody else might be able to predict their actions better than they can predict their own actions.

You might intend to take action X all along, and then, later, take action Y. Is it possible that another person could have anticipated the switch?

Sure. When your screwup cousin Louie borrows $500 from you, what do you think the odds are you will get paid back? Pretty damn low, but at the time he borrows the money, Louie may very well think the odds are pretty high.

That’s why I would pick just the black box. Even if reverse causality is technically impossible, the “Being” knows me so well that I may as well act as if it exists.

You don’t have to be aware of the being’s presence to make your decision. I have a hard time understanding why any of you would take only one million dollars when you could increase your gain by one thousand when you have literally nothing to lose (minus the $1 choosing fee).

ForumBot, that version makes a whole lot more sense, but I disagree with the conclusion. If the predictor is in fact omniscient and 100% accurate, then it is impossible for the agent to do anything other than what the predictor predicted. Otherwise, the predictor isn’t omniscient and isn’t 100% accurate.

That is a classic mistake, mistaking omniscience for omnipotence. The act of predicting is purely a mental one, in assessing what will happen in the future. An act of predicting which binds the future and controls actions is an act of power, a function of omnipotence.

That’s what I’m sayin.

If I pick just box 2, I get $1 million; if I do anything different, I get $1k.

Where’s the deliberation come in?

Ah, now that’s a paradox. :cool:

@ ForumBot

Let’s set up a few logical statements.

With only two choices and two outcomes, we can say:

I select both boxes = NOT C
I select box 2 = C

B2 has $0 = NOT R
B2 has $1000000 = R

In both cases Box 1 has $1000, but it’s irrelevant to the outcome.

Perfect Predictions:

1. If C, then R
2. If NOT C, then NOT R

If I take Box 2:
3a. C
therefore, R.

If I take both:
3b. NOT C
therefore, NOT R.

ForumBot’s version: I can see the boxes, and choose spitefully:
3c. If NOT R, then C

4c. If NOT C, then R (from 3c)
5c. therefore, R (from 1 & 4c) (I chose an empty box and won a million!)

or the other way:
3d. If R, then NOT C

4d. If C, then NOT R (from 3d)
5d. therefore, NOT R (from 2 & 4d) (Hey! My money’s gone!)

Indeed, a perfect predictor, no matter what I do.
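The two spiteful cases above can be checked mechanically. Here’s a quick brute-force truth-table sketch (the helper names `implies` and `consistent` are my own, not from the thread) of premises 1, 2, 3c, and 3d:

```python
# C = "I select only box 2"; R = "box 2 holds the $1,000,000".
# Premises 1 and 2 encode the perfect predictor; 3c/3d encode the
# spiteful strategies with transparent boxes.

def implies(a, b):
    return (not a) or b

def consistent(premises):
    """Return every (C, R) assignment satisfying all the premises."""
    return [(c, r) for c in (True, False) for r in (True, False)
            if all(p(c, r) for p in premises)]

perfect = [
    lambda c, r: implies(c, r),          # 1. If C, then R
    lambda c, r: implies(not c, not r),  # 2. If NOT C, then NOT R
]

spite_3c = lambda c, r: implies(not r, c)  # 3c. If NOT R, then C
spite_3d = lambda c, r: implies(r, not c)  # 3d. If R, then NOT C

print(consistent(perfect + [spite_3c]))  # [(True, True)]: C and R
print(consistent(perfect + [spite_3d]))  # [(False, False)]: NOT C, NOT R
```

In each case exactly one assignment survives, matching conclusions 5c and 5d: the spiteful chooser can’t make the perfect predictor wrong.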

If our alien can’t predict perfectly:
Let p(C) be the probability, from the alien’s perspective, of what my choice is:

If p(C) > .5, then R
If p(C) <= .5, then NOT R

I decide to take both no matter what:
If NOT R, then NOT C
If R, then NOT C

Therefore NOT C.

If the alien thinks this way, p(C) = 0, therefore NOT R.

If the alien has no information about me, then why not take both? And the alien can save $1000000 by never putting anything in Box 2.
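The imperfect-predictor trade-off can also be put in expected-value terms. A minimal sketch, using the thread’s payoffs ($1,000 in box 1, $1,000,000 in box 2 when the predictor expects one-boxing) and treating the predictor’s accuracy as a single parameter `p` (my own framing, not from the thread):

```python
# Expected value of each strategy against a predictor that is
# correct with probability p.

def ev_one_box(p):
    # Box 2 was filled iff the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # You always get box 1's $1,000; box 2 pays only if the predictor
    # wrongly expected one-boxing.
    return 1_000 + (1 - p) * 1_000_000

# One-boxing wins whenever p * 1e6 > 1000 + (1 - p) * 1e6,
# i.e. whenever p > 0.5005 -- barely better than a coin flip.
for p in (0.5, 0.6, 0.99, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
```

So on this framing, even a modestly reliable predictor makes one-boxing the higher-expected-value choice, which lines up with the "box two is the right choice if the predictor is merely good" position below.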

I’m not mistaking omniscience for omnipotence. I’m not saying the predictor causes the choice or restricts the choice. I am saying that an omniscient predictor would see the whole situation, including the transparent boxes and their contents, and see what the agent would end up doing. If the agent doesn’t do that, then the predictor obviously isn’t omniscient.

If the predictor predicts with 100% accuracy, then if it has predicted that you will pick one box, then you will pick one box. Your friend can insist a rational agent will pick two boxes instead, but your friend is simply stipulating that the agent can contradict the predictor. It is not necessarily true that the agent can contradict the predictor–rather, what is necessary according to the hypothesis is that the agent will pick one box just as the predictor predicted. This means that some force or consideration will intervene, making it seem better to the agent to pick one box just as predicted.

-FrL-

There is no paradox in ForumBot’s version. The predictor is also an actor whose actions affect the outcome; being omniscient means that the predictor knows what you will do given any action that the predictor takes–it does not mean that the predictor can foresee a given outcome and then act in a way that precludes that outcome, and therefore make himself wrong.

Imagine you are the predictor. You predict with perfect accuracy that IF you put money in both boxes the subject will take both boxes and that IF you put money in only one box the subject will take only that box. You cannot, therefore, logically do what you proposed and place money where I won’t take it, but that means that you cannot do something logically impossible, not that you lack perfect knowledge. Even God is not traditionally said in Christian theology to be capable of doing what is logically impossible.

Consider this paradox:

A being capable of seeing into the future with 100% accuracy offers you the choice of coffee or tea. She also (truthfully) tells you what your choice will be. But for a laugh, you vow to choose the opposite of whatever she says.
What happens?


I think that Newcomb’s paradox boils down to the same thing. All this stuff about the most rational choice is a red herring: a paradox is implied the minute you put a perfect predictor together with an intelligent agent aware of the predictor’s ability.

That’s why the original formulation did not include a (known) perfect predictor.

As for your example, what happens is some consideration intervenes leading you to choose what it was predicted you would choose.

-FrL

-FrL-

Without a known perfect predictor, what’s the paradox? Any rational agent would take both boxes.

There are many ways out of the paradox, but none of them can be said to be “right” at this time.

To your escape, I would say that there is no known physical phenomenon that would act to prevent me reaching for tea after hearing the prediction “coffee”. If you prefer to believe that there is one, that’s up to you, but I don’t see how you can be so sure about it.

Box two is the right choice if the predictor is merely good.

There is no known physical phenomenon by which anyone could see the future with 100% accuracy. If you prefer to believe that there is one, that’s up to you, but I don’t see how you can be so sure about it.