Newcomb's Paradox

I see you’ve answered the post. Good! Thank you!

As to the above comments, I think you misunderstood the line “Put Begbert back on the line.” You are right to notice this means the post was directed at you, but it was not a series of jabs; rather, it was simply a series of questions. And, to be clear, in case it wasn’t already clear, the meaning of that final line was as follows: “If you answer that way to this question, then you’re not Begbert but someone else talking on his phone line, since Begbert’s whole point is to take the position opposite that answer!” It was a joke, and not even a slightly mean one.

Fair enough. It satisfies the intent of my first question to reword it as “Do you think people generally got a million dollars” etc.

Right, this is a good catch. By putting “get” in the present tense, I stacked the deck, didn’t I? I’m glad you’ve pointed this out.

So, taking the question to be reworded in terms of “got,” it looks like your answer to this latest “why not?” question is something like this: “Since I (Begbert) don’t know how the alien is making his predictions, I don’t know whether his prediction was accurate in my case. It may be that I live in a universe in which all his predictions up until now have been accurate, but I will be his first inaccurate prediction.”

If I’m reading you right, that’s an interesting reply. I’ll reply as follows. This kind of reasoning seems very impractical. Maybe I’m in a universe in which gravity works until noon today, then stops working afterwards. So I don’t know that gravity will keep working. So I shouldn’t take actions based on the assumption that gravity will keep working. Right? Of course not. The flaw in that reasoning is that it flies in the face of the kind of induction that we must, of necessity, engage in in order to make any sense of the world. But I think your reasoning above amounts to the same kind of mistake: it disallows exactly the same kind of induction. Of course, weirdly predictive aliens are by no means an everyday occurrence which we need to be ready to deal with. Still, if you are going to disallow induction in that case, but not in the vast majority of cases in which you perform induction every day, you need to come up with a story that justifies what amounts to a double standard.

That’s quite right: even on the reworded version of question one, the flowchart ends here, so to speak. I wrote the post fully expecting the process to end somewhere short of the last line of the post.

I think the considerations above are worth discussing before moving on to what you wrote in the remainder of your reply to me. As you said, the stuff that comes later in your response is a bit moot, since it is predicated on assumptions you yourself do not allow.

Thanks for taking the time to address my post.

-FrL-

Oh, and I would like to point out that, while a perfect predictor is very likely impossible, that is not why the one-box approach is flawed. The reason it is flawed is that it presumes reverse causation, which is impossible, and, worse yet, it ignores the conditions of the problem.

The one-box solution sees this as a situation where a person sitting there has a choice: they can either take one box that has a million dollars in it, or they can take two boxes, one of which will be empty and one of which will have a thousand dollars in it. The problem with this analysis of the situation is that it is expressly stated in the problem that this is not the choice you are making. There are only two boxes in there when you make the choice, and their contents are fixed. One of two scenarios is possible: box A is either filled or empty. In either scenario, A + 1000 > A.
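To put numbers on the “A + 1000 > A” point, here is a minimal sketch of the dominance comparison, assuming the figures used in this thread (box A holds $1,000,000 or nothing, box B holds $1,000); the exact amounts don’t matter to the argument:

```python
# Dominance comparison for Newcomb's problem, with box A's contents fixed
# before the choice is made. Amounts are the ones used in this thread.
MILLION = 1_000_000   # what box A holds if the predictor filled it
THOUSAND = 1_000      # what box B always holds

for box_a in (MILLION, 0):            # the two possible fixed states of box A
    one_box = box_a                   # take only the opaque box
    two_box = box_a + THOUSAND        # take both boxes
    print(f"box A = {box_a:>9}: one-box gets {one_box:>9}, "
          f"two-box gets {two_box:>9} (difference {two_box - one_box})")
```

In both fixed states the two-box total comes out exactly $1,000 ahead, which is all the dominance point claims; whether the states can really be treated as fixed independently of the choice is, of course, the very thing in dispute in this thread.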

It is never a choice between a million dollars and a thousand dollars; the only choice you can make is whether to take the $1000 box or not. Imagining otherwise, that your present decision can retroactively empty or fill box A, requires belief in reverse causation, which is not only absolutely impossible but also explicitly defies the conditions of the problem. Sorry, no. You’re not choosing whether to get a million dollars or not; that decision was made before you arrived. You’re merely deciding whether to take the thousand.

This is true regardless of the accuracy of the predictor. All the million-takers left a thousand on the table, where they had the free choice to take it, and if they had, the predictor would have been wrong and they would have been richer. Once the million was in front of them, they did not have the option of losing it by their choice.

I understand.

I think my comments above about induction are relevant here, so I’ll just wait to see what you have to say about that.

-FrL-

Given the impossibility of reverse causation, and since the boxes are already filled, the only way the predictor can ensure his accuracy would be to force my decision to be one way or the other. As that doesn’t appear to be possible for him to do, I have to assume he’s fallible. (I would be proved wrong if I found myself unable to take both boxes, but that’s another scenario entirely.)

Essentially, I have just proved that he’s fallible, induction be damned. So there must still be some small chance he messed up and filled box A. If he did, I’ll still get the big money if I take both. If he didn’t fill box A, then leaving behind box B won’t change that fact. It’ll just leave me poor.

Like I’ve said, the higher the accuracy of the predictor, the less surprised I am that he left box A empty. But that doesn’t change the fact that if box A was full, I wouldn’t have to throw box B away to get that money. I get box A automatically.

You’ve said here that there’s “still some small chance he messed up”, implying that you still think the chance of him getting it right is very high.

So, my question is, do you believe there’s any correlation at all between what a player does and what the predictor predicts? That is, if we were to speak in terms of probabilities, do you think the conditional probability P(predictor says player will take both boxes | player actually takes both boxes) is any different from P(predictor says player will take both boxes)?
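To make concrete why that question matters, here is a minimal sketch in expected-value terms, assuming (for simplicity) a single symmetric accuracy p = P(the prediction matches the player’s actual choice), so that p = 0.5 means the two probabilities in the question are equal and the prediction carries no information; the dollar amounts are the ones used in this thread:

```python
# Expected payoffs as a function of predictor accuracy p, where
# p = P(prediction matches the player's actual choice).
# p = 0.5 corresponds to "no correlation": the prediction tells you nothing.
MILLION, THOUSAND = 1_000_000, 1_000

def expected_one_box(p):
    # The opaque box is full only if the predictor (correctly) foresaw one-boxing.
    return p * MILLION

def expected_two_box(p):
    # The opaque box is full only if the predictor (wrongly) foresaw one-boxing.
    return (1 - p) * MILLION + THOUSAND

for p in (0.5, 0.9, 0.99, 0.999):
    print(f"p = {p:<5}  one-box: {expected_one_box(p):>12,.0f}"
          f"  two-box: {expected_two_box(p):>12,.0f}")
```

At p = 0.5 the extra thousand is pure gain, so two-boxing wins; the stronger the correlation, the more the comparison tips the other way, which is exactly the disagreement being aired in this thread.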

Since there is no backwards causality, and since correlation without some form of causality is useless as a predictor (being merely coincidental), you must be asking whether I think the predictor has either 1) some way of controlling the chooser’s decision, or 2) some way of basing his prediction on the same factors that the chooser is going to use to make his choice.

1 seems unlikely, but I don’t have to do anything to find out; if the predictor is somehow going to force me to choose only one box, he’s not going to need my help to make it happen. (At least I could expect to get a million dollars to salve my damaged sense of autonomy.)

2 seems highly probable, if the guy has been making an admirable record of accurate predictions. The thing to note, though, is that no matter how much research he does on us, we still (absent case 1) have the option of choosing whichever option we want, and piffle to the correlation. Correlations aren’t laws of nature; they’re a way of statistically taking note of real trends. And real trends can alter.

So, yeah, the guy probably read this website and knows that I don’t believe in fairies. So box A will probably be empty for me. Which is all the more reason I should take both boxes; I want to get something out of the deal, right?

Edit: Which is not to say that you should not take both boxes. You have no personal investment in keeping the predictor accurate, after all, so why not take the extra thou?

Why not decide right now that if you’re ever in this position, you’ll take B alone? Then if the moneybags alien robot ever does show up, he’ll likely predict you’ll take B because you already decided to, and Bob’s your uncle.

Seems to me thinking about it too much just runs the risk of you arguing yourself out of a cool mill.

Well, at least by resolving to just take box B you’d take the speculation and worry out of the equation…

…but you might run into the minor problem that taking only B isn’t one of the allowed options! :smiley:
(Box A = $1000000/$0, Box B = $1000)

Well, whatever that designation is, decide to take the opaque box alone. The OP actually identified this as B2, with B1 being the transparent $1000-box.

Because whatever you decide now, when it finally comes down to it you’ll want to maximize your money and to a certain mindset that means taking them both. It’s like the dilemma about whether you can intend to take a poison, even though by the time you do it you have no practical reason to do so.

Well, the optimal possible solution would be to convince the predictor that you were going to take just the one box, and then take both - however, there are probably those who would assert that this is impossible to do.

Myself, I’d be taking both boxes, as it’s the way to get more money. (More is more, and all that.) Though there is some ironic appeal to trying to just grab the transparent box and running, while thumbing your nose back at the teasingly opaque box that you left behind. Fie upon thee, foul temptation! :smiley:

(I never heard of the poison dilemma.)

It goes like this:

“Suppose a wealthy man offers you $1 million if at midday Monday you intend to drink a poison at midday Tuesday. The poison will make you violently ill for a few hours but cause no lasting damage. The benefactor has a 100% foolproof device to determine whether or not you really intend to drink it. If you do intend to drink it he’ll give you the money immediately on Monday and he has no further interest in whether you actually do it. You know this is going to happen so could you actually intend to drink it knowing that you will have no reason to do it?”

Ah.

Answer: do whatever it takes to convince yourself that you’ll do it. For myself, I believe that one is still obligated to do a service even when one has been prepaid for it, so I could easily frame the situation in my mind that way, and get the cash.

(I’d probably even drink the “poison”, when the time actually came. A few hours’ illness? Small price to pay for a mil.)

I wrote the following yesterday before the poison dilemma was mentioned, but the board was down so I couldn’t post it. I think it fits in nicely with the direction the conversation has taken since then. When I wrote it we were up to post 185 or so.


Reading these last few responses, a thought struck me. I think the paradox occurs partly because of differences in interpreting when the situation happens and to whom.

The paradox presents the situation as a hypothetical presumed to be happening to you at the present time. The prediction has already been made, the previous predictions have already been proven right or wrong, the boxes are full (or empty), and the decision of which box to open is made as soon as you say it is made.

The problem is that it is hard to consider it that way. It isn’t happening to you right now, and it can’t, because there is currently no predictor with a track record you are already aware of (which is required by the setup). So it is natural and almost inevitable to begin thinking of the problem as an unlikely but possible future event. This changes the issue subtly but very significantly. If the situation hasn’t happened yet, then what you decide to do now can influence the choice that a hypothetical predictor will make, but presumably hasn’t yet made.

This gives me, at least, a very powerful but previously unnoticed motivation to resolve the paradox in favor of choosing one box. The paradox implies that what the predictor does is based on what sort of person you are prior to the point at which you make the decision. Since I am considering the paradox as a hypothetical future possibility (even though I hadn’t previously realized I was doing this), I immediately see that it benefits me in that hypothetical future to now be the sort of person who will choose only one box. Furthermore, to the extent that I consider the situation to be a real (if infinitely unlikely) possibility, it makes sense that I must really be convinced that I will take one box.

It might even make sense, having made that decision previously and then been presented with the choice given in the paradox, to really follow through with that decision and actually take only one box, since I lack the self-awareness to know for sure, if I take both boxes, at what point I first began to consider that choice. The only way to be sure that I really am the sort of person now who will take one box is to actually be the sort of person later on who does take only one box, despite the fact that it can’t change anything about the past.

All of this is very powerful psychologically, despite the fact that in the situation actually given, in which I am presented with the choice without ever having had the opportunity to consider it, it only makes sense to choose both boxes. One way to overcome this psychological bias is to reframe the paradox so that it happens to someone else. What do you want this person to do (or have done, since it can now occur entirely in the past)? Reframed this way, it seems much clearer to me that taking both boxes is the only reasonable thing for someone to do.

I think you’ve nailed it. Any rational chooser wants, more than getting that last thousand, for the million box to be filled. So there is a strong temptation to try to take an action that “will fill the box.” The problem being that, at the time of the choice in the scenario, it’s too late to do that; it’s already been filled, or it hasn’t. So you have to reconcile yourself with the fact that you can no longer affect the box’s contents, and therefore the rational choice is the one that gets you the extra thousand.

I think a good question to ask is: what if both boxes were transparent? Then the chooser would know, either way, whether there was a million in there or not. Would any reason remain to leave behind the $1000 box?

I’ve argued that Newcomb’s Paradox isn’t really a good starting point for a discussion of free will. After all, the story seems to presume compatibilism or hard determinism. A good starting point for a discussion of free will would not presume libertarianism false from the outset.

But now I think there’s a bit more to say about free will in connection with this problem than I thought. The connection is as follows: I suspect that taking the position that one should take both boxes (or at least, taking this position for the reasons usually given by those who do take this position) commits one to a Libertarian conception of free will, and to the position that such free will actually exists.

The argument goes like this:

Let’s assume that the thing to do is to take both boxes, and that this is the thing to do because there is no reason to leave behind the extra thousand in Box A.

Let’s also assume that the alien has, on countless previous occasions, made assertions about his subjects’ future choices, and in each case, his assertion turned out to be true.

Hence we can see that, in the past, every person who took only one box got a million dollars, and every person who took both boxes got a thousand dollars.

The truth of the alien’s assertions is either due to a causal connection of some kind between his assertions and his subjects’ actions, or else is not.

If the truth of the assertions is due to some such causal connection, then it is fair to say not only that people who took only one box got a million dollars, but also to inductively generalize to the principle “People who take only one box generally get a million dollars.”

If it is fair to inductively generalize to that principle, then it is fair to conclude “If I take only one box, I will probably get a million dollars.”

This implies that the thing to do is to take only one box, leaving the thousand dollars in Box A behind.

But our first assumption (representing our commitment to the two-boxer point of view) says this is not the thing to do.

Hence it is after all not fair to say “If I take only one box, I will probably get a million dollars.”

So it turns out it is not fair to inductively generalize to the principle “People who take only one box generally get a million dollars.”

This means, in turn, that the truth of the alien’s assertions was not due to any causal connection between his assertions and his subjects’ actions.

If there is no such causal connection underlying the truth of the alien’s assertions, then the truth of the alien’s assertions is merely coincidental.

If the truth of the alien’s assertions is merely coincidental, then he has no predictive power.

Hence, no being could exist which both has the power to predict human actions and also embarks on the kind of procedure described in the Newcomb example of presenting people with one-box/two-box type choices etc.

So every being either has no predictive power, or never enacts a Newcomb-style game, or both. In other words, if a being has predictive power, it cannot enact a Newcomb-style game. (Here is what it means to enact a Newcomb-style game: it is to believe one has made an accurate prediction of a subject’s action, to communicate this belief to the subject, and to present a choice to the subject of the kind discussed in the Newcomb scenario.)

But any being could enact a Newcomb-style game.

Therefore, no being can have the power to extremely accurately predict human actions.

That’ll do for starters. I sense holes in several places, but I’m fairly confident of the general idea. Commitment to the two-boxer view seems to commit one also to the view that human actions are in principle unpredictable, and this at least “smacks of” a libertarian view of free will. (Where does that expression “smacks of” come from, and how can I get rid of it, and what can I replace it with?)

-FrL-

In terms of the political term “libertarian”, it seems odd to me to say that libertarians don’t think human actions are predictable. After all, they’re basically the ones behind all that business about praxeology.

But I guess the philosophical term “libertarian” has a rather different meaning.

Sorry, I should have clarified that “Libertarianism” actually has (at least) two distinct meanings. One is the political meaning you reference. The other refers to a particular position on free will, namely, the position that free will requires that human actions not be causally determined. You can be a libertarian about free will and also believe there is no such thing as a free will. (You’d probably be a hard determinist in such a case.) But generally, if someone is referred to as a libertarian in the context of the free will debate, it is meant that they hold the libertarian view as to the nature of free will, and also that they believe we do in fact have such a free will.

-FrL-

Oh no, you caught me in the middle of an edit. Now you’ve captured my soul, or something!

I would note here that your disproof of determinism (or whichever terms you’re using) specifically relies on reverse causation, and thus only disproves guaranteed perfect prediction. It doesn’t mean that people are totally random and unpredictable; some people are very predictable, especially if you know a lot about them. It just means that no prediction can be ‘unbreakable’ to the degree that it must hold even if the agent under discussion has reason to act in contradiction to it.