I agree that my criticisms don’t apply to the original Newcomb scenario in the OP. I was talking about the variation proposed by ForumBot’s friend.
That was me, I think.
I wasn’t sure the universe would preserve causality. Rather, I was sure that the predictor in the universe was perfect. That is, after all, the stipulation of the hypothetical scenario. The scenario is described as one in which a perfect predictor exists.
So, if, in this particular hypothetical scenario in which it is stipulated that the predictor is perfect, the predictor predicts X, then, in this hypothetical scenario in which it is stipulated that the predictor is perfect, it must be the case that X occurs. If some reason or way can be found by which it is also the case that X cannot occur, then we have shown the hypothetical scenario to be impossible. But that has not been accomplished.
If we stipulate the predictor to be perfect, then if the predictor predicts I will take one box, then by stipulation I do indeed take one box. Who knows why. It may be through no known causal means that I end up taking one box. But the predictor made his prediction, and the scenario stipulates that he is perfect.
You don’t have to get exotic to figure out how it can be that I end up picking one box. Maybe someone ends up holding a gun to my head threatening to kill me unless I pick only one box. Maybe I suddenly have a crisis of faith leading me to change my mind about how many boxes to pick. There are any number of ways to understand how I could come to pick only one box despite a previous intention to pick two boxes.
Since the scenario stipulates the predictor is perfect, we have to conclude that, in the scenario as stipulated, one of these intervening factors obtains.
-FrL-
Apologies chaps. This is why we need avatars on this forum…
I simply said “consider this paradox”. If the paradox could never happen then Frylock could well have said that. But (s)he didn’t; instead, (s)he stated that I would be unable to choose the opposite of what the predictor had said.
Note that my paradox can be considered equivalent to the Grandfather Paradox, and Frylock’s solution equivalent to the “you would be unable to kill your grandfather” hypothesis. At this time, I believe, this hypothesis is considered neither definitive nor equivalent to saying “the hypothetical situation could never happen”.
Right, but the paradox stipulates other things too, and it may be the case that these stipulations are inconsistent.
The trouble with saying that the predictor is perfect is that it’s not enough here; the method by which the predictor makes her prediction is relevant.
This is because there is a self-reference here; the predictor is trying to predict what you’ll do, but what you’ll do depends on what you think the predictor will predict and so on.
It is my opinion that depending on how we define the predictor, this paradox is either nothing of the sort or basically equivalent to the grandfather paradox.
That’s right!
And my point is, the stipulations aren’t (or at least haven’t been shown to be) inconsistent.
What other stipulations do you find in the hypothesis, and how do you think they are inconsistent with the perfect predictor?
I’m losing track of who’s defining the predictor how.
Here’s the way I’d state the problem:
-FrL-
I think that one’s answer to Newcomb’s paradox depends on how one distinguishes causation from mere correlation. Everyone agrees that events satisfying the description “the player chooses both boxes” correlate with events satisfying the description “box B2 is empty”. The question is, do events of the first type cause events of the second type? If the player’s choosing both boxes causes box B2 to be empty, then the player obviously should choose only box B2. Otherwise, the player should choose both boxes, since, by definition, doing so will have no effect on their contents.
Other posters are thinking along these lines, but I believe that some of them are taking the wrong approach to determining causation. They are considering the various mechanisms by which the predictor might be arriving at its predictions. This does not seem to me to be a useful approach. The player does not have access to any information about the predictor’s method. Therefore, as the player, I cannot use that information to help me to decide what to do.
To be useful, we should use a standard of causation that relies only on the kind of information available to the player. For my part, I’m inclined to take the sort of pragmatic approach that I lay out below. The upshot is that, according to this approach, the player’s choice causes B2 to be empty or full. Though this implies causation working backwards in time, we only have inductive evidence for our belief that causation always works forwards in time. It is only a tentative conclusion that should be amended if there is enough countervailing evidence. In the scenario described by Newcomb, the player has enough evidence to conclude that causation is working backwards in time. So, if I were playing the game, I would choose to take only box B2.
I’ll now spell out what I understand by “cause”. Call an event satisfying a description D a D-event. For example, a round of the game in which box B2 is empty is a “the box B2 is empty”-event. I say that I cause a D-event if I willfully perform an action A such that, in all relevantly similar situations (real or merely possible) in which A is performed, a D-event occurs.
This definition is intended to capture the intuitive idea that A causes B if, whenever I can make A happen, I can make B happen.
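For what it’s worth, here is a rough first-order rendering of that definition; the notation (including $\mathcal{S}_{\mathrm{rel}}$ for the set of relevantly similar situations, real or merely possible) is just my own shorthand, not anything from the problem itself:

```latex
% Rough sketch: I cause a D-event iff I willfully perform some action A
% such that, in every relevantly similar situation s in which A is
% performed, a D-event occurs in s.
I \text{ cause a } D\text{-event}
  \;\iff\;
  \exists A \,\Bigl[ \mathrm{Willed}_I(A) \;\wedge\;
    \forall s \in \mathcal{S}_{\mathrm{rel}}
      \bigl( \mathrm{Performed}(A, s) \rightarrow \mathrm{Occurs}(D, s) \bigr) \Bigr]
```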
A few notes of elaboration:
(1) My standard for saying that I “willfully performed” an action is only that a certain psychological attitude accompanied the action. I take no position on whether that attitude itself was caused in any sense.
(2) The scope of “relevantly similar situations” is determined by convenience. It means whatever I want it to mean in a given context.
(3) The criterion for causation above is not the only one that I would accept, but I claim that it applies in this case.
Note that, to apply my definition, I need to be able to justify an assertion that begins with “in all relevantly similar situations (real or merely possible) . . .”. But such assertions cannot be directly confirmed, because we cannot observe merely possible situations. However, such an assertion can be justified using inductive evidence, which is gathered through direct observation. (I don’t claim to have solved the Problem of Induction, but if you won’t grant the validity of induction, then there is no point in even raising something like Newcomb’s paradox.)
Now if I imagine that I am the player, and that my turn has arrived, I apply the above standard as follows. All the preceding rounds that I observed were situations relevantly similar to my own, and there were so many of them that I can use induction to generalize to the conclusion that, in all situations relevantly similar to my own, the box B2 is empty if and only if the player chooses both boxes. Therefore, if I were to choose both boxes, I would cause B2 to be empty. Conversely, if I were to choose only box B2, I would cause it to be full. Therefore, I should choose only box B2.
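If it helps to see that induction in miniature, here is a small Python sketch of the observed rounds. The function name, the predictor_accuracy parameter, and the seeding are my own illustrative assumptions; the dollar figures are the ones used in this thread. With the accuracy set to 1.0, every simulated “one box” round ends with B2 full and every “both boxes” round ends with B2 empty, which is exactly the biconditional I generalize to above.

```python
import random

def play_round(choice: str, predictor_accuracy: float = 1.0) -> int:
    """One observed round of the game (illustrative sketch only).

    The predictor guesses the player's choice and is right with
    probability predictor_accuracy. Box B1 always holds $1,000;
    box B2 holds $1,000,000 only if the prediction was 'one'.
    """
    correct = random.random() < predictor_accuracy
    prediction = choice if correct else ('two' if choice == 'one' else 'one')
    b2_contents = 1_000_000 if prediction == 'one' else 0
    return b2_contents if choice == 'one' else 1_000 + b2_contents

# With a perfect predictor, B2 is full in every 'one' round and empty in
# every 'two' round -- the biconditional the player generalizes to by
# induction from the rounds they have watched.
random.seed(0)
for strategy in ('one', 'two'):
    payoffs = [play_round(strategy) for _ in range(100_000)]
    print(strategy, sum(payoffs) / len(payoffs))
```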
PS: Thanks for the kind words, Priceguy :).
If this being is God, and God had told me this because he wanted me to have the $1M, then it would be so. God has the book of human destiny and already knows the outcome.
I think this sentence is key. It says that the being has made the prediction based specifically on you; he is not making a general-purpose prediction for the population. Perhaps he has been monitoring you with super-spy cameras and mind-reading devices for your entire life. With enough background, he could probably make a pretty accurate prediction with no supernatural forces needed.
Heck, I think I could make a reasonable prediction about which boxes a person would pick provided I knew them well enough. Someone who was very thoughtful and logical–box 2. Someone who was rash and impulsive–both boxes. Irrational compulsive gambler–both boxes.
So I think for this problem, anyone who attempts to figure out the best solution will pick box 2, simply because they are the type to figure it out and the being will know that from his observations. For example, if the being picks Raymond Smullyan, author of the logic book What Is the Name of This Book?, for this puzzle, it’s a pretty good bet he’ll pick box 2.
If someone is not the type to figure out the best solution, you’d have to look at their past behavior. If you presented it to Britney Spears, I would guess she’d pick both boxes, since she always wants it all.
Just to be clear: I’ve never stated that attempting to work out how the predictor’s ability works is a good strategy for playing the game.
What I’ve said is that whether this game truly is paradoxical depends upon the definition of the predictor. And if it’s paradoxical, it’s pointless trying to work out a strategy. It would be like saying: “You shot your grandfather before you were ever born. What should you do next?”
Fair enough.
I grant that additional properties could be given to the predictor that would make its definition self-contradictory. I think that that happened in ForumBot’s friend’s variation, for example.
Do you believe that the properties given in the OP already suffice to make it self-contradictory? I don’t see it.
:smack:
Oh, sorry, I’ve realised that one of my earlier posts suggests that knowing the definition of the predictor can be used in determining a strategy.
It can, but yes, the original problem tells us nothing about the predictor other than that they make perfect predictions.
Doh – but I stand by the rest of what I said in my above post.
There is some debate over what exactly constitutes a paradox, but to my knowledge most agree that something like this is in the right area: A paradox arises when there are two apparently sound arguments leading to incompatible conclusions.
If there’s a paradox in the area of the Newcomb problem, it is in this: Two apparently sound lines of reasoning as to “what you should do” lead to incompatible conclusions.
-FrL-
Without bothering to explain to all of you one-boxers why you’re wrong about what the rational choice is, allow me to explain why the Newcomb Paradox is considered a paradox. According to orthodox rational choice theory, a rational actor will always choose to maximize her utility, and amongst the things that entails is that she will always choose a dominant strategy when one is available. Since our rational actor cannot have any causal impact on the predictor’s behaviour, she should take both boxes - regardless of what the predictor has done, she will be $1000 ahead if she does so.
The paradox is that faced with the Newcomb situation, irrational actors come out ahead of rational actors. (Note that you don’t need an infallible predictor for this to be the case, so the stuff about predestination and free will is a red herring. Given the dollar totals, any predictor with noticeably better success than flipping a coin will result in one-boxers doing better on average than two-boxers.) The paradox is that the rational choice leads to lower expected utility than the irrational choice, and yet by definition the rational choice is always that with the greatest expected utility.
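To put numbers on that parenthetical claim, here is a quick sketch of the expected payoffs as a function of the predictor’s accuracy p, i.e. the probability that the prediction matches the actual choice. The model and the names are mine, not part of the problem statement; the dollar figures are the ones used in this thread. One-boxing pulls ahead as soon as p exceeds 1,001,000 / 2,000,000 = 0.5005, i.e. barely better than a coin flip.

```python
def expected_payoff(choice: str, p: float) -> float:
    """Expected dollars, where p is the chance the prediction matches
    the player's actual choice (an illustrative model, not the OP's)."""
    if choice == 'one':
        # $1,000,000 only when the predictor correctly foresaw one-boxing.
        return p * 1_000_000
    # Both boxes: $1,000 for sure, plus $1,000,000 when the predictor
    # wrongly foresaw one-boxing.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.51, 0.75, 0.99):
    print(p, expected_payoff('one', p), expected_payoff('two', p))
# Break-even at p = 1_001_000 / 2_000_000 = 0.5005.
```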
Most philosophers and economists who write in the field of rational choice theory stick to their guns and say you should take both boxes, based on the impossibility of backwards causality. Nothing you do at the time of your decision can have a causal impact on the prediction, so you should take both boxes based on the guaranteed higher payout. This view gets called “Causal decision theory” and you can google that phrase for entire books written on the subject.
A minority do argue for taking just one box. This position gets called “Evidential decision theory” by the causal guys, who argue that the reason taking just one box is so psychologically compelling (something they don’t dispute; in fact, plenty of people I know who have been completely convinced by the arguments for causal decision theory will still say things like, “but if I were ever faced with that decision I’d still just take the one box”) is that choosing one box gives you a sort of evidence that the predictor placed a million bucks in the opaque box. By placing yourself in the group that correlates with getting a million bucks, you make it seem more likely that you’re going to get a million bucks, even though placing yourself in the group that correlates with getting a million bucks cannot possibly have any causal impact on whether you actually get a million bucks.
The strongest argument on the other side of the fence is that the one-boxers will, as a matter of fact, end up with more money. This is commonly known as the “If you’re so smart why aren’t you rich?” argument. My favourite response to this is something Allan Gibbard said: “If someone [i.e., the predictor] sets out to reward irrationality, it shouldn’t be a surprise that rationality doesn’t do so well.” I can’t for the life of me figure out why the Newcomb problem was being discussed, since, based on my rather hazy memory of the statement, it must have been made in a course on meta-ethics.
I just want to say, this was an absolutely fantastic post. It captures exactly how I feel about the situation, but explains it more clearly than I think I ever could have.
Thanks, Indistinguishable :).
How do you justify that claim? As far as I can see, it is because you are convinced that causality always works forwards in time. But do you have any basis for that belief other than mounds of empirical evidence? Suppose that we can observe arbitrarily many rounds of the game, and each of them gives all the appearance of backwards causality. Eventually, we will have made so many observations that the new evidence will outweigh whatever prior evidence we had for thinking that backwards causality can’t happen, no matter how much prior evidence there was.
Perhaps you maintain that backwards causality is impossible by definition. But definitions are conventions chosen for convenience. A definition that rules out backwards causality might not be convenient in a world with perfect predictors.
That’s what I was saying earlier. Choosing both boxes only guarantees that you win something. It does not guarantee that you win the most that it is possible to win. The situation only seems to be a paradox because of the confusion between mathematically precise terms and everyday language. For instance, why is choosing only one box irrational if you have strong evidence that the predictor makes correct predictions? It’s only irrational in the sense that there is some risk that the predictor was wrong and you choose an empty box and get nothing. That amounts to defining rational as “eliminating or minimizing potential losses whenever possible”. I say it is irrational to ignore the predictor’s past performance and the strong possibility of coming out ahead in the long run.
I’m afraid the burden of proof is on you in this case, my friend. Extraordinary claims and all that. If you want to postulate a circumstance in which hitherto unknown evidence surfaces for backwards causation and then say that in that case the orthodox response to the Newcomb problem is invalidated, I would tend to agree. If you want to argue that a Newcomb predictor’s success all by itself constitutes evidence for backwards causation, I would have to disagree. Most formulations don’t make the predictor infallible, merely an extremely good judge of character. It’s not terribly difficult to predict our friend’s behaviour with much better than random success, so merely being fairly highly correlated isn’t anything that would cry out for explanation.
If on the other hand the predictor is always right, every single time, I would propose that this is much better evidence for determinism than it is for backwards causation. I can infallibly predict the answer my pocket calculator will give when I have it solve 4*16. Does that mean that the future actions of the calculator’s circuitry are causing my answer? Of course not - it merely means that the calculator’s circuitry behaves in a perfectly predictable, deterministic fashion.
The sum total of human experience to date tells us that causation only acts forward in time. Some might even go so far as to say this is some sort of necessary truth, a result of the way in which our minds organize our experience. Postulating that you should choose the single box in the Newcomb situation merely on the strength of a high (or even perfect) correlation between choosing one box and getting a million bucks doesn’t strike me as even beginning to be evidence for backwards causation. Until you can either rule out the determinism explanation, or provide a positive demonstration of backwards causation under controlled circumstances, I don’t see any reason why we should even entertain the notion.
Also, all this business about causation is a bunch of bullshit. The only relevant question is whether you think the predictor is right. What difference does it make if he saw the future through a crystal ball or did an exhaustive psychological writeup on every moment of your existence up to the present?
It’s irrational because IF the predictor has predicted that you would only take one box, then there is $1000000 in the opaque box. That money isn’t going to vanish when you choose both boxes. It’s in there. So by choosing just the one box, you’re passing up a payout of $1001000 in favour of the lesser $1000000. That’s what’s irrational.
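Laid out as a simple payoff table (my arrangement, using the thread’s dollar figures), the point is that with the contents of the opaque box already fixed, taking both boxes is worth exactly $1,000 more in either state:

```python
# Dollar payoffs by what the predictor placed and what the player takes.
payoff = {
    ('B2 full',  'one box'):   1_000_000,
    ('B2 full',  'two boxes'): 1_001_000,
    ('B2 empty', 'one box'):   0,
    ('B2 empty', 'two boxes'): 1_000,
}

# Holding the contents fixed, two-boxing gains $1,000 either way --
# which is what makes it the dominant choice.
for state in ('B2 full', 'B2 empty'):
    print(state, payoff[(state, 'two boxes')] - payoff[(state, 'one box')])
```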
I have not been advocating for backwards causation, Gorsnak. My previous post was being written before yours was submitted.
No. You aren’t betting on what prediction the predictor made. I agree that’s how it seems, psychologically, and that’s what trips most people up. It feels psychologically as if by choosing both boxes you’re gambling against the odds that the predictor was wrong when he chose whether to put the million bucks in. But that’s not the case. He’s made his decision, and the money is either in the box or it isn’t. If the million bucks is in there and you take both boxes, you’ll get the million bucks just as certainly as if you only take the one box.