That ignores any calculations you could make about the odds of whether the predictor was right or not.
Explain how your behaviour at the time of the decision is going to have an impact on the predictor’s prior actions. His prediction is made. It’s either right or wrong. Choosing the single box at this point cannot make it any more likely that he predicted you would do so when he made his prediction in the past - unless you think that backwards causation is possible.
I’m not saying that your behavior at the time of decision somehow reaches back to affect the prediction. I’m saying that the prediction includes your behavior at that time. Through whatever means, and they don’t have to include actual backwards causation, the predictor believes that you will waffle, or not waffle, or think carefully, or make a snap decision, or whatever it is, and then make the choice that you make.
Okay, so let’s suppose that the predictor is using psych profiles and happens to be really, really good at what he does. He analyzes your profile and predicts you’ll just take one box, and so puts a million bucks in it. You choose just the one box, and get a million bucks. What I say to you is that if you’d chosen both boxes, you’d have had one million one thousand bucks instead of just a million. Given the situation you were actually in, i.e. choosing between $1000000 and $1001000, you chose the worst possible outcome.
The same predictor analyzes my profile and predicts that I’ll take both boxes, and so leaves the second box empty. I choose both boxes and come away with $1000. If I’d followed your advice and taken only the single box, I’d have come away with nothing. Given the situation I was actually in, i.e. choosing between $1000 and $0, I chose the best possible outcome.
So now, please explain to me how choosing just the one box is going to make me better off. It made you worse off, and would have made me worse off. You came away with way more money than I did, but that was because of what the predictor predicted, and not what you actually did. It’s unfortunate for me that the predictor predicted I’d choose both boxes, but at the time of my decision I can’t do anything about that. I may as well take the consolation prize. It’s fortunate for you that the predictor predicted you’d only take one box, but at the time of your decision you can’t sabotage your million bucks since it’s already in the box, so you’re leaving a free thousand bucks lying there that you could have taken without penalty. There’s no way for either of us to actually move between the alternatives. I can’t get myself onto your side of the payout regardless of what I do. You can’t do anything that would dump you onto my side of the payout.
People like yourself are looking at this as if there’s a 4-way payout matrix, two outcomes of which are extremely unlikely ($0 and $1001000), and so assume they’re essentially choosing between a thousand bucks and a million bucks. People like myself realize that it’s actually just a 2-way payout matrix - we just don’t know which 2-way payout it is. But regardless of which one it is, we know that taking both boxes will mean we come out ahead. The person making the decision can’t choose a million bucks - only the predictor can do that. The decision is just whether to take the additional grand.
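To put concrete numbers on that dominance point, here’s a minimal sketch in Python. It’s only an illustration of the payoff matrix being described, using the standard dollar amounts from this thread, not a substitute for anyone’s actual argument:

```python
# Standard Newcomb payoffs: box A always holds $1,000; box B holds
# $1,000,000 only if the predictor predicted "one box".
PAYOFF = {
    # (prediction, choice): amount the chooser walks away with
    ("one box", "one box"):     1_000_000,
    ("one box", "two boxes"):   1_001_000,
    ("two boxes", "one box"):           0,
    ("two boxes", "two boxes"):     1_000,
}

# Once the prediction is fixed, taking both boxes pays $1,000 more than
# taking one, whichever prediction was actually made.
for prediction in ("one box", "two boxes"):
    one = PAYOFF[(prediction, "one box")]
    two = PAYOFF[(prediction, "two boxes")]
    print(f"prediction={prediction!r}: one box -> ${one:,}, "
          f"two boxes -> ${two:,} (two-boxing gains ${two - one:,})")
```

Either row shows the same $1,000 gap, which is exactly the 2-way choice being described above.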
Gorsnak, I propose that, for the sake of simplicity, we restrict ourselves to the version of Newcomb’s paradox in which the predictor has historically always been correct throughout arbitrarily many previous rounds.
First, determinism and backwards causality are not mutually exclusive. On the contrary, I would grant that a universe with perfect predictors would be a deterministic one. Therefore, pointing out that a certain body of evidence supports the hypothesis of determinism in no way obviates its supporting the hypothesis of backwards causality. (Unless I don’t understand what you mean by “determinism”, in which case, please elaborate.)
Second, I would nonetheless deny that your calculator example counts as backwards causality. It certainly doesn’t count as the kind of causality that I defined in post #85. This is because there are relevantly similar situations (perhaps merely possible ones) in which your calculator solves 4*16 without your having predicted it. Intuitively, I would only say that the calculator’s solution caused your prediction (according to the definition in post #85) if I could make you make that prediction by making your calculator compute that solution. But, as a practical matter, this would be an unreliable method. For example, you might be distracted, or you might not be able to see what I entered, and so on.
I agree, but it is nonetheless only a finite amount of experience. Even the current sum total of human experience will be a fraction of the total experience we will have had after observing enough rounds of the game.
Do you go so far as to say that? If so, on what basis? How would you define this metaphysical Kantian causation in a way that rules out backwards causation without begging the question?
I explained above why I don’t think that I need to rule out determinism (assuming that I understand what you mean by that). As for your requirement of “controlled circumstances”, suppose that I was able to choose what any sample of the prior players selected in their rounds, and that I still observed perfect correlation between the predictor’s prediction and the player’s selection. I don’t think this modification changes the nature of Newcomb’s paradox significantly. Would you agree that I would then have “controlled circumstances”?
Look, I’ll make this simple for you.
If backwards causation explains the predictor’s accuracy, my analysis of the Newcomb problem is wrong, and yours is right. If the predictor is making the prediction by looking into the future or by some other supernatural means, whereby your current actions can have a causal impact on the predictor’s past behaviour, then you should absolutely choose only the one box. However, the problem becomes completely uninteresting then because it’s now a straightforward choice between a million bucks and a thousand bucks, and the paradox (i.e., is it sometimes rational to behave irrationally) doesn’t arise.
However, perfect prediction in itself is not evidence for backwards causation, as there are alternative explanations for perfect prediction (such as determinism) that do not contradict virtually all of human experience.
Observed perfect prediction of the sort in Newcomb’s paradox would be evidence of backwards causation as I defined it in post #85. That would be the case in the absence of any speculation about how the prediction is being accomplished, whether it be supernatural prescience, time travel, or Laplacian-demon style computation.
Now I gather that you do not accept my definition. That’s okay. It’s just a definition, to be accepted only if you find it convenient. However, since you yourself use the word “cause”, you must understand the word to mean something. Would you be willing to say a few words about what that meaning is, or point me to where I can read about it in the philosophical literature?
If you are willing to discuss it, would you explain what your evidence conditions are for inferring when causation is happening? In particular, how do you distinguish it from mere correlation? Do you take “cause” to refer to something that cannot, in principle, work backwards in time? If not, what hypothetical observation could serve as evidence for backwards causation?
Your criterion is “I cause a D-event if I willfully perform an action A such that, in all relevantly similar situations (real or merely possible) in which A is performed, a D-event occurs.”
First, we have to ditch the “willfully” bit, because you surely cannot be restricting causation to conscious agents. Causation has been going on since the birth of the universe, and did not only get going after life evolved.
I’m not quite sure how to read the “relevantly similar situations” clause, and your “it means whatever I want it to mean” bit appears to give you enough wiggle room to drive a truck through.
If I disregard the clause, you’re merely stipulating perfect correlation - there are no instances in which A occurs but the D-event does not. Besides the obvious problem of correlation due to a common cause of both events (your theory has each of the effects causing the other), since you’re allowing the possibility of backwards causation, anywhere you have an event X which causes a subsequent event Y, your definition says not only that X causes Y, but that Y causes X as well. For example, in all cases in which the light in my bathroom has come on, the circuit leading to it has been closed. I would say that closing the circuit causes the light to come on, but you would seem to have to say in addition that the light coming on causes the circuit to close, since there are no cases of the light coming on in which the circuit wasn’t closed.
But perhaps your “relevantly similar situations” clause is supposed to rule those situations out. Offhand I’m not sure how it can. That the lit bulb is invariably preceded by a closed circuit would seem to imply that the bulb backwardly causes the circuit closure.
So anyways, if I’m reading your proposal correctly, then yes perfect correlation implies backwards causation per your definition, because perfect correlation is what you’re stipulating to be backwards causation. If you intend for there to be more to it, I’m not getting it.
Now, on the general subject of causality in philosophical literature…errr… kind of a large subject. You’ve got your Lewisian counterfactual school of thought - A causes B if it is the case that had A not occurred, B would not have occurred. You’ve got your probabilistic school of thought - A causes B if A makes B more likely to occur. Most writing on causation simply assumes it’s forwards-only. Hume and Kant both built it right into their definitions.
Here are some entries in the Stanford Encyclopedia of Philosophy to get you going:
Counterfactual theories of causation
Probabilistic theories of causation
Backwards causation
This is a good point. The exact wording Tyrrell gave may outline not causation in general, but willful causation specifically.
Huh? Surely someone as well acquainted with the philosophical literature as you are can see that Tyrrell’s proposal wasn’t symmetric. (Note that “There are no instances in which A occurs but the D-event does not” is not symmetric, in precisely the same way that the material conditional “A implies D” is not symmetric). For example, it may be the case that a certain button, if pressed, always results in a door opening, but the door can also be opened manually. In this case, we would have that pressing the button causes the door to open, but we would not have that the door’s opening causes the button to be pressed, since there are situations where the door opens without a corresponding button press.
Though, of course, it’s not explicitly built into the two definitions you just outlined. P(there is a fire at time T) < P(there is a fire at time T | there is smoke at time T+1), say. By your second definition, would this not tell us that smoke causes fire in a “backwards” manner?
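To make that concrete, here’s a toy numeric illustration; the joint probabilities are invented purely to show how a bare probability-raising criterion, with no temporal clause built in, runs backwards:

```python
# Hypothetical joint distribution over (fire at time T, smoke at time T+1).
# The numbers are made up solely for illustration.
joint = {
    ("fire", "smoke"):       0.09,  # fires almost always produce smoke
    ("fire", "no smoke"):    0.01,
    ("no fire", "smoke"):    0.05,  # some smoke arises without fire
    ("no fire", "no smoke"): 0.85,
}

p_fire = sum(p for (f, _), p in joint.items() if f == "fire")
p_smoke = sum(p for (_, s), p in joint.items() if s == "smoke")
p_fire_given_smoke = joint[("fire", "smoke")] / p_smoke

print(f"P(fire at T)                = {p_fire:.3f}")
print(f"P(fire at T | smoke at T+1) = {p_fire_given_smoke:.3f}")
# Conditioning on the later smoke raises the probability of the earlier
# fire, so a purely probabilistic criterion would have the smoke "causing"
# the fire backwards in time unless a temporal direction is added.
```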
Of course. Which is why I didn’t choose as my example an effect which admits of multiple causes. If I had said my light only comes on when the switch is flipped, you might have said, “Aha! But what if someone removes the bulb and applies voltage directly? Or what if someone bypasses the switch with a jumper?” But you’ll note I said that the light only comes on when the circuit leading to it is closed - there are no alternate causes for an electric light coming on.
You’re right that effects with many possible causes won’t “cause” their causes under Tyrrell’s definition, but any effect with only one possible cause will.
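A quick way to see both points at once is to treat the “relevantly similar situations” as a list of toy scenarios and test the definition mechanically; the scenario lists here are invented purely for illustration:

```python
# Toy check of the definition "A causes D iff, in every (relevantly similar)
# situation where A occurs, D also occurs", against the two examples above.

def causes(a, d, situations):
    """True iff every situation containing event a also contains event d."""
    return all(d in s for s in situations if a in s)

# Button/door case: the door can also be opened manually, so there are
# situations with "door opens" but no "button pressed".
button_door = [
    {"button pressed", "door opens"},
    {"door opens"},              # opened manually
    set(),                       # nothing happens
]
print(causes("button pressed", "door opens", button_door))  # True
print(causes("door opens", "button pressed", button_door))  # False

# Light/circuit case: the light has only one way of coming on, so the
# definition runs in both directions.
light_circuit = [
    {"circuit closed", "light on"},
    set(),
]
print(causes("circuit closed", "light on", light_circuit))  # True
print(causes("light on", "circuit closed", light_circuit))  # True
```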
If I choose both boxes, or if you choose one box, then the predictor did not analyze our profiles correctly and made a mistake, or perhaps the universe is not fully deterministic and he cannot be 100% accurate. No backwards causation, no paradox.
To anyone who thinks choosing one box is the obvious answer (under any setup), here’s another way to think about it. I have to make the choice, and have no way of seeing what is in the boxes. But you do know what is in them, because you were there when the predictor set them up. Assume that you want me to get the most money. What do you advise me to do?
I think you’re confused about what the paradox is. It is not the sort of paradox that happens in science fiction when a character kills his own grandfather: the paradox is that the rational choice is not (apparently) the best choice, and this is true whether or not the predictor makes a mistake.
An observer communicates the outcome of the prediction, whatever it is. You might as well get rid of the observer and make the boxes transparent if you’re going to allow the chooser to know the prediction ahead of time, because this changes the problem. In fact, I believe it renders the problem nonsensical. The chooser will always take both boxes if he can see exactly what is or is not in them. The predictor in this case is forced to place the money in such a way that contradicts his own prediction, which is plainly stupid. Instead of a perfect predictor, you have a perfect idiot.
I say the choice of taking both boxes is not the best choice - only the safe choice, the choice that guarantees you second place, as it were. You say that’s the rational choice, I don’t.
In Alan Smithee’s proposal, I don’t think you tell him what’s in the box, merely what you think he should do. And naturally, if you have his best interests at heart you’ll always tell him to take both boxes.
Although, of course, in that case the Being, knowing that your friend had a peek, would never put $1m in the second box so the whole exercise would be pointless.
Does the predictor know you saw where he put the money? Maybe you were hiding. Maybe after he left, you invented an x-ray-type device that can penetrate the boxes. Maybe he predicted all this and maybe he didn’t. None of this forces him to contradict his own prediction–he can always put in only the $1000.
The point is that someone with knowledge of the situation will ALWAYS advise choosing both boxes. Why would you act differently out of ignorance?
Suppose you were getting ready to open just the second box. Suddenly I run up and say, “Wait! I know what is in the boxes! You should open both!” Do you believe me? Do you change your mind and open both boxes? Here’s the thing–whether or not I’m telling the truth that I saw what is in the boxes, I’m right that you will make more money by opening both boxes. Unless you think that the money inside exists in some Schrödinger’s cat quantum indeterminacy until it is seen by someone other than the predictor, nothing changes because I saw it. So if I showed you an x-ray of the second box with a million dollars in it, would you still turn down the thousand you know is there? Would the million disappear? What if I showed you an x-ray of the second box empty? Would the million dollars appear out of nothing if you ignore me? Or do you think something about having a predictor who has always been right in the past (all the problem stipulates) makes this situation impossible?
What if the predictor has done this experiment millions and millions of times and has been wrong twice? Would you suddenly change your mind if I ran up and told you he had made mistakes before? He’s still got an almost perfect record.
Suppose we abandon the predictor entirely, and just focus on the boxes. Imagine that boxes frequently appear in some location, and that time and time again, without fail, when both boxes are opened, the sum amount of money in the two is small, while when merely one box is opened, the amount of money in it is very large. The mechanism by which this happens is perhaps unknown, but this observation has continued to hold over a great many repeated trials, with no counterexamples.
Would it not be fair to, after a sufficiently large number of such observations, in the usual manner of scientific induction, conclude that there is a law of the universe, the fine details perhaps not yet established but the overall results clear, that opening two boxes results in very little money but opening one box results in quite a lot? And to, therefore, decide that the best choice, indeed, even the rational choice in this situation, would be to open only a single box?
If no amount of such observations could establish such a scientific law, why not? What causes this situation to differ from the establishment of any other scientific result (including a great many previously anti-intuitive ones)?
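For what it’s worth, here’s a minimal sketch of that inductive comparison; the observation log is entirely hypothetical, standing in for whatever record of prior rounds has actually accumulated:

```python
# Hypothetical observation log of past rounds: (boxes opened, total money
# found). The trial counts and amounts are invented for illustration.
observations = (
    [("one box", 1_000_000)] * 10_000 +   # single-box openings: large payout
    [("two boxes", 1_000)] * 10_000       # double-box openings: small payout
)

def average_payout(choice):
    """Average money found across past trials where that choice was made."""
    payouts = [amount for c, amount in observations if c == choice]
    return sum(payouts) / len(payouts)

# Ordinary induction from the record, with no theory of the mechanism:
for choice in ("one box", "two boxes"):
    print(f"{choice}: average observed payout ${average_payout(choice):,.0f}")
```

On a record like that, ordinary induction favors opening a single box, however the trick is being done.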
If we think of the contents of the boxes as indeterminate-until-opened, then there should be no problem theorizing a law like the one you describe. But if it’s supposed to be that the contents of the boxes are determined from the moment the boxes appear, then postulating a law like the one you describe incurs all the same problems we’ve already been talking about concerning backward causation.
Of course I’m not so convinced there’s anything terribly wrong with backward causation.
Still, in a case like the one you’ve described, we might as well say that the contents of the boxes cause the choice, rather than the other way around. So we can avoid backward causation in this example, though at the apparent cost of the chooser’s free will.
But of course that makes the example importantly disanalogous to the original Newcomb scenario, since the alien isn’t supposed to be causing my choice but only predicting it.
So it looks like if we want to keep your example analogous to the original, it needs to include an element of backward causation. But then, as several others have remarked, if we allow for backward causation as the mechanism by which the alien makes his prediction, then of course we should pick only one box. The problem arises, rather, when what’s involved is not backward causation but simply an uncanny but perfectly natural predictive power. And predictive powers are exactly what you were hoping to abstract out of the scenario. Looks like they may be essential to the scenario, though.
-FrL-
This is nicely put. I’d respond to you in this situation by saying “Right now, because I fully intend to take only box B, I’m almost completely certain that what you saw was a million in box B and a thousand in box A. But I know that if I were to pick up both boxes, I would then–with good reason!–become certain, instead, that there was no money in box B after all. Meanwhile, if I only pick up box B, I’ll continue–with good reason!–to be certain there is a million in Box B. I’d rather there be good, sound reasons to think there’s a million in Box B than that there be good, sound reasons to think there’s nothing in Box B, since whatever I do next, I’m certainly going to be walking away with the contents (or lack thereof) of box B. So, to maintain my situation as one in which there are good reasons to think there are a million bucks in Box B, I hereby choose only box B.”
Weird, maybe? But I think basically right.
-FrL-