I’ll take both; I can always use an extra thousand bucks.
I think that is exactly what a paradox is–it’s an artifact of language and logic systems–and the problematic thing is that people (like me) who watch Star Trek are used to thinking of a paradox as what happens when you travel back in time and kill someone. It isn’t. If you travel back in time and kill someone, either history changes or it doesn’t or time travel isn’t possible to begin with or the universe splits in two or something else, but whatever happens happens and can happen–no paradox. A paradox is what happens when you think about traveling back in time and killing someone and try to figure out what would happen and realize that your premises lead to two or more mutually contradictory conclusions.
begbert2, did we cross post? I already explained why the arguments aren’t about whether the predictor is accurate or not.
Sigh…back to work.
Okay then, what’s the point of contention in all those arguments? Whether the chooser thinks that the predictor is necessarily perfect?
Myself, I think the problem is deliberately phrased to increase the ‘threat’ of the million being gone or ‘disappearing’ if you should be so greedy as to take the second box. (The whole thing really sounds like a frikking parable for sin and salvation, frankly - which is probably not coincidental.) So, consider my ‘extension’ of the paradox, and tell me if it changes the situation any, either logically or regarding probable answers:
As you sit there deliberating on the two boxes, a good buddy of yours comes in the room, and gathers up both of the boxes. “I’m going to make sure there’s no funny business going on here,” he assures you.
He moves to stand behind you, and empties the boxes. (He drops them, and you see the empty boxes fall into your field of vision.)
“Okay buddy, I’ve got all the cash right here in my hand now. None of it’s going anywhere. I’m keeping a close eye and a tight grip on it.”
“Oh, and did you want me to throw away $1000 of this?”
(And Alan Smithee, I think not only did we cross post, I have no idea what your ultimate point is. Unless it’s that the scenario posited is one where rationality is ‘broken’ and thus wearing funny hats and walking backwards is now “in”, in which case I’m back to pointing at the concept of the “accurate predictor” and blaming it for any such breakage.)
What did I just tell you? No, that is not the point of contention.
That’s similar to Alan’s restatement of the problem above, so I’ll fashion my answer to you after my answer to him.
First of all, however, I should note that your extension does change the problem, in that it shows that the alien’s prediction was false. He predicted either that I would choose one box or both boxes, and it turns out I never get a chance to choose at all.
But let’s say that somehow, the alien understands (and I understand the alien to understand) my having the friend discard the $1000 as being the same thing as choosing just one box.
In that case, my answer is that I would tell my friend to discard $1000–specifically the very $1000 he got out of box A. Because by doing so, I ensure that I will continue to live in a world in which there are sound reasons for thinking there were a million dollars in box B. The second I take the money without having the $1000 discarded first, I come to realize I live in a world in which there are sound reasons for thinking there were not a million dollars in Box B.
I would rather live in a world in which there are sound reasons for thinking there are a million dollars in box B–and that I get those million dollars–than in a world in which there are sound reasons for thinking there is no money in box B–and that I only get a thousand dollars.
I can do something to determine which of those worlds I live in.
-FrL-
I don’t see this as an argument against omniscience; I see it as forcing the Predictor to take a bad bet. Here’s another one:
- A cat is fasting all day long (hey, it’s Ramadan, isn’t it?).
- At the end of the day, up to three morsels of food are placed in the cat’s cage.
- If the Predictor predicts the cat will eat nothing, the cat is given one morsel of food.
- If the Predictor predicts the cat will eat one morsel, two morsels are given.
- If the Predictor predicts the cat will eat two morsels, three morsels are given.
But the cat is hungry and is going to eat everything that’s put in front of it, right? So the Predictor can never be right. As stupid as my example is, I don’t see how the version you posted is much better. People are going to grab as much money as is in front of them if nothing can stop them. The Predictor has agreed to a game that he can’t win.
On the other hand, the original problem relies on the chooser not being able to see what is in the boxes. I think the paradox and its resolution rely upon discerning the two contradictory interpretations of what is going on that are operating simultaneously in the chooser’s head.
- Raw physical model. What’s in the boxes is in the boxes. I can only grab what’s there, and no one can stop me from grabbing what’s there. Whether the being is really omniscient or not is irrelevant, and I won’t even consider it.
- Omniscience-is-relevant model. I ought to let the Predictor’s prediction in the past influence my present.
I don’t think the problem is interesting if we merely suppose that the Predictor is a good computer or something else that predicts well but is theoretically fallible. Assuming that there is no theoretical backwards causality, then I think the chooser should choose both boxes.
If we suppose true omniscience, then here’s the key to unraveling this. What is the difference, in effect, between a force taking away the $1M as I grab both boxes and the money never being there in the first place? I say that there is none, unless someone has observed the $1M in the box before the chooser chooses. But this ruins the premise, as the chooser could take both boxes and prove the Predictor wrong, or pick only the $1M box and come up empty–and prove the Predictor wrong. Or the money could really just disappear post-observation, and it really would be an issue of power, not knowledge.
- If there is observation of the contents beforehand, then there is no paradox because the Predictor could be proven wrong. But if there is no observation of the contents beforehand, then there is no difference in effect between a display of omniscience and a display of omnipotence. In that case, there really is no paradox because there is no difference (or way to tell) between backward causality (the Predictor predicted it) and current causality (the Predictor is adjusting the money in the box in real time).
And I do think that takes care of the paradox. You are merely taking somebody’s word that the contents aren’t being manipulated.
By the way, here’s another “extension” of the problem which I think should convince you to take just one box.
Say you have seen this alien play the same game with lots of other people. You have noticed that people who only take one box tend to get a million, and people who take both tend to get only a thousand.*
Don’t you think, then, that when confronted with the same situation yourself, you are more likely to get a million if you only take one box? And doesn’t that mean you should only take one box?
-FrL-
How is this a restatement of the problem? It sounds unaltered.
And, to be very precise, the alien always gives you both boxes. You merely have the option to choose to leave one behind on the table.
If you choose to live in a mental world where box A is a Heisenberg uncertainty box, where your choice makes the million appear or disappear, that’s your problem. (The situation explicitly states that this is not the case, though. In the problem, the money’s either there or it ain’t. At the time you make the choice, your choice is stated to not have an effect on the contents of the boxes.)
The fact that there is a strong correlation between one’s choice and the amount of money placed in the boxes is just as good as saying that one’s choice has an effect on the amount of money placed in the boxes. (If you have faith in the predictor, then you have faith in such a correlation). And this does not contradict the fact that the money’s either there or it ain’t, which is as true and uninformative as saying that you either choose to take both boxes or you don’t.
You should want the situation that has the greatest likelihood of winning the big bucks. And if you have strong reason to believe that the predictor is very good at predicting, then that situation will be the one where you only take one box.
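To put rough numbers on that (the 90% accuracy figure is my own assumption for illustration; the problem only says the predictor is very good at predicting):

    # Back-of-the-envelope expected values for the two strategies.
    # The 90% predictor accuracy is an assumed figure, not part of the
    # problem as stated; the payoffs ($1,000,000 and $1,000) are.
    p = 0.90

    ev_one_box = p * 1_000_000                # the million is there only if he foresaw one-boxing
    ev_two_box = (1 - p) * 1_000_000 + 1_000  # the million is there only if he guessed wrong

    print(ev_one_box)   # 900000.0
    print(ev_two_box)   # 101000.0

On those assumptions, one-boxing comes out ahead. In fact it comes out ahead whenever the predictor is right more than about half the time (p > 0.5005), which is a far weaker demand than near-perfection.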
First of all, if I picked the second box only and there ended up not being anything in it, I would punch the predictor square in the nose. Second, if I picked both boxes and ended up with only 1K, I wouldn’t curse my choice. I would curse the fact that I am so predictable, that despite my debating the issue for 3 hours, the predictor was able to see right through me, guess what I was going to do, and withhold the million bucks.
Since I prefer punching people to thinking of myself as easily predictable, I would choose to open only the second box.
It’s altered because the original doesn’t say that people who choose both boxes generally get a thousand, and those who choose one generally get a million. That is an implication of the problem as stated (I think*) but it is not stated in the problem.
Your reply doesn’t seem to engage my position. Let me ask you this: Do you think it is true that people who choose one box generally get a million, and people who choose two boxes generally get a thousand?
If the answer is “No,” then why not?
If the answer is “Yes,” then do you think you yourself are more likely to get a million by choosing one box and more likely to get only a thousand by taking both boxes?
If the answer to that is “No,” then why not, given that you’ve already acknowledged that, in general, people who take one box get more than people who take two boxes?
If the answer is instead “Yes,” then do you think you get more money by taking one box than you do by taking two boxes?
If the answer to that is “No,” then why not, given that you’ve already acknowledged that you are more likely to get a million dollars if you take only one box?
If the answer to that is “Yes,” then do you think the best strategy is to take one box?
If the answer here is “No,” then why not, given that you’ve acknowledged that by taking one box you probably will get more money?
If the answer is, instead, “Yes,” then put BegBert2 back on the line.
-FrL-
*I suspect it’s not valid to go from “The predictor is almost always right” to “What I do, the predictor almost certainly predicted.” This may depend on how many “one boxers” and how many “two boxers” there are in the world and on other factors besides. I’m not sure. But I don’t think it’s changing the spirit of the problem if we simply stipulate that “Whatever you do, the predictor almost certainly predicted you would do it.” I’d like to see a little java applet or something which illustrates this problem over a number of iterations.
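Something like this quick Python sketch would do for that applet (Python rather than Java, and the 90% accuracy figure is my own stand-in for “almost always right”):

    import random

    def play(strategy, accuracy=0.90, trials=100_000):
        total = 0
        for _ in range(trials):
            # The predictor commits to the boxes first, and is right
            # with the assumed probability `accuracy`.
            if random.random() < accuracy:
                predicted = strategy
            else:
                predicted = "one-box" if strategy == "two-box" else "two-box"
            opaque = 1_000_000 if predicted == "one-box" else 0  # sealed before you choose
            total += opaque if strategy == "one-box" else opaque + 1_000
        return total / trials

    print("one-boxers average:", play("one-box"))   # lands near 900,000
    print("two-boxers average:", play("two-box"))   # lands near 101,000

On that assumed accuracy, the one-boxers’ average winnings come out around $900,000 and the two-boxers’ around $101,000, iteration after iteration.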
To the whole “the money’s either there, or it isn’t” line of thought:
When I was in high school, I wanted to go to Stanford. But I didn’t want to do any homework. The thought occurred to me “Well, let’s see. Either ‘I will go to Stanford when I graduate’ is true or it is false. If it is true, then, hey, great, I don’t need to keep doing my homework, since it won’t matter; I’m going to be going to Stanford regardless. And if it is false, then, ok, I don’t need to keep doing my homework, since what’s the point?; I’m not going to be going to Stanford regardless. So, even though I don’t know whether it’s true or false, I know that, no matter what, there’s no need for me to keep doing my homework”.
Now, anybody can see that there is something silly with this kind of reasoning, and that something silly is caught up in the correlation between the present act of doing homework and the future act of going to the college of one’s choice. But why, all of a sudden, should this kind of reasoning become acceptable, even unimpeachable, when it’s merely flipped in time, so that the unknowns go from being unknowns about the future to unknowns about the past? I don’t see good grounds for symmetry-breaking here.
So, yes, the money was either placed there or it wasn’t, though we don’t know which. That is a necessarily true statement, an application of the law of the excluded middle. But it’s not any more useful to us than was the fact that, yes, either I will go to Stanford or I won’t, though I don’t know which.
I just can’t stay away from this thread. Is this Newcomb still alive so I can blame him when I get fired for spending all day on this?
My point is that you’ve made a perfectly flawless and rational argument for taking both boxes, but that everyone who has made that argument has (per the set-up) lost out on a million dollars. So rationality does seem to be broken in this scenario, but it doesn’t require a perfect predictor, and nothing about the situation as presented is physically or logically impossible. I don’t see how you can blame this on the concept of an accurate predictor unless you think there is something wrong with the concept. (What? Why would it be impossible to make mostly accurate predictions about what people would do in this situation?) So we have a potentially real situation in which rationality seems not to work. That’s why it’s a paradox.
ETA: Nice summary of the 1-box argument, Frylock!
Yes, I concur.
Unless $1000.00 is a really really big deal to you, and I mean a really big deal, who would pass up a shot at a million for a measly grand? I would think $25-50K should be the minimum amount in the first box to make it interesting.
But you aren’t passing up your shot at a million bucks. If there’s a million bucks in the first box, you’ll get it when you take both boxes.
Actually, it’s quite important whether the money’s there or it ain’t. You see, the dependency relationship here is that the predictor’s accuracy depends on the contents of the box, not the contents of the box depending on his accuracy.
You’re going to pick either one or both boxes, apparently depending on whether you’re into faith or certainty. (Faith tells you that foregoing sin will get you heav- -um, that foregoing box B will earn you $999,000, and certainty tells you that A+1000 > A.) Supposing box A is full, if you leave B, then the predictor will have been right. If you take both…the predictor will be wrong. The box will not hastily empty itself. Similarly, supposing A is empty. If you take both, the predictor will have been right. If not, and you only take A…You get nothing. The box will not fill itself to make the predictor correct.
The thing to remember is that, however accurate the predictor has been before, his accuracy is the malleable element here. The contents of the box are not. You can’t change the contents of the box by your choice. At least, not according to the problem as stated.
The predictor has done his work and is gone. The money’s on the table. Leaving it there is not the situation with the biggest likelihood of getting you more money.
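Laid out as a quick table, the cases just described look like this (amounts straight from the problem: a thousand in box B, a million or nothing in box A):

                        A holds the million     A is empty
    take both boxes     $1,001,000              $1,000
    take only box A     $1,000,000              $0

In either column, taking box B along leaves you exactly a thousand dollars better off; that’s all “A+1000 > A” is saying.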
It stuns me that you can ask this question in a serious manner. Causality doesn’t reverse symmetrically. You’re talking about whether studying now can alter whether or not you previously graduated. While other people are standing around admiring your diploma. If you decide today not to study, does your diploma go poof and vanish into thin air? (Keep in mind always that in the scenario under discussion, it’s stated that the diploma does not vanish into thin air.)
Sorry, pal. You can’t change the past, whether to cause your diploma to disappear or to cause a million dollars to disappear. That’s that.
Well, of course, I’ve never had any good reason to believe in a strong correlation between my getting a diploma at one point and studying at a later point.
When you say causality doesn’t reverse symmetrically, are you talking about an a priori asymmetry in your beliefs on causality, or one that you came to on the basis of empirical knowledge? If the latter, could any amount of observations cause you to change your position here?
I would basically be interested in seeing your responses to Frylock’s last post. But, also, let me pose another rewording of the same old scenario.
Let us suppose that you aren’t actually playing the boxes game just yet, but, rather, are sitting with the alien predictor in his booth as a stream of other people come in to play. You can see what the predictor puts in the boxes at the time he does so, but the players of the game can’t. You watch a million billion gajillion of these iterations, and without fail, every time you see the predictor put the big bucks in, the player subsequently only takes one box, while every time you see the predictor leave the big bucks out, the player subsequently takes both boxes. Million kachillion hobarillion. You’ve seen more experimental confirmation of “Players who pick one box get the big bucks and players who pick both don’t” than you’ve seen of Newton’s laws of motion, the sun rising in the east, apple pie tasting delicious, anything you’ve ever supported by inductive reasoning. Is there any point at which you’d be led to say “Shit, if I play this game, it would be best if I took just the one box”?
And if not, why the particular reluctance to apply inductive reasoning principles in this case but not in general? What in particular is specifically blocking them here?
I’m not talking about changing the past, any more than changing the future, in the incoherent sense of “At first, at time T, things were like this, but then, at time T, things were like that”. I’m just talking about unknowns which I understand to be strongly correlated with my actions, and the desirability of those actions which are most strongly associated with the unknowns having the value I desire.
Indistinguishable, why would that not be stronger evidence that putting the money in both boxes causes people to pick one box? Surely that’s a more parsimonious explanation than backwards causality. I’m not even sure that backwards causality makes sense–I think temporal order may be built into the concept of causality. After all, we generally distinguish causality from correlation in science by measuring the predictive ability of a hypothesis. Obviously that doesn’t work with backwards causality. And I can’t think of any situation that would provide more support for backwards causality than for some alternative explanation. Normal causality is easily testable and easily falsifiable within the common framework of assumptions. Backwards causality, if it can even be defined sensibly, seems to be untestable and unfalsifiable.
Well, I would personally say there’s no difference between viewing a particular correlation as a “forward” or a “backward” causation; in every case where we could say that X causes Y, we could just as well say Not Y causes Not X, turning a backward causation into a forward one or vice versa. It’s the same correlation under discussion; I don’t care how you frame it. I don’t care whether we say the alien’s predictions cause your actions or your actions cause his predictions or your actions cause the monetary reveal to come out a certain way and this reveal causes his predictions or whatever. Those aren’t meaningful differences to me. All that matters to me is that there is a correlation; I don’t think there’s anything more useful to say beyond that.
Yes, yes, I know, “correlation isn’t causation”. But what I mean is, correlation that we expect to hold at all relevantly similar scenarios is causation, or at least, functionally indistinguishable from it. This is what the business about predictive power is about; testing predictions is just testing that the correlation continues to hold in more and more relevantly similar situations.
If you want, ignore everything I said above. Let’s talk predictive power. Suppose you play the game with the alien hobarillion many times, sometimes picking both boxes and sometimes picking just one, for the sake of science. As it happens, just to assuage your paranoia, you have a friend in a booth watching the alien as he locks in his choices, making sure there’s no funny business going down. Every time you pick both boxes, it turns out you get the small prize, while every time you pick just one, it turns out you get the big prize. Every single time. After a while, you realize “Hey, this theory seems to have some predictive power… I have a really strong ability to predict what prize I will win on the basis of what choice I made, by that simple rule.” Would it not be fair to say, then, after enough confirmations and no refutations, that picking one box causes you to win lots of money and picking two boxes causes you to get not that much money?
Actually, it would probably be best to do this the other way around.
You play the game a hobarillion times, and notice the predictive power of the rule “Pick one box, get lots. Pick two, get little”. But you have no friend watching the predictor; in fact, you don’t even realize there is a predictor presetting up the boxes. You’re just being repeatedly presented with boxes for reasons unknown to you, playing this crazy game, and you notice the predictive powers of this rule. You have no idea at all what’s going on behind the scenes. Surely, in this situation, you’re willing to say “Yeah, picking one box is the smart thing to do. It causes me to win lots of money.”
What then happens, if you later learn that there was a predictor presetting up the boxes, from a trustworthy friend who just happened to have been watching him and didn’t see any funny business going down? On these new grounds, do you suddenly throw out your previously formulated theory, despite all the gobs of evidence that led you to adopt it in the first place?