Well, yes, that’s the one-box argument, which Frylock already expressed quite well. You’ve also expressed it well, and brought out what I think is a salient feature of the paradox: that it can be understood as a conflict between deductive and inductive reasoning.
The point I was making is that the whole issue of causality is a red herring. The paradox works even without backwards causality, and nothing about the paradox as presented logically implies the existence of backwards causality. Seems like you agree! (And people say nothing ever gets resolved in Great Debates!)
In terms of the deductive vs. inductive reasoning thing, I want to say that I don’t find the deductive dominance argument in this case any more compelling than the one in the Stanford example I posted at the top of this page; it seems to me that both deductive arguments have the same form (unless there are enthymematic premises I have failed to account for, in which case they should be brought out and justified), and since the validity of a deductive argument depends on its form alone, the two must be equally valid or equally invalid.
Correct me if I’m wrong, but I don’t believe the term “rational” in game and economic theory means the same thing as “best” choice or “logical” choice for humans in real situations.
There was a recent example in Scientific American (the Traveler’s Dilemma) where game theory said one thing, but humans performed much better. So I don’t think the term “rational” is all that useful when comparing to what might or might not be the obvious choice for a human (and, obviously, the obvious choice is not the same for each human, because different humans favor different variables, safety vs. risk/reward, etc.).
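For reference, here is a quick sketch of the standard Traveler’s Dilemma payoffs (the 2–100 claim range and the 2-unit bonus/penalty are the textbook parameters, not figures from the article), just to show how far apart the “rational” equilibrium and the naive human play sit:

```python
# Toy payoff function for the standard Traveler's Dilemma (claims from
# 2 to 100, bonus/penalty of 2). These are the textbook parameters,
# not figures taken from the Scientific American article.

def payoff(my_claim: int, other_claim: int, bonus: int = 2) -> int:
    """Payout to the player who claimed my_claim."""
    if my_claim == other_claim:
        return my_claim
    low = min(my_claim, other_claim)
    return low + bonus if my_claim < other_claim else low - bonus

# The game-theoretically "rational" play is the minimum claim...
print(payoff(2, 2))      # 2 each
# ...but two "naive" players who both claim high do far better.
print(payoff(100, 100))  # 100 each
```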
My choice on this one:
Pretend like I’m just going to grab box 2, but at the last second reach out and grab box 1 also.
If Stanford sent a letter to your school tomorrow telling them whether they would or would not accept you, and they promised to keep to this decision regardless of your final exam results, then doing your homework is clearly no longer relevant, even though you don’t yet know whether you’re going to Stanford or not. Go and play some Halo instead. Same with the boxes. I don’t think anybody is likely to convince anybody else at this stage, but as long as what is in the boxes cannot be altered by my actions (a cornerstone of the original problem), then I’ll take both boxes, as it guarantees me $1000 more than taking one box.
It’s a cool paradox. I think what makes it tough is the fact that the predictor is not omniscient by definition in the paradox, yet he effectively is. His predictive power is such that it implies “backward causality,” as it was previously described.
He is a predictor so accurate (based on prior trials) that we presume he can detect whether I will pick one box or two for any reason at all: whether I do so after careful consideration, on a last-second change of heart, or only because I “irrationally” decide it best guarantees the $1M. Whatever my reason, and whatever mental process I employ, the predictor assesses the vibrations so accurately that he has yet to make a wrong prediction. Given that, he is effectively omniscient. Given that, we’d be inclined to pick the one box, assuming that doing so is the culmination of an inexorable progression toward picking one box that the predictor detected, as precisely as every time before.
That’s how I reduce the paradox to something workable in my head, but I’m not comfortable that’s right, since I know that if the predictor is not omniscient, and nothing I do changes what is in the boxes at that moment, it is absolutely logical to take the two boxes. This is indisputable. Yet, for all practical purposes, he IS omniscient in his ability to assess the vibes that lead to a selection, in which case I should pick one box–he’d know I was going to, no matter the reason for my doing so. If I “logically” choose two boxes, because I know my choice can’t change what’s in the boxes, he’d know I was going to do that too.
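To put rough numbers on that tension: if you take the predictor’s track record seriously and assign him some high accuracy, the expected values come out like this (the dollar amounts are from the problem; the 99% accuracy figure is purely an assumption for illustration):

```python
# Expected dollar value of each strategy, given some assumed accuracy p
# for the predictor. The $1,000,000 and $1,000 amounts come from the
# problem as stated; p = 0.99 is purely an illustrative assumption.

def expected_value(p: float, strategy: str) -> float:
    if strategy == "one-box":
        # Predicted correctly (prob p): box A holds $1M. Wrong: empty.
        return p * 1_000_000 + (1 - p) * 0
    # "two-box": predicted correctly (prob p): box A empty, keep $1,000.
    # Predicted wrongly: box A holds $1M, plus the $1,000.
    return p * 1_000 + (1 - p) * 1_001_000

p = 0.99
print(expected_value(p, "one-box"))   # 990,000
print(expected_value(p, "two-box"))   # 11,000 -- yet dominance says two-box
```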
It is effectively a “backward causality,” though I suppose it might more accurately be described as the illusion of free will–for all my soul searching and anxiety over my choice, in reality it’s something of a mirage if our alien friend is so accurate. We all apparently “act” with the precision of chemical equations, at least within this paradox. So I’d pick the one box (or at least it would appear I was making a choice ;)).
You take Box 2. Because if he predicted wrong, you have now become a celebrity in your own right for Taking Down The Man and you can go on Oprah and cash in.
Interesting read. I suppose framing it so clearly epistemically, it leads directly into the (no doubt already considered somewhere) problem: suppose a guy asks you a Yes or No question, you give an answer, he demonstrates that your answer was wrong, and this continues anew with another question, for a million rounds, all going the same way. At some point, would you be inductively compelled into a bizarre (I want to say incoherent, though maybe it’s possible to somehow maintain otherwise) state of “Whatever I believe about the question he just asked is false”?
To clarify, what I think would make such a situation interesting is when the answers to the Yes/No questions are all incapable of varying in a manner causally related to your beliefs about them (for example, if they were all math questions (and thus, on any conventional account, their answers would be fixed/determinate), and every time you mentioned your belief about one, you were presented with a proof of the opposite).
Is that such a strange situation? It seems like you would start to be incapable of reaching a conclusion. You would lack a belief that Yes is the answer, and you would lack a belief that No is the answer. It’s like if you told me you had just flipped a coin, and you asked me whether I believed that it came up heads. Since I have no way of gathering evidence that tips the scales in either direction, I simply don’t come to any belief on the matter.
Well, supposing we create a situation with absolutely no fixed point. Every time the guy asks you “Do you believe this is true?”, if you say “Yes, I do believe it”, he shows you that it’s false, and if you say “No, I don’t believe it (either because I believe the opposite instead, or because I have no belief on the matter)”, he shows you that it’s true.
Then, it would seem, we come to a situation where you are compelled towards accepting “The statement the guy just asked me about, it’s very probably the case that it’s true if and only if I don’t believe it’s true”. If you have positive and negative introspection, being aware of both what you do believe and what you don’t believe, this would be bizarre, wouldn’t it?
That is more interesting. But I think the scenario needs to be fleshed out more to see if there is really something paradoxical here. Whatever happens, I would expect that my mental state will be oscillating rapidly as I ponder the question. So it seems that we have to fix a duration T and say that my mental state at the conclusion of T is what correlates with the answer.
If that’s the case, then I would expect that I would just oscillate randomly until T ends. Then I will recall what my belief was at the conclusion of T and expect the corresponding answer.
Sorry for the uber-long delay in replying. I don’t get the opportunity to be on here much at the moment, so I’ll try to pose a few positions which I might be expected to defend later, and no questions (as I may not read the answers).
Empirical knowledge; accumulated ever since I learned that things don’t go away just because you close your eyes. (This probably occurred before I was the age of 1.) If there isn’t money in the box, then making a decision about the contents of the boxes isn’t going to change that. That’s how the world works, in my observation.
Reverse causation breaks the rule of object persistence (among others), and that is a rule which I have observed to be perfectly consistent my entire life.
The post of Frylock’s to which you refer was nonsensical garbage; he essentially slammed out a series of yes/no questions, on the nominal pretense that if you failed to answer them all precisely the way he imagined you would (not that it was always clear what the right answer was supposed to be), then he would supposedly automatically be proven right in everything, based on presumed but absent logical leaps connecting the questions. It was basically Argumentum Ad Lambasting and was a senseless load of crap.
Honestly? I’d think it was a scam, a trick, a setup. I’d see boxes being filled or left empty, and hordes of choosers “randomly” managing to make their pick in just such a way as to validate the impossible powers of the predictor. Now, since I’d know (or think I knew) that the boxes aren’t changing their contents to accommodate the choice, I’d know that either 1) the choices were independent of the box contents, and I’d just seen a spectacular demonstration that any random pattern is in fact possible, or 2) that the box contents are instead forcing the choosers to make the pre-scripted choice, either through magical booga booga powers, or something more secular, like the choosers being told in advance of coming into my sight what to choose, with a payoff of >$1000 to encourage them to do so.
Given that I’d think it was a scam, I’d assume that if I ever stepped up to the plate, that there would be no million in the first box. (I would also be keeping an eye out for hidden costs or fees.) In either case, it would be better to take both boxes in front of me (unless I was paid even better not to, of course).
What’s to induct? There’s no evidence that the setup isn’t a scam. And that’s much more likely than the notion that a decision made later can alter the present. Now, maybe if I believed that stage magic was real and that impossible things happened regularly, then I might buy into it. However, I don’t.
If I have the ability to choose either option, then you ARE talking about changing the past; if not, then I could make the other choice and prove the predictor wrong. If that’s not possible, then I don’t have a choice.
Correlations happen because free choice isn’t occurring. They say that correlation doesn’t imply causation (between the things being correlated), but when it doesn’t, it instead implies causation between both things and some other, shared, prior cause. There is no backwards causation. There is no correlation without cause. (Coincidental confluences aren’t considered correlations.) That is how things work in, you know, reality. Exceptions and reversals do not occur.
Now, this doesn’t actually rule out the scenario presented. One way for this scenario to actually occur would be for the entire universe to be deterministic, and for the predictor to be running a complete separate simulation of the universe on a separate, faster processor, and thereby being able to view the results earlier. (This would assume that he pre-plans his interactions with the world so as not to alter the results of the prediction, but the scenario as presented certainly allows this.) The two correlated events of the prediction and the box selection are caused by a prior event (the prior state of the world before the prediction), so the correlation is possible.
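A toy version of that setup, just to show there’s nothing spooky about it (the particular chooser rule here is invented purely for illustration):

```python
# Toy version of the "faster simulation" predictor in a deterministic
# world: the chooser is a deterministic function of prior state, so the
# predictor just runs the same function ahead of time. The particular
# chooser rule below is invented purely for illustration.

def chooser(prior_state: bool) -> str:
    # Any deterministic rule will do; this one is arbitrary.
    return "one-box" if prior_state else "two-box"

def predictor(prior_state: bool) -> str:
    # The predictor simulates the same deterministic process in advance.
    return chooser(prior_state)

state = True
prediction = predictor(state)               # made before the boxes are filled
box_a = 1_000_000 if prediction == "one-box" else 0
actual_choice = chooser(state)              # made later, but identical
assert prediction == actual_choice          # the predictor never misses
```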
The thing about this situation, though, is that you don’t really have a choice. Your action is predetermined, regardless of the mechanism of your decision making process (if any - this would work with coin flips too).
So, once we get past the foolish “I believe in fairies, clap clap” stuff, I suppose it comes down to whether you believe that you’re deterministic or not. Or alternatively, whether your deterministic state forces you to believe you’re deterministic or not. Either way.
(If the universe is deterministic, then I’ll still choose both boxes, and have no reason to regret choosing both, since I didn’t have a choice, and more to the point, by the time I sat down, box A was empty anyway. Choosing it wouldn’t have filled it. (You might not have noticed, but I’m a person who would take both boxes without regret, and continue not to regret it when I saw that Box A was empty (though I might be a little annoyed at the cheapo “predictor”). Even so though, no regrets; if I’d left box B behind in that case, I’d have gotten nothing!))
What you essentially have here is a scenario where there are two or three boxes, and you may pick only one, and whichever one you didn’t pick has the prize in it. Note that if there are three boxes, this is called a “shell game”. In this case, of course, the answer is obvious: whether you think it’s real magic or a scam, never play.
Would you theoretically, under any huge amount of new observations, ever change these beliefs on how the world works?
Why would reverse causation break the rule of object persistence? For example, let’s say pressing a button today causes an object to be placed somewhere yesterday, and that this placing wouldn’t occur otherwise. Either you press the button today and it turns out the object was placed in the box yesterday and has remained there till now, or you don’t press the button today and it turns out no object was placed in the box yesterday. Either way, there’s object persistence; it’s not like the object was in the box at one time and then, the next second, was not in the box.
Of course, I want to say, if there’s perfect correlation, then there’s no difference between “forward” and “reverse” causation; we might just as well say that the object’s being in the box causes you to press the button. All that matters is the statistical fact of the nature of the correlation between the truths of the two propositions “The object was placed in the box yesterday” and “A man pressed the button today”.
Well, give as long and thought-out responses to them as you feel necessary to properly explain your position, then. I don’t think he was demanding that you give the “right” answers; he wanted to see exactly at what point your views departed from his, so he could better understand the source of and justification for that departure.
You have some quantity of inductive evidence telling you the predictor’s powers are unlikely to be possible. Could ANY quantity of inductive evidence, however mindbogglingly large, ever get you to say “Hm, maybe it is possible for someone to be just that good at predicting”?
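Just to illustrate what a “mindbogglingly large” quantity of evidence actually buys you, here is a rough Bayesian sketch; the skeptical prior and the lucky-guesser model are assumptions I’ve picked for illustration, not anything from the problem:

```python
# Rough Bayesian sketch of "how much inductive evidence is enough":
# start from an extremely skeptical prior that a genuinely accurate
# predictor is even possible, and update on N consecutive correct
# predictions. The prior and the 50% hit rate for a lucky guesser are
# assumptions chosen for illustration, not part of the problem.

def posterior_genuine(n_correct: int, prior: float = 1e-12,
                      guess_rate: float = 0.5) -> float:
    """P(genuine predictor | n consecutive correct predictions)."""
    p_data_if_genuine = 1.0                       # a genuine predictor never misses
    p_data_if_guessing = guess_rate ** n_correct  # a lucky guesser
    num = p_data_if_genuine * prior
    return num / (num + p_data_if_guessing * (1 - prior))

for n in (10, 40, 100):
    print(n, posterior_genuine(n))
# Even a one-in-a-trillion prior passes 50% at around 40 straight correct
# calls and is essentially certainty by 100.
```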
Or, maybe, you can eventually be convinced that there is in fact some magical booga booga going on. That it is in fact the case that what’s locked into the boxes actually compels you to make your decision in a certain way. However, when you step up to play, you don’t feel any force exerting upon you. You feel the same way you ever do when you’re asked to make a choice. Would you prefer if you ended up choosing to take two boxes or if you ended up choosing to take just the one?
Can any quantity of experiments, of any kind, ever get you to think this isn’t a scam?
When I normally make a choice, do I “change” the future? I would say no. It’s not as though the statement “The future will be like X” is true at one time and then later becomes false. The truth of that statement is invariant; it’s just that its value may be unknown to us at certain times. The situation with the past is symmetric.
As a compatibilist, I want to ask why should correlation destroy free choice? You want to say the accuracy of the predictor destroys free choice, because there is a correlation between what he said in the past and what you choose in the present. Ok… but what about a postdictor? Someone whose sayings in the future correlate with what I choose in the present. Does his existence destroy my free choice too?
How about the fact that I know my friend Todd is highly likely to buy strawberry ice cream on hot days? Does this correlation destroy free choice? Certainly, if I were to ask Todd, he’d say “I’m buying the ice cream of my own volition; I’m buying it because I want to. However, I admit there may be a correlation between external factors and what I want.”
How about if I were to observe that two people of my acquaintance, who’ve never met each other, invariably make the same choice of what to eat for breakfast, each and every day? Does this correlation mean only one of them has free choice? That neither of them has free choice? I think this is a silly view of what choice is.
I’d say you still have a choice, on my view of what “choice” is, but let’s toss that to the side. (Note, though, that you’d still probably feel, at any rate, the same as you ever do; you’d feel like you were making a decision.) Anyway, if you were to become convinced that this was what the predictor was doing, that this was what was going on, you’d start to think “Hey, I hope I end up choosing to take the one box instead of both”, wouldn’t you?
I believe in fairies, clap clap.
But if you really believe that the universe is deterministic and the predictor is accurately predicting things based off of that, shouldn’t you at least think to yourself “Boy, I hope I end up taking one box instead of two”?
So this is the central point then. You don’t accept the possibility of the scenario envisioned in the Problem. You don’t accept that there can be a creature with extremely accurate powers to predict human actions. (It would have been very easy, by the way, to make this clear in answer to one of the first questions in the post of mine you wrongly found objectionable.)*
This is fine, but it makes discussion of Newcomb’s Problem moot since the problem presumes that such a creature is possible. All the interesting meaty philosophical discussion to be done about the Problem proceeds from that presumption. If someone doesn’t think the creature is possible, there’s not really anything interesting to be said about Newcomb’s Problem in conversation with that person.
The question whether it is possible to predict human action to arbitrary degrees of accuracy is an interesting question in and of itself, but Newcomb’s problem is not the thought experiment best suited to such a discussion.
-FrL-
*You may have thought it already apparent that you don’t think the creature possible. But a friendly and congenial reading of my “objectionable” post would have revealed that its author may have missed that fact, and might be well-served (as might the dialogue) to have that fact kindly reiterated to him. By saying something like, “No, I actually don’t think I’m more likely to get a million dollars by choosing one box.” Perfectly valid answer to one of those questions. It’s what you should have said.
Unless I was simultaneously receiving huge amounts of new observations that object persistence is how the world works, I don’t think I’d survive until the end of the demonstration. After all, if my internal organs haven’t vanished, leaving me writhing on the floor to soon die, then they’ve clearly persisted in existing, which is a fact worth observing.
I suppose you could demonstrate that a localized area which I am outside of does not hold with the principles of sane reality in such a manner as to be convincing, but you couldn’t get me to go in there, not for a million dollars.
(Note that this is beside the point; the scenario under discussion explicitly does not allow for breaking of object persistence. If the million is in the box, it’s there before you choose, and still is after.)
Oh really? How about I go look in the box, and then decide whether to push the button. If it’s there, I won’t push the button (why should I? It’s already there), and so the object shouldn’t be there–and your scenario breaks. Or, better yet, if I find the box empty, I’ll stand there watching the space and push the button - either object persistence breaks and the thing suddenly appears now, or the box remains empty and your scenario breaks, or the object was there the whole time and I therefore didn’t push the button, causing the object not to be there, so I do push it, which causes the object to be there, so I don’t - BOOM! Paradox.
This whole predictor thing is a minefield of paradoxes, and, of course, paradoxes never happen. The only two scenarios where paradoxes don’t occur are ones where object persistence doesn’t apply, and ones where reverse causality doesn’t work. Guess which one I’m betting on?
You should slap whoever taught you about correlations. Clearly this is not true. There is a perfect correlation between me dropping a ball in open air in calm conditions, and it falling downward. However there is certainly a difference between the notion that my dropping the ball caused it to fall, and saying that the ball falling caused me to drop it.
The fact is, and as I explained in my previous post, the mere correlating of two things does not say that causality has been voted out of office. It just means that you don’t know (or at least aren’t asserting) what the actual causes are. It doesn’t mean that causes don’t “matter” or don’t exist, or that you can therefore just randomly swap in reverse causation without being flat wrong.
Given that he ended with “If the answer is, instead, “Yes” then put BegBert2 back on the line,” I’m thinking I wasn’t out of line seeing the little stampede of questions as being a deliberate jab at me. And, since I read through the questions and didn’t find an answer path that reached the end based on true answers, the list specifically was telling me not to respond to it. I still don’t really think it’s worth my time.
“Let me ask you this: Do you think it is true that people who choose one box generally get a million, and people who choose two boxes generally get a thousand?” NO.
“If the answer is “No,” then why not?” Because I think that they generally got a million dollars. “Get” implies present or future events, for which we have no evidence.
“If the answer is “Yes,” then do you think you yourself are more likely to get a million by choosing one box and more likely to get only a thousand by taking both boxes?” Note that I’m specifically NOT supposed to answer this, since my prior answer was ‘no’. Nonetheless, the answer to this is also no. I actually don’t know the method the predictor is using; a few methods have been mentioned already where he could be right about all who go before and be wrong about me; notably the one where he operates in parallel universes and keeps trying, so that we only ever see the universes in which he hasn’t made a mistake, and the one where it’s a scam.
“If the answer to that is “No,” then why not, given that you’ve already acknowledged that, in general, people who take one box get more than people who take two boxes.” I HAVEN’T acknowledged this, and I explained my reasoning above.
“If the answer is instead “Yes,” then do you think you get more money by taking one box than you do by taking two boxes?” Again, I’m specifically not supposed to answer this, since I previously answered ‘no’. And the answer is, again, no.
“If the answer to that is “No,” then why not, given that you’ve already acknowledged that you are more likely to get a million dollars if you take only one box?” I have of course NOT acknowledged anything of the sort. Regardless, the reason I answered no is because A+1000 > A. Would you like to cover this by cases? If box A is full, regardless of the methodology that led it to be filled, then taking one box gets you $1000000, and taking both boxes gets you $1001000. $1001000 > $1000000. If box A is empty, then taking both boxes gets you $1000, and taking one gets you squat. $1000 > 0. In ALL cases, taking both boxes would get you more than taking the one. (This should be no surprise to persons acquainted with addition.)
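If it helps, here is that same case analysis as a small table you can compute (amounts straight from the problem as stated):

```python
# The dominance argument as a small payoff table: whatever is already in
# box A, taking both boxes pays exactly $1,000 more.

for box_a in (1_000_000, 0):        # box A is full or empty, fixed in advance
    one_box = box_a
    two_box = box_a + 1_000
    print(box_a, one_box, two_box, two_box > one_box)
# 1000000 1000000 1001000 True
# 0       0       1000    True
```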
“If the answer to that is “Yes” then do you think the best strategy is to take one box?” AGAIN I’m specifically not told to answer. The answer is of course AGAIN no.
“If the answer here is “no” then why not, given that you’ve acknowledged that by taking one box you probably will get more money,” Again, I’m told not to answer, and, given that I’ve acknowledged no such thing, I have no problem thinking that taking more money is the best strategy, given that in the scenario presented the optimal strategy is defined as that which gets you the most money.
“If the answer is, instead, “Yes” then put BegBert2 back on the line.” And this is where he specifically told me to blow off his whole post. Which I obediently did.
Right. A friendly and congenial reading of that post required that I not answer it at all, since the condition for my responding was quite firmly not met. And if you didn’t know what side of this discussion I’m on, why did you arrow that post at me?
Sure, easily. People are fairly predictable, after all. Of course, the result of believing the predictor is pretty good would be an increased expectation that box A will be empty, which is increased incentive to take both boxes (so that I at least get something). No matter how good a predictor he is, though, he’s not going to stop A+1000 > A.
If I’m compelled then the question is moot; the predictor could force me to take one box. However, that’s the stupid thing to do; faced with a full box A, taking both boxes is still the best choice.
The only “booga booga” that matters is if he makes the contents of the box disappear if I pick both, or appear if I pick just one. And trust me, I would be very suspicious about that sort of “magic” if I encountered this scenario in real life.
Nothing is going to convince me that the past can be changed by my present actions, given that that’s paradoxical and, therefore, impossible. From there it depends on how we’re using the term “scam”. He could be easily shown not to be bilking people out of their money, if he’s not actually charging anything. Proving a total absence of funny business would be tougher, as it’s going to approach proving a negative, which is always tricky. Even so, proving a total absence of funny business does not mean I can control the past with my present decision. It just means that I have no reason to believe box A will be full.
You live in a messed up universe that bears no relation to mine. In my universe I can’t change the past by deciding I “went” the other way or did the other thing. If I’m in the slow line at the supermarket, I can’t decide to “have gotten” in the other line. This is because in MY universe, the situation with the past is NOT symmetric.
Stop pretending causes don’t exist. The usual “postdictor”'s sayings are caused by your present choices, which is fine, forward causation. A postdictor who changes his past is essentially the same as the situation we’re discussing as viewed at the time of the prediction being made, and doesn’t ever happen.
There’s a whole 'nother debate here as to what a “free” choice is, in cases like this where external factors have a CAUSAL influence on the choice, but relating to the situation at hand this is misdirection. Suppose that the “correlation” is 100%: when the day is hot, Todd ALWAYS buys a strawberry ice cream. Next assume that this correlation is, shall we say, enforced (as it might be in a causal relationship); it’s not only a useful predictor, it’s a certain predictor. So, on every hot day, Todd WILL buy an ice cream.
How much choice does he have?
No, or they could be influenced by similar factors that aren’t sufficient to eliminate free choice; or it could just be an amazing random coincidence. Do you have any evidence to lead you to believe that this “correlation” could not just fail to be true in the future, starting at any point?
The only sane way to operate under a deterministic system is to either pretend you are not deterministic, or to stop thinking entirely. So, I would not “hope” to choose one box; that would be stupid of me, since A+1000 > A. I would just lose any last vestige of hope that there would be any money in A for me to take.
The discussion is not over whether the creature described is possible or not. So my not knowing that you believe the creature impossible does not constitute a failure to “know which side of the discussion you’re on.”
Begbert, you seriously and completely misread the intent of my series of questions. I was trying to become clear on your position, for the sake of mutual understanding and dialogue. You seem to have interpreted it as some kind of attack. I don’t know why. But the fact is, you couldn’t be more wrong about that post. Please reconsider the attitude you are taking toward that post and my person.