It doesn’t do so to my knowledge, but maybe you can show otherwise?
That’s right. That’s exactly what it sets out to disprove. A compatibilist or hard determinist (i.e., not a libertarian as defined above) would say that all human actions are predictable at least in principle.*
-FrL-
*There are actually some substantial complications that can be introduced into the trichotomy I’ve described here, but this isn’t the place for it.
“A causal connection of some kind” can’t possibly refer to the idea that money being placed in the box directly causes the chooser to leave behind the $1000, so you must be talking about either reverse causation, or the prediction and decision being both caused by some common detail(s) that are available prior to the prediction. However if it is not reverse causation but instead the prior details, then it is not correct to leap from “the predictor is generally right” to “the one box is likelier to be full if I leave behind the other”, as that statement is only true if reverse causation is actually in effect. If we are predetermined we might be unable to choose both boxes, but that still doesn’t make choosing only one a good idea, any more than hitting the ground at high velocity becomes a good idea in cases where you are unable to prevent it.
So, since the ‘shared prior causation’ line of thought would make your argument fallacious, I assumed you were referring specifically to the ‘reverse causation’ line of thought when you mentioned “a causal connection of some kind”.
[QUOTE=begbert2]However if it is not reverse causation but instead the prior details, then it is not correct to leap from “the predictor is generally right” to “the one box is likelier to be full if I leave behind the other”, as that statement is only true if reverse causation is actually in effect. [/QUOTE]
That doesn’t sound right to me. If A and B correlate, and it is found that they correlate because of some cause common to them both, then A tends to support predictions of B. (Doesn’t it? It seems so to me, intuitively, but maybe I’m wrong.)
In other words, I’m pretty sure that prediction of B based on A does not require that A cause B. Prediction of B based on A can be supported by there being a cause common to both of them.
Maybe the support isn’t as strong, but it doesn’t seem like there’s no support at all.
If your choice about taking or leaving behind the $1000 can’t retroactively empty or fill the $1000000 box, then there is no reason to leave behind the thousand. Period.
If there is not retroactive causation, then the predictor’s high accuracy (regardless of its cause) demonstrates only that he is extremely good at determining which people are going to make the wrong decision. This makes him a good guesser or a good judge of people or a creature with a sharp eye on the predetermined mechanics of the universe, but it still doesn’t make the decision to leave behind the thousand right. That decision can only be right if the decision to leave behind the $1000 or not can alter the contents of the other box.
Well, in terms of conventional probability, if A and B have any positive correlation at all, not necessarily perfect but any, then conditionalizing on any one raises the probability of the other. So, if P(John does X | Predictor says John will do X) is any higher than P(John does X | Predictor doesn’t say John will do X) [i.e., if the predictor can do any better than random guessing], then P(Predictor says John will do X | John does X) is going to be higher than P(Predictor says John will do X | John doesn’t do X) as well. You can see this right off the bat with Bayes’ Theorem.
For example, if the predictor has probability 45% of correctly saying that John will do X, 40% of correctly saying that John won’t do X, 10% of incorrectly saying that John will do X, and 5% of incorrectly saying that John won’t do X, then P(Predictor predicts John does X | John does X) is 90%, while P(Predictor predicts John does X | John doesn’t do X) is 20%. Conditionalizing on “John does X” gives a significantly higher probability that predictor thought he would than conditionalizing on the alternative action.
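(For anyone who wants to check that arithmetic, here’s a quick throwaway Python script using exactly those example numbers - there’s nothing special about them beyond illustrating the point:)

[code]
# Joint probabilities over (prediction, actual action), from the example above.
p_says_X_and_does_X    = 0.45  # predictor says "X", John does X
p_says_notX_and_notX   = 0.40  # predictor says "not X", John doesn't do X
p_says_X_and_notX      = 0.10  # predictor says "X", John doesn't do X
p_says_notX_and_does_X = 0.05  # predictor says "not X", John does X

p_does_X = p_says_X_and_does_X + p_says_notX_and_does_X   # 0.50
p_not_X  = p_says_X_and_notX + p_says_notX_and_notX       # 0.50

# Conditionalizing the prediction on the action:
print(p_says_X_and_does_X / p_does_X)  # P(says X | John does X)       = 0.9
print(p_says_X_and_notX / p_not_X)     # P(says X | John doesn't do X) = 0.2
[/code]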
Nothing happens in the “guaranteed perfect prediction” case that isn’t just the limiting case of the “x% accuracy” case as x% goes to 100%. If there’s any correlation at all between two events, then information about one can and will give us information about the other.
I suspect you may want to embrace a view something like where the player’s choices each give different probability distributions on the state of the world, but the choices themselves are not part of any probability distribution; i.e., where we can say “If the player does X, then Y is true with probability Z” but we can’t say “The player will do X with probability Z” for fear of removing his free will or some such (after all, consider the limiting case as Z goes to 1). But such a view is difficult to reconcile with the notion of accurate prediction; if there’s no ability to speak about the probabilities of various actions being undertaken by the player, then what does accurate prediction even mean? So maybe you won’t embrace such a view, but it does seem the only path to go if you want to deny the importance of the fact that the epistemological probability of a past event can be heightened by knowledge about a future action which correlates with it.
Well, there’s nothing in your argument that’s really limited to the case of guaranteed perfect accuracy, is there? Certainly, you could imagine a predictor showing 90% accuracy, a desire to inductively extrapolate from this, etc.
Incidentally, I read recently a perhaps cleaner example of what I suppose is the nub of this problem, one that doesn’t need (as much?) pondering about the complications of human predictors.
Imagine an alternative history of the world in which it was discovered that, though there is high statistical correlation between smoking and lung cancer, there is no known mechanism by which tobacco induces lung cancer. Rather, it is discovered that there is a semi-prevalent gene which both causes a desire for tobacco and causes lung cancer. People with this gene get lung cancer whether or not they smoke; people without this gene don’t get lung cancer, whether or not they smoke. However, people with this gene are discovered to be much more likely to take up smoking, for whatever reason. For everyone, smoking is a mildly pleasurable habit, and of course lung cancer is an extremely unpleasurable thing.
Should you smoke?
Two-boxers: Yes, you should smoke, because whether or not you have the lung cancer gene is already fixed, and your smoking isn’t going to change that, so why pointlessly deny yourself the pleasure?
One-boxers: No, you should not smoke, because your smoking significantly raises the evidential probability that you will get lung cancer, and it would be “irrational” to do so for such mild pleasure.
I will say that I’m in the latter camp here, though I imagine most people will feel an even stronger revulsion at this position here than at the one-box decision with Newcomb’s paradox.
Though, also, I would say, if you get genetically tested and come into knowledge as to the state of the relevant gene, then you should smoke, as it would no longer have any ability to affect your evidential probability of getting lung cancer (its only means of doing so being by serving as evidence helping you decide whether or not you have the gene).
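(To put rough numbers on the two camps’ calculations - and let me stress that the gene prevalence, smoking rates, and utilities below are simply made up for illustration - here’s a small Python sketch:)

[code]
# Invented numbers, purely to illustrate the two ways of scoring the decision.
p_gene            = 0.30   # hypothetical prevalence of the lung-cancer gene
p_smoke_if_gene   = 0.80   # gene-carriers are assumed much likelier to smoke
p_smoke_if_nogene = 0.20
u_smoke_pleasure  = 1.0    # mild pleasure of smoking
u_cancer          = -100.0 # lung cancer is extremely unpleasurable

# Evidential reading: conditionalize the gene on your own act (Bayes).
p_smoke = p_smoke_if_gene * p_gene + p_smoke_if_nogene * (1 - p_gene)
p_gene_if_smoke    = p_smoke_if_gene * p_gene / p_smoke
p_gene_if_notsmoke = (1 - p_smoke_if_gene) * p_gene / (1 - p_smoke)
ev_smoke    = u_smoke_pleasure + u_cancer * p_gene_if_smoke     # about -62.2
ev_notsmoke =                    u_cancer * p_gene_if_notsmoke  # about  -9.7

# Dominance reading: your act can't change whether you have the gene.
dom_smoke    = u_smoke_pleasure + u_cancer * p_gene   # -29.0
dom_notsmoke =                    u_cancer * p_gene   # -30.0

print(ev_smoke, ev_notsmoke)    # evidential scoring favors not smoking
print(dom_smoke, dom_notsmoke)  # dominance scoring favors smoking
[/code]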
Probability is only about things that haven’t happened yet. For things that have already happened, be it the state of your genes or the contents of your boxes, the term “probability” has essentially no meaning.
And no, being ignorant about something that has happened doesn’t make it not have happened yet, and doesn’t change the total lack of effect of ‘evidential probability’ on the situation.
Well, that’s an interesting view to take, though it’s fundamentally quite different from mine. I don’t think you really hold to it all that strongly, though, by which I mean to say that I think you do often consider epistemological probabilities which account for ignorance of the past to be the right ones to take into account in decision making.
For example, if you were on one of those Wheel of Fortune sudden death rounds and had one chance to pick a letter which was in the puzzle, I think you would rather say “r” or “s” than “q” or “x”, on the grounds that the probability of a hit is greater with the former than the latter; this despite the fact that the actual solution was already “locked in” in the past, so to speak. Or, similarly, I pick a random number between one and ten, write it down out of your sight, and ask you if you’d rather put money down on it being prime or on it being even. Or any number of such things. You wouldn’t really consider the probabilities about past unknowns to be meaningless in such cases, would you?
Okay, stay with me here. When I deal with an unknown past event, such as a preselected number being prime or not, I recognize that, even though I will typically gauge the “odds” of each outcome happening based on the relative probability of each outcome occurring in a future random event if I’m inclined to make a guess at what happened, I also recognize that one outcome has indeed already occurred. That means that, even though I might calculate 50/50 or 60/40 or whichever odds, I actually know that there’s not 60 percent of one outcome and 40 percent of the other waiting to resolve themselves; there’s actually one, fixed outcome, and it doesn’t care what I think about the matter. I might calculate that there’s only a 1 in 60466176 chance that my handful of dice all came up sixes, but if that’s what’s under the cup, then that’s what I’m going to find there when I look, with 100% odds.
But really, in Newcomb’s paradox (and the smoking scenario), the odds and correlations and whatever don’t matter one bit. Not even for guessing. Here’s why:
Let’s start by looking at your amended answer to the cigarette scenario. In that, you say that if you knew what your gene actually was, then you would smoke. I assume that means that, if you knew that you didn’t have the gene, you’d smoke, since you’d know you were safe; and if you knew you had the gene, you would smoke, since you were doomed anyway.
The thing is, you also know that you either have the gene, or you don’t have it. One of the two conditions does apply, and it’s not going to change just because you take a puff. So, logically, you can draw a chart:
If I have the gene, I should smoke.
If I lack the gene, I should smoke.
This is correct so far, right? Well, since you know that there isn’t a third possible case (that’s important, so you don’t make the fallacy of False Dilemma (aka Fallacy of the Excluded Middle)), you can logically conclude that the disjunction of the two conclusions is true (that is, the ‘or’ of them). In English terms, it means that either you should smoke, or you should smoke. One or the other. (Pick one.)
The same logic applies with the boxes. Suppose we do like I casually suggested at the end of post #195, and imagine if both boxes were transparent. This gives you two scenarios to consider.
Suppose you saw that the $1000000 box was empty; your choice is to take $1000 or nothing. Naturally, you take the thousand dollar box (and get the other one too as a worthless bonus). In this instance, as it happens, the predictor was correct about your choice.
Now, suppose that you saw that the box held the million. At this point, you know that you are going to get the million; you have to take that box either way. If you’re trying to maximize your gain, do you take both boxes, or just the one? Is $1001000 more than $1000000? Yes. So in this case also, you take both boxes. The predictor was proven wrong, but that’s okay; you still have the extra thou to buy a few extra twinkies with anyway.
So. If the box is empty, then you should take both boxes. Alternatively, if the box is full, you should also take both boxes. So, since you know the box is either going to be full or it’s going to be empty, you should either take both boxes, or you should take both boxes. (Pick one.)
This strategy of looking at all the possible cases, and noticing when they indicate the same result is, unsurprisingly, called argument by cases, and is indeed a logically sound and absolutely certain and correct way of reaching a conclusion, as long as you avoid the fallacy of excluded middle, of course. Which in this situation, we can.
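(If it helps to see the cases laid out mechanically, here’s the same chart as a trivial Python script, using just the dollar figures from the problem:)

[code]
# Payoff for each (state of the sealed box, choice) pair.
payoffs = {
    ("box empty", "take both"): 1_000,
    ("box empty", "take one"):  0,
    ("box full",  "take both"): 1_001_000,
    ("box full",  "take one"):  1_000_000,
}

for state in ("box empty", "box full"):
    margin = payoffs[(state, "take both")] - payoffs[(state, "take one")]
    print(state, "-> taking both boxes nets an extra", margin)  # 1000 either way
[/code]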
Right, I understand argument by cases. The thing is, like I said above, argument by cases isn’t really deductively valid here without some further premises, since I could also use argument by cases to say “If I will be accepted to Stanford, then I shouldn’t do my homework, and if I won’t be accepted to Stanford, then I shouldn’t do my homework, and certainly one or the other of those two conditions applies, so I shouldn’t do my homework”. You will, of course, say that this is dis-analogous, the future is different from the past, and that may well be the case, but whatever the difference is, it’s clear that it’s not enough to just say “argument by cases” and have that be the end of it. If the logical form of the argument was valid with your argument by cases, it would be valid with mine. You need to bring out some further premise, like “The past is determinate in a way which the future isn’t” in order to say that your argument is logically sound and mine is not.
If we were to really model this with formal logic, I would say that a blind appeal to argument by cases in this sort of situation falters on an unjustified (without further premises) distribution of disjunction across a box-like modal operator: letting □x mean “x holds true no matter what I do”, and A and B be the two possibilities for the unknown, and C be the statement that some particular strategy is the best, you’d be moving from □A -> C and □B -> C and □(A v B) to C, which is not generally valid. Argument by cases would let you move from □A -> C and □B -> C and □A v □B to C, but it is not generally valid to move from □(A v B) to □A v □B, so this wouldn’t work without some further premise.
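(If a concrete countermodel helps, here’s the standard two-world counterexample to distributing a box over a disjunction, written as a throwaway Python script - the worlds and valuation are of course just invented for illustration:)

[code]
# Toy Kripke-style countermodel (illustration only): two worlds are
# "accessible" from here; A holds at one, B at the other.
accessible = ["w1", "w2"]
true_at = {"w1": {"A"}, "w2": {"B"}}

def box(holds):
    # "box phi": phi holds at every accessible world
    return all(holds(w) for w in accessible)

A = lambda w: "A" in true_at[w]
B = lambda w: "B" in true_at[w]

print(box(lambda w: A(w) or B(w)))  # True:  box(A v B)
print(box(A) or box(B))             # False: (box A) v (box B)
[/code]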
And yes, I know the problem description says your actions cannot affect what’s in the sealed box. I take this only to mean that the state of the sealed box at the time the predictor seals it remains its state at all future times; I do not take it to mean that the state of the sealed box is probabilistically independent of your actions, for this would be precisely to say that the predictor’s predictions are probabilistically independent of your actions, and thus imply that he has no real predictive ability, of any accuracy at all (his predictions give no new information; he could be matched/surpassed in predictive power by a painted sign which just made the same constant prediction every time). It seems absurd to me to take the predictor to be so trivial, so I reject that interpretation.
Reverse causation again. :rolleyes: Didn’t I demonstrate that if reverse causation is at all real, then you can reach a contradiction extremely easily, which, of course, demonstrates that reverse causation is absolutely impossible?
And, more importantly, doesn’t the problem literally and unambiguously state that there is not reverse causation occurring, because if there were, the contents of the box would have to alter to accommodate your choice when* you make it, which they don’t.
Essentially reverse causation is a silly idea, not worthy of note in general, and it explicitly does not apply to the situation at hand.
Since reverse causation does NOT apply here (or anywhere else either really), “x holds true no matter what I do” is by definition true for all past events, so A -> □A and B -> □B are both true, so we just wipe away the rather incoherent □ operator from our statements and do cases on A v B directly. The supposed objection to argument by cases simply vanishes, since all the □ symbols vanish.
I can understand why a person would wish that they could retroactively alter the past to make certain that they get the million. It’s a pipe dream and cannot occur in the scenario as presented, but at least it’s a desire I can understand. What I cannot understand is pretending that there is no difference between the past and the future, that you’re some kind of ‘aging backwards in time’ Merlin person, in order to try to prop up some kind of flimsy defense for this impossible pipe dream, even when you have to ignore the terms of the scenario under discussion to do it. That’s just wrong, man.
On preview:
This is either the “excluded middle” or the “no true scotsman” fallacy; I’m not entirely sure which. Certainly a predictor can do better than “no predictive ability, of any accuracy at all” without either controlling the future or altering the past; weather reporters do this all the time. They’re not generally perfect, no, but they’re not completely random, either. Not that you can’t also meet the problem’s description of a predictor with lucky guesses; there’s nothing in the scenario that says he’s not just guessing, regardless of your personal desire to ignore and modify the conditions of the scenario to meet your personal esthetics. But even if you don’t like random guessing, there’s still a wide range of more reliably accurate predictive methods available; there’s no reason to bring impossible powers into this, much less powers that are strictly prohibited by the scenario.
And speaking of impossible powers, when you give yourself the ability to alter the past with your decision, that’s not a power you’re giving the predictor. That’s a power you’re giving yourself. Anything the predictor was going to do happened when he put the money in the box, and that’s over and done with by the time you get around to choosing. So no matter what, unless you are a god, your decision isn’t going to magically fill up that box, so you’re still dealing with logic by cases. In real life, in the scenario presented, and in any other consistent universe you can create, this is true. Whether you reject it or not, really.
I recognize that that meaning of “when” can be a little confusing when you’re talking about doing something that impossibly changes history, but I trust you can figure out what I’m saying here.
I don’t believe so. Spell out for me formally the logical proof of contradiction, and show why it doesn’t affect “normal” causation.
Absolutely, I agree that weathermen have good accuracy. I’m a complete believer in the existence of high accuracy which can nonetheless be short of perfection. And my point is, just as the weatherman saying it will rain increases our reason to believe that it will rain, so, similarly, does observing it rain increase our reason to believe that the weatherman said it would rain. If P(A | B) > P(A), then P(B | A) > P(B) [and, thus, P(B | A) > P(B | ~A)]. And yet, nobody complains that this is an example of spooky backwards causation. If for some reason you wanted the weatherman to have said yesterday that it would rain today, then you would prefer to see it rain today over seeing it sunny today.
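(Spelling out that step in the same notation, since it’s the crux: P(B | A) = P(A | B) * P(B) / P(A), so if P(A | B) > P(A), then P(B | A) > P(B); and since P(B) is just a weighted average of P(B | A) and P(B | ~A), that in turn forces P(B | A) > P(B | ~A).)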
I’m not altering the past with my decision; I don’t even know what it would mean to alter the past (to me, an alteration is nothing more than a difference between some value at one time and at another. There can, of course, be no difference between a value at one time and at the same time.)
Here’s how it goes:
Today, I don’t know what the predictor put in the box yesterday. I do know that whatever was put in the box yesterday is the same as what’s in the box today. No matter what I do, whatever was put in the box yesterday will be the same as what’s in the box today. It could be that a million was put in the box yesterday and is still there today, or it could be that nothing was put in it yesterday and nothing is in it today. I don’t know which it was. But various things might give me more reason to believe a million was put in; for example, if a friend told me he saw the predictor taking a million out of the bank, then I’d be very happy, because now I have much heightened reason to believe I’m going to get a million. If someone were to ask me “Would you be happier if a friend told you he saw the predictor taking a million out of the bank, or would you be happier if no friend told you any such thing?”, I’d say “Well, the former, obviously.”
Another thing that might give me good reason to believe that the predictor put the million in yesterday is if I take just the one box. The reason this gives me good reason to believe that the million was put in yesterday is because of the correlation which we call the predictor’s accuracy. It isn’t watertight, but it’s something. And so if someone were to ask me “Would you be happier if you took the one box or would you be happier if you didn’t?”, I’d say “Well, the former, for the same reason I would be happy to get any evidence that I’m likely to get the million.”
At no time, though, do I harbor a belief that the contents of the box today will or even could be different than their contents yesterday. It’s just that observations I make today, including observations of my own actions, might cause me to reevaluate how much credence I give to various possibilities for what happened yesterday. And as a result, I would be happier to encounter certain observations than certain other ones. And, in particular, I would be happier to observe myself taking certain actions than to observe myself taking certain other ones. And because of this, I consider certain actions to be more “rational” than other ones. But there’s nothing in here about “changing the past”, whatever incoherently that would mean.
And, as I’ve asked so many times above, though this will never work: why is noting that today’s actions can correlate with yesterday’s events succumbing to belief in “changing the past”, in your eyes, while noting that today’s actions can correlate with tomorrow’s events is not objectionable? I wouldn’t call this “changing the future” either, in that whatsoever is true about the future remains unchangedly true about the future; if “Global warming will kill us all in the year 3800” is true today, then it will remain true tomorrow, next week, next year, forever, and was true yesterday, last year, and always has been. The future never changes, in this sense, regardless of what we do, just as the past never changes. All that changes is my knowledge/information about the future, but then, there’s nothing odd about the fact that my knowledge/information about the past can change in the same way.
I missed the edit window, but I wanted to reword this to clarify:
If for some reason, you hope that the weatherman said yesterday that it would rain today, but you have no knowledge of what he said, other than the fact that he’s generally accurate, then you will prefer to see it rain today over seeing it sunny today. The former gives you good reason to believe that what you hope for is true; the latter gives you good reason to believe that what you hope for is false.
Let’s see… any predictor that could enact reverse causation to the degree required to ‘fix’ the Newcomb paradox could also present a scenario where there are two boxes and one pile of money, and whichever box you pick, the money would be in the other one. Further, he could establish this causation with glass boxes. (Unless you’re going to tell me that there’s something magically reverse-causation-preventing about glass.) Further, presume that the chooser will choose the box with the money if possible.
So. The chooser sees money in box A, chooses it, which causes the money to have always been in box B, which causes him to have chosen box B, which causes the money to have always been in box A, which causes him to always have chosen box A…
This is a contradictory causal loop; one that cannot possibly resolve to a consistent set of circumstances. Further, ANY entity or effect which could do the reverse causation necessary to make one-boxing rational in the Newcomb scenario could also set up this contradictory scenario - and no entity or effect can possibly exist that can set up an inherently contradictory scenario.
Ergo, no entity or effect can enable reverse causation.
(Hopefully this is formal enough for you. I suppose it would be fairly straightforward to show that there is no possible world where A <-> ~B, X <-> ~Y, A -> X, B -> Y, X -> B, and Y -> A are all true, but I’m not really up on my modal logic notation - plus you already redefined the □ to mean pretty much the opposite of its formal logic meaning (“it is necessary that”), so it would just be confusing I think.)
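(And, since you asked for it to be spelled out, here’s a brute-force version of the same point as a throwaway Python script - it just checks every way of filling in the glass-box setup described above and finds none of them consistent:)

[code]
# Exhaustive check: is there any consistent assignment of "which box the
# money is in" and "which box the chooser takes" in the glass-box setup?
consistent = []
for money in ("A", "B"):        # the money is in exactly one box
    for choice in ("A", "B"):   # the chooser takes exactly one box
        chooser_rule   = (choice == money)  # glass boxes: chooser takes the box with the money
        predictor_rule = (money != choice)  # reverse causation: money is always in the box NOT chosen
        if chooser_rule and predictor_rule:
            consistent.append((money, choice))

print(consistent)  # prints [] -- there is no consistent outcome
[/code]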
This all works, because it’s not backwards causation. Backwards causation is not necessary for reasonably (or very) accurate prediction. It IS, however, necessary to justify picking only one box, since absent backwards causation, it is not possible at the time of your choice for the contents of the boxes to be influenced in any way by it.
Sensible so far - all events are caused by prior events.
Congratulations, you feel better. Should we go over the bit where correlations are either random (and thus not reeeally correlations), or caused by prior events influencing later events? As the latest event applicable to the situation, any free choice you make in the matter is not really subject to correlation, and certainly cannot increase the probability of any particular past action occurring.
I guess what I’m saying is that, by the nature of free choice, your freely chosen decision cannot be causally correlated with the prediction or the contents of the boxes, all prior accuracy aside. I mean, I don’t really see how the specific situation could be ‘partially influenced’ like you often see with correlations, and the one sample (you) isn’t really enough to garner you correlative effects from bell-curvish stuff… No, I really don’t see how your choice really can be actually correlated, not if you actually have a choice.
If your choice is not free, then of course that’s another matter - but the Newcomb paradox specifies that you have a choice, which sounds contradictory to a predetermined scenario to me.
Unfortunately, I know of no instance or reason to believe that free actions made by the tail end of a forward-causal correlation can affect the “odds” of prior events. That’s not really how correlations work, y’see.
Err, says who? Presuming a fixed future presumes predetermination, which seems explicitly contradicted by the statement of choice in the scenario. Even in general there is no reason I know of to either presume or act like there is a fixed future as opposed to a ‘branching multiple possibilities’ scenario.
If you CAN choose, then you should choose both boxes, if your aim is to maximize your take, since nothing you can choose to do can improve your odds of getting the mil. If on the other hand you are utterly predetermined and have no choice in the matter, then, well, what will happen will happen, but that doesn’t sound like the Newcomb’s Paradox scenario much to me.
Out of curiosity, how would any or all of your answer here change if both boxes were transparent? Especially in the case where the million box were full?
All I need for my justification for picking only one box is for “I picked one box” to be much better evidence for “There was a million dollars in the box” than “I picked two boxes” would be. Accurate prediction is good enough for this. This is equivalent to the very definition of accurate prediction.
As for your proof, it shows that if people always take the box with the money, no predictor could accurately predict this and put the money in the other box. Sure. But this doesn’t rule out accurate predictors who don’t try to set up such situations, or accurate predictors who set up such situations in a world where, for whatever reason, players don’t always take the box with the money. Accurate prediction is not, in itself, contradictory (of course it isn’t). And I’m not sure what reverse causation has to do with anything; apparently, “reverse causation” is just a phrase for “ultra-(guaranteed?) accurate prediction”? Even that isn’t contradictory in itself, of course; it’s only contradictory for it to be put to use in such fixpoint-lacking situations as your example. Your example does something, something significant even, but it doesn’t completely destroy the possibility of ultra-accurate prediction (and since this seems to be what you equate with “reverse causation”, it doesn’t completely destroy the possibility of reverse causation).
Why shouldn’t I take the action that gives me the most reason to believe in the truth of my desired outcome? As for when you say “any free choice you make in the matter is not really subject to correlation”, wha-buzzah?! This seems to be denying, among other things, that anyone can make any predictions, with any level of accuracy, about free choices. Surely you don’t mean that; indeed, you’ve explicitly said you don’t believe that.
You don’t think a free choice can be correlated with anything? The very fact that we can make fairly accurate predictions about quite a lot of human actions which would normally be considered free choices suggests that quite a lot of free choices do have correlations with previous events.
What is the way that correlations work? Certainly, in terms of the math, conditionalizing on either end of a correlation gives an increased probability of the other end.
I’m not saying the future is fixed, as such, to such extent that this means anything, but it seems natural to say that the truth value of a statement explicitly tied to a specific time is independent of the time at which its truth is evaluated. It’s not like the statement “Global warming will kill us all in the year 3800” can flip-flop in truth value over time, unless by flip-flopping you’re referring to the fact that sometimes we know and sometimes we don’t, in which case, the same can happen with statements about the past.
Well, of course, we both think our solutions maximize our take. I think mine maximizes my take because it maximizes the probability that I discover a million.
If I could see the contents of the sealed box, I would take both boxes, same as how I would smoke if I knew the value of my smoking/lung cancer-gene. Once having access to that information, it would no longer be the case that my actions could increase the evidence for my desired outcome.
Oh, I forgot to reply to this. In the specific case of alethic logic, □ is most often used to mean “it is necessary that”, but there are many more modal logics which use box notation. In general, □ is used for any logic satisfying the property that “□A1 and □A2 and … and □AN implies □B” is a theorem whenever “A1 and A2 and … and AN implies B” is a theorem. Semantically speaking, □x is usually used to mean “x holds at all possible worlds accessible from this one”, for various concepts of “accessible”. In this case, I was taking “accessible” to mean something like “compatible with everything I know to definitely be the case”, which is a fairly standard employment of a box operator.