Newcomb's Paradox

Also, this set of conditions isn’t contradictory; you can take A to be true, B to be false, X to be false, and Y to be true, and thus satisfy all these conditions. I imagine you just mistyped something, but I’m not sure what each of your letters is supposed to stand for, so I couldn’t say what.

What almost doesn’t make sense? I’ll try to explain it more clearly, if I can.

As it happens, I’m cool with branching multiple future possibilities, but I’m also cool with branching multiple past possibilities. Just as tomorrow’s coin flip corresponds to two branches of the future coming out of today, the bit on my hard drive which I erased yesterday corresponds to two branches of the past coming out of today.

I thought a compatibilist was just someone who saw free will and determinism as reconcilable, not necessarily someone who actually subscribed to determinism.

If this were true, it would still be true if the boxes were transparent. Your ignorance about the contents of the box has no impact whatsoever on the skills of the predictor or the contents of the box.

Reverse causation does NOT mean “ultra-accurate prediction”. It means, as related to predictions, “predictions that cannot be wrong because when the predicted event finally occurs, the prediction alters the past to change the prediction itself so as to ensure the accuracy of the (altered) prediction.”

Naturally, this “changing the past, altering the prediction” thing is the only way that you can increase your “odds” that there is currently $1000000 in the box, because, of course, if you can’t change the past to change the prediction, then you can’t affect the contents of the box, which were put there in the past due to that prediction. You seem to understand this at some level, since you realize that if you knew the contents of the boxes, your choice would have no effect on those contents. The real question there is, sure, your knowledge affects how sure you are in your decision, but what does your knowledge have to do with the actual contents of those boxes? And shouldn’t the contents of the boxes be the source of your decision, not your fears or faith in the predictor?

I believe that we both agree that reverse causation is impossible, certainly in the context of this scenario and very probably in regular ordinary life (my proof is only one of many, many possible contradiction scenarios). The primary difference between us is that you don’t yet realize that unless your decision has the power to reach back in time and toggle the contents of the box, it simply doesn’t have the power to toggle the contents of the box - and that only by toggling the contents of the box can your decision affect whether you will get the extra million or not.

I know a little about correlations, and they only occur when 1) one thing has a relatively consistent causal effect on the way the other thing turns out, or 2) two things are both affected in a relatively consistent and causal way by one or more shared causes. Now, since we agree that there is no reverse causation, we can also agree that any correlation has to run forward in time - the prior event(s), or their consequences, influencing the outcome of the later event(s). So, one of these must apply:

A influences B
or
Z influences both A and B

(Cases where things seem to line up due to random chance are not technically correlations, since they don’t actually co-relate.)

Now, the structure of the problem leaves essentially no way for the prediction to affect your decision, unless you want to talk about incidental effects that are not part of the original scenario, such as the friend watching the predictor withdrawing the cash. Absent such additions, it seems impossible, and therefore unlikely, that the predictor’s choice is going to alter your choice or make it for you.

The other alternative, with the shared prior cause Z, has a few better options, since it allows the primary thing affecting your decision (i.e., you) to influence the prediction as well - as in, the predictor can watch you, study you, read what you’ve typed into this web forum, etc., and use that to influence his prediction. However, for such an analysis to be a perfectly effective predictor, you must be deterministic - that is, unable to make a surprising, unexpected decision. If the analysis driving the prediction is not perfectly effective and you are thus able to make an unexpected decision, then the predictor is fallible - a condition which makes it theoretically possible to walk away with the full $1001000, which I think you’d agree is the optimally optimal outcome. Note that the statement of the problem explicitly allows for the possibility that the predictor is fallible, by not guaranteeing perfect accuracy. (Of course it certainly doesn’t guarantee that the predictor is fallible, either.)

So, if you check through this, I have determined, based on my analysis of the situation and my knowledge of how correlations actually work, that either I (and probably a large swath of the universe around me) am deterministic, or the predictor is fallible. This is important to know.

If I am predetermined, then I am not really making the choice (as there is no choice), and so I feel that predetermination does not match Newcomb’s scenario very well. I also feel that the results are not particularly interesting or relevant, since all decisions a deterministic entity makes are in some sense optimal - no other outcomes were possible. So I don’t really worry much about the predetermined case.

That leaves only the case where the future is malleable and not truly predictable, and in which therefore the predictor might be wrong, which is a nice thought, but not one we should get too worked up over (as the predictor has been pretty accurate so far). More importantly, this is a scenario where, as we both agree, there is nothing that you can do to change the contents of the box. (Since that requires reverse causality, which is impossible.) Your choice about which boxes to take does not and cannot increase your probability of getting the million. Since, y’know, causes and correlations don’t work like that.

I’m saying that a statement about the year 3800 doesn’t have a truth value prior to the year 3800. It has a probability. After the year 3800, it doesn’t have a probability anymore. It instead has a truth value.

That is, unless the universe is deterministic (and this problem uninteresting). In that case, there are no probabilities, there are only facts, fixed truth values, nothing ever changes (when looking at time as being the fourth dimension, anyway), nothing surprising ever happens, ho-hum, yawn, wake me up when it’s over - not that you’ll have any choice about whether or not to wake me up anyway.

And I think that mine maximizes my take because it is certain, with absolute 100% perfect probability, to give me the maximum amount of money that I can cause myself to get by making this decision. 'Cause we both agree that my current decision isn’t going to retroactively put more money in the box, yo.

This is by far the most interesting part of your post, since it thoroughly validates the proof by cases:

If you knew that the million was there, then you would choose both boxes.
If you knew that the million was missing, then you would choose both boxes.

Now, I take one look at this and say to myself, “Self, this guy doesn’t care whether the money’s in there or not; either way, he’s going to take both boxes. So why does he need to look? He already knows what to do in either case.” To me, you’ve already agreed with my position, and just haven’t realized it yet.

Still, you clearly haven’t realized that you agree with me yet, so here are a few more questions for you to think on.

  1. Suppose you came into the room to make your choice, and the boxes were opaque, but you were allowed to open them and look inside them if you want. Now, if you open them, you’ll know what’s inside, and then that’ll be like the glass box scenario and so you’ll choose both boxes, regardless of the actual contents of the boxes. Now, as you stand there with your hand on the lid of the box, you know without opening it how you’ll choose if you do open it; the actual contents of the box are irrelevant. So, do you actually need to open the box? What, if anything, would that actually change, regarding your final decision?

  2. Suppose you came into the room, made your choice of one box, and then after making your choice were allowed to open the boxes. (You know, to get your money out.) At that point, seeing that there actually was the million in the box, would you regret not taking the thousand as well? After all, you now have that knowledge you need for the glass box conclusion of wanting to take both. Or, god forbid, what if your box was actually empty? Would you regret not taking the thousand? Is there any result where you wouldn’t end up regretting that you chose only one box, and left the thousand on the table?

  3. Suppose a close, trusted friend of yours looked in the boxes. Now, you don’t have the knowledge of what’s in the boxes, but your friend does. Is that enough for you to feel like you were in the glass box scenario? (If you asked, he would tell you to take both boxes, since your friend is in the glass-box scenario now; but that doesn’t tell you anything about the actual contents of the box, or anything informative really.)

3b) Suppose a perfect stranger looked in the two boxes. Would their glass-box knowledge be enough for you? (Note of course that regardless of what they see in there they’d tell you to take both boxes, since they have glass-box knowledge, so there’s really no point in asking their opinion.)

3c) Suppose the predictor had knowledge about what was in the boxes. Would the knowledge that, were they you, they’d know to take both, be enough reason for you to take both?

On preview:

How would that work? Does it matter if you remember what the bit was? What if somebody else who isn’t you remembers? And what does the bit care about your knowledge or memory of it anyway?

Really, this makes no sense to me. How do you arrive at the conclusion that it is not the case that what happened, happened? Is there any evidence for this? I mean, since I don’t subscribe for one instant to the idea that a particular past event cares what you or anyone else remembers or thinks about it, your position would seem to imply that past events flip and toggle willy-nilly, whether you remember them or not. This would seem to put the kibosh on facts. I mean, one moment we had done a moon landing, the next, it was a fake. One minute there was a holocaust; the next: denied. One minute I lived long enough to reach the current date, the nex

I wanted to reword this bit from above:

(the rewording is in bold, of course)

Just because something would be the most preferable if I knew A to be true, and would be the most preferable if I knew B to be true, it doesn’t follow that it would be the most preferable if I knew that either A or B were true.

Example: Supposing a factory mails out packages of four kinds: 1% come in a box of gold worth a thousand bucks, and contain within them a million bucks; 49% come in a box of gold worth a thousand bucks, but are empty inside; 49% come in a worthless cardboard box, but contain within them a million bucks; and 1% come in a worthless cardboard box, and are empty inside.

My randomly assigned package is in the mail on the way. If I somehow knew that my package contained a million bucks, then I would prefer to see a gold box on my doorstep over seeing a cardboard box. And if I somehow knew that my package was empty, then I would prefer to have a gold box arrive over a cardboard box.

However, even though I know that my package will either be empty or contain a million bucks, not knowing which, I would rather have a cardboard box arrive than a gold box. This despite the fact that, if I called my buddy at the factory up and asked him “Is my box empty or not?”, no matter what answer he gave, my preferences would change, and I would then prefer to receive a gold box over a cardboard box.
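To put rough numbers on that flip in preferences, here’s a minimal sketch of the conditional probabilities implied by the 1%/49%/49%/1% split above (plain arithmetic, using only the figures already given; the box values are the $1000 of gold and the $1000000 prize):

```python
# Joint probabilities from the factory example above.
p_gold_million, p_gold_empty = 0.01, 0.49
p_cardboard_million, p_cardboard_empty = 0.49, 0.01

# Before calling my buddy, the box material is evidence about the contents.
p_million_given_gold = p_gold_million / (p_gold_million + p_gold_empty)                      # 0.02
p_million_given_cardboard = p_cardboard_million / (p_cardboard_million + p_cardboard_empty)  # 0.98

# Expected value of what shows up on the doorstep (the box itself is worth $1000 if gold):
ev_gold = 1_000 + p_million_given_gold * 1_000_000            # $21,000
ev_cardboard = 0 + p_million_given_cardboard * 1_000_000      # $980,000
print(ev_gold, ev_cardboard)
```

Once the contents are known, the million-dollar term is the same either way and only the $1000 of gold separates the two, which is why the preference reverses.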

It’s because of things like this that I think appeals to “argument by cases” are too glib in this context.

I promise I’ll respond to the questions in begbert2’s post later tonight; however, just quickly, I want to toss one more thing out there:

Suppose we take a different scenario than the OP. The predictor sets up shop with his boxes, but it’s not that people necessarily make their own decisions as to which boxes to take. Rather, people’s selections are made for them by their Xaetaeter, some unspecified selection-making thing. Time and again, the predictor shows great (not necessarily perfect, but dazzling) accuracy in determining what people’s Xaetaeters will select, so that people whose Xaetaeters say “We’ll take both” generally don’t get the million and those whose Xaetaeters say “We’ll just take the one box” generally do get the million. In fact, playing the game with you, he’s shown this great accuracy in predicting what your Xaetaeter will do too.

In this scenario, as you step up to play the game yet again, and your Xaetaeter is about to announce its selection, would you prefer that your Xaetaeter say “We’ll take both” or that it say “We’ll just take the one box”?

Obviously, my followups would be: If you’d prefer that the Xaetaeter select the one box, then why would things be different if the Xaetaeter was replaced with your own decision-making ability?

And, if you’d prefer that the Xaetaeter select both boxes, why do you think the predictor’s track record at accuracy in determining what Xaetaeters select does not extend to reason to believe he will probably once again be accurate in determining what your Xaetaeter selects this time?

The difference here is that getting a cardboard box improves your chances of getting a million dollars - choosing only one box does not improve your chances of getting a million dollars.

The fix is in once the predictor puts the money in the box. What you do after the predictor puts the money in the box won’t change what the predictor predicted, assuming time only goes forward.

Huh? It’s not like if I called my friend up and said “Hey, I wanna ring in a favor… can you switch my box to a cardboard one?”, it would have any effect on its contents. All the same, I would be happier to see the cardboard box over the gold box, as it gives me good evidence for the contents being a particular way.

Anyway, that example wasn’t meant to necessarily be a perfect analogue for Newcomb’s paradox, but just to show why one can’t glibly appeal to argument by cases without further support, because using the principle in this way is not generally valid in these contexts. Knowing A might cause me to prefer X over Y, and knowing B might cause me to prefer X over Y, but it doesn’t follow that knowing (A or B) should cause me to prefer X over Y, or prevent me from preferring Y over X.

The essential question is whether the chooser’s decision is an indicator of the contents of the box. The ONLY way this can be the case is if your decision is made before the prediction. This, however, is not impossible; if you are deterministic and your inputs for the intervening duration are known, then your decision has essentially been made since the time the future stream of events was fixed (which might be “forever”).

If the chooser is not deterministic, then their decision is not in fact related to the predictor’s prediction. So, they should choose both boxes, to maximize their take.

If I observe any person or thing taking this test and being successfully predicted over and over and over, I will gradually come to the conclusion that that person or thing is deterministic. This includes myself, if I take this test repeatedly and am consistently predicted enough times. I might be persuaded that I am deterministic if I see other people being predicted, but only if I am convinced that their decision-making processes are analogous to mine. Not to be egotistical, but I am (no lie) rather a lot smarter than the average Joe, and I find that other people tend to be more predictable than I find myself to be (and I even have inside information on myself!) So, I am not certain that observing other people taking the test and getting predicted would convince me I’m deterministic. (Heck, I find the notion that I am an automaton repellent enough that I might subconsciously dabble in solipsism to avoid having to reach that conclusion!)

So:
If the chooser is not deterministic, then they should choose both boxes.
If the chooser is deterministic, then they should “choose” the one box.

So, how similar is the Xaetaeter to me? I can easily be convinced that it is deterministic and predictable, and not be convinced that I am. Especially if it is a machine, and particularly if it has (as most or all machines do) a significantly more limited set of inputs than I do.

Another perhaps interesting question would be what would happen if I took the challenge over and over and over. For the first several times I would of course choose both boxes, and then (once I had a few months rent built up) I might start slipping in the “risky” choice of choosing box A once in a while at a slightly increasing frequency. If the predictor kept getting it right, I would gradually become convinced of my own non-independence, and at some point toggle fully over to somewhat dully picking the one box over and over without cessation.

I don’t know if I would really enjoy the money much though.
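For what it’s worth, that gradual shift in conviction can be put in rough Bayesian terms. The numbers below are purely illustrative assumptions of mine (a 90%-reliable predictor versus a coin-flipping one, starting from even odds), not anything taken from the scenario:

```python
def credence_predictor_tracks_me(n_correct, accuracy=0.9, guess=0.5, prior_odds=1.0):
    """Posterior probability that the predictor genuinely tracks my choices
    (at the assumed per-round accuracy) rather than merely guessing, after
    n_correct consecutive correct predictions."""
    odds = prior_odds * (accuracy / guess) ** n_correct
    return odds / (1 + odds)

for n in (1, 5, 10, 20):
    print(n, round(credence_predictor_tracks_me(n), 5))
# After a dozen or two straight hits, "he's just guessing" is effectively dead.
```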

Seeing that you got a cardboard box doesn’t improve your chances of getting squat; your “chances” of getting the money were fixed when it was picked out and sent. Sure, seeing the brown paper might get your hopes all up, but your hopes have nothing whatsoever to do with the actual contents of the box. The gold/cardboard box scenario is of course analogous to the deterministic scenario of the Newcomb paradox. In the gold/cardboard box scenario, the box color is in fact correlated to its contents, just like the deterministic scenario.

If you’re not deterministic, then picking the one box in the Newcomb scenario is an exercise in short-term fantasy, where you make yourself feel all happy-hopeful for a brief moment at the cost of the thousand dollars.

Wow, I didn’t realize the SDMB had a post size limit. Breaking this into two.

What is relevant is the additional information that my actions can bring. If one of my actions can put me in a state where I have better grounds for believing “I will win the million” than the other, then I will prefer that action. However, if the boxes were transparent, then neither action will be capable of giving me better grounds for such belief than another; my belief will at that point be set in absolutely confident stone one way or another.

Well, I’ve never demanded that the predictions cannot be wrong; all I’m talking about is that the predictions are highly probable (but not guaranteed) to be correct.

My knowledge is what guides my decisions; if you offer me a bet as to whether the prize is behind Door A or behind one of Doors B or C, I’ll put the odds at 2:1 in favor of the latter, and use this to guide my actions; if you then reveal the prize to have been behind Door A, I don’t think “Oh, shit, I made an irrational decision, I should’ve put the odds much more in favor of Door A”. I think “I made the right decision, and got unlucky. But it was still the right decision; I was still supposed to evaluate my choices based on the probabilities based on the knowledge available to me.”

The source of my decision is anything that gives me evidence about the contents of the boxes. My faith in the predictor is one such thing, same as how I’d let my decision be guided by the advice of a trustworthy, if not guaranteedly honest, friend who had seen the boxes.

I don’t even consider “reverse causation” to be a terribly meaningful term. Your proof presents certain conditions which cannot all come together, which is great, but it doesn’t prevent some of those conditions from coming together without the rest. I hold that the correlation between my decision and past events gives me as good reason to want to make it a certain way as does any correlation between my decision and future events. I think it comes down to “Do you have any faith in the accuracy of the predictor?”; it seems to me that you need to be committed to denying that the predictor has any ability to predict your selection; i.e., that you need to be committed to saying that the predictor’s actions are probabilistically independent of your selection.

There are any number of ways correlations could come about, and I don’t care about the causal structure involved. We could imagine that there are two coins in the world, with no apparent mechanism connecting the two, such that whenever the two are flipped, they give the same results, but the actual “Heads or tails?” result is perfectly random. Is the first coin influencing the second, or is the second influencing the first, or is there a common external cause? I don’t know, maybe none of the above, and it doesn’t matter. All that matters is, do we have good reason to believe that future flips will demonstrate the same correlation?

If the prediction is independent of my decision, then it’s not a very good prediction; if we have any faith in the accuracy of the prediction, not necessarily that it’s watertight guaranteed correct, but that it has any significant accuracy at all, then we believe P(what I do is X | what is predicted is X) > P(what I do isn’t X | what is predicted is X), and thus, by the laws of probability, are compelled to believe P(what is predicted is X | what I do is X) > P(what is predicted isn’t X | what I do is X).
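To put toy numbers on that conditioning (the figures are mine, purely illustrative, and not part of the problem: a predictor who is right 90% of the time whichever way he predicts, and who predicts each option half the time):

```python
# Illustrative assumptions, not part of the problem statement:
accuracy = 0.9        # P(I do X | X was predicted), for either X
p_predict_x = 0.5     # assumed base rate of predicting X

# Bayes' rule: P(X was predicted | I do X)
numerator = accuracy * p_predict_x
denominator = numerator + (1 - accuracy) * (1 - p_predict_x)
print(numerator / denominator)   # 0.9 under these assumptions
```

Under those assumed numbers, finding myself doing X is strong evidence that X was predicted.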

If you want to deny that the prediction can have any nontrivial accuracy at all, well, alright.

I’ve never required the predictor to be infallible; all I require is that P(X is correctly predicted) > P(X is incorrectly predicted). I don’t need infallibility; I just need any reliability at all.

I am perfectly ok with the predictor being fallible, as above.

Alright then. (I think even in a deterministic case, it’s possible to say worthwhile things about preferences, but we don’t have to get into that.)

I’m not sure what you’re saying when you say “not truly predictable”. Can the future be predicted with any degree of accuracy (less than infallibility) or not?

I agree that the contents of the box before I make my decision are the same as the contents of the box after I make my decision. All the same, if I find myself making one decision, it’s good evidence for winning the million, and if I find myself making another decision, it’s good evidence for my not winning the million, so I will much prefer to find myself making the first decision. And since I will much prefer to find myself making that decision, I say that is the rational decision to make. No, it won’t reach out and magically change the contents of the box; however, the very fact that I make such a decision as opposed to another one is good evidence that the contents of the box were previously placed in the way I would like. If I already knew the contents of the boxes, then this differential evidential effect would disappear, and my preferences would change.

Why does it have a probability but not a truth value right now? Because nothing about the universe right now tells us the truth value? If there was a random bit on my hard drive yesterday which was erased/sent into a black hole/whatever, then nothing about the universe right now would tell us its truth value either. So can’t statements about the past be indeterminate in the same way?

I think there are interesting things to say about probability with a deterministic universe as well, given that the probabilities are conditioned on less than complete specifications of the state of that universe, but, whatever.

Absolutely it won’t. But if we play the game repeatedly, and have any faith in the accuracy of the predictor, any faith at all, not infallibility, just some reliability, then I will wind up with the million more often than you do, by the very definition of what it means for the predictor to have any reliability.
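A quick simulation of repeated play makes that concrete; the 90% reliability figure is just an assumed stand-in (anything better than chance gives the same ordering), and the amounts are the thread’s $1,000,000 and $1,000:

```python
import random

def average_take(strategy, accuracy=0.9, trials=100_000):
    """Average winnings for a fixed strategy against a predictor of the given reliability."""
    total = 0
    for _ in range(trials):
        chooses_one = (strategy == "one")
        predictor_correct = random.random() < accuracy
        predicted_one = chooses_one if predictor_correct else not chooses_one
        million = 1_000_000 if predicted_one else 0   # box is funded iff "one box" was predicted
        total += million + (0 if chooses_one else 1_000)
    return total / trials

print("one-boxer average take:", average_take("one"))    # roughly 900,000 under these assumptions
print("two-boxer average take:", average_take("both"))   # roughly 101,000 under these assumptions
```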

Yep.

Like I said before, even if knowing A makes me prefer X to Y, and knowing B makes me prefer X to Y, knowing (A or B) doesn’t necessarily make me prefer X to Y. Consider the example above with the gold/cardboard boxes.

And you stubbornly haven’t realized that you agree with my stated position yet either, if we’re going to speak this way. :slight_smile:

It’s like I said above with the gold/cardboard box; if I call my friend up and ask him “What are the contents of my box?”, then no matter what answer he gives me, I’ll prefer a golden box to a cardboard box. Yet, all the same, before calling him up, my preferences are opposite, I will prefer cardboard to gold. Why? Because cardboard gives me more confidence that my friend would give me the answer I want to hear from him. After I call him up, though, this difference evaporates.

So, if I don’t open the box, I’ll still want to just take the one, because this gives me the most confidence that, had I opened the box, the contents would’ve been the million. After I open the box, I’ll want to take both, because there’s no longer any difference between my actions and the resulting confidence levels as to the contents of the box.

Yes, I realize it’s odd that my preferences are one way before I get an answer to the question, and another way once I get the answer, no matter what the answer is. It’s like my analogy above with the gold/cardboard boxes, though. Is it so problematic there?

No, I won’t have any regrets. If you stuck a prize at random behind one of a hundred doors, and said “You can choose to either get the contents of doors 1 - 30, or of doors 31 - 100”, I’ll choose the latter. If you then demonstrate the prize to be in door 24, I won’t think “Goddammit, I made the wrong decision.” I’ll think “I made the right decision, but was unlucky. But it was the right decision; it was the appropriate one given my knowledge at the time.”

Sometimes, the right move is to fold your hand, even though it turns out your opponents were bluffing and you actually had the strongest one. You don’t regret it, you don’t beat yourself up over it; the correct decision isn’t determined by “If I knew everything about the universe, what would I do?”. It’s determined by “Given what I do know at the time of the decision, what should I do?”.

Like above in question 1; before I ask my friend, I’ll prefer to take one box, because this correlates most strongly with the chance that, had I asked my friend about the contents, he’d say “You got the million!”. After I ask my friend, I’ll prefer to take both, since there will no longer be this difference; neither action will any longer give me any better reason to believe my friend would say “You got the million!”.

Same as 1, same as 3b).

No, it would not be. Same as 1, same as 3b). Even if I know that having a certain question answered would cause my preferences to change, my preferences do not preemptively change. Just like in the gold box/cardboard example.

You tell me what exactly it is that causes you to say the future is indeterminate and the past is determinate, and then I can answer these questions appropriately. But, if it helps, let’s say that absolutely nothing in the current state of the universe gives any information about what the bit was before it was erased.

I don’t. How do you arrive at the conclusion that it’s not the case that what will happen, will happen? (You don’t.)

I’m not saying “The moon landing was real yesterday, but it’s fake today.”

I’m saying: Suppose there was absolutely nothing about the current state of the universe which could tell us if there was a moon landing in the past or not. Would “There was a moon landing in the past” still have a determinate truth value? And, if so, why do you not react the same way to a statement like “There will be a Mars colonization in the future”? What is it about the latter that makes it indeterminate while the former is determinate, if both are equally decidable/undecidable from the current state of the universe?

It took me a while to realize this was probably intentional. :slight_smile:

You don’t have to be deterministic to be somewhat predictable. There’s a middle ground between “I can say what X will do infallibly” and “I can’t say anything at all about what X will do.” There’s “I can make predictions about what X will do which have high probability of being true, although they aren’t infallible.” You can imagine a process about which you can justifiably say “Well, I can’t make any guarantees, but I’ll say it has probability 80% of coming out like this, and probability 5% of coming out like that, and …”. Perhaps my Xaetaeter is like that.

Not related at all? Then the predictor’s prediction is worthless. Do you believe the predictor’s predictions about the Xaetaeter have to be worthless if the Xaetaeter is not fully deterministic? I predict your reply to this post will be in English; if my prediction is better than worthless, does it mean you’re deterministic?

Alright, but just to avoid the red herring of determinism, what if you saw them being predicted with, say 90% accuracy over a long run? Impressive, but less than infallible. Could you say that person or thing is not deterministic, but still fairly predictable?

(What a coincidence, I am (no lie) rather a lot smarter than the average Joe as well. Not to be egotistical.)

Heh, I can see doing that myself. But what if [as I see you ponder below] you observed yourself taking the test and your actions accurately predicted many times? Like I said, maybe not with the red herring of infallible accuracy, but with significant accuracy nonetheless. Couldn’t you be not entirely deterministic, but still pretty predictable? (I mean, a lot of the things you do of your own volition are pretty predictable. Like I said, I can be reasonably sure you’ll reply to this in English, even though that’s an action you take under your own free will).

What if the chooser is not entirely deterministic, but there’s good reason to think the predictor can do a rather good job at predicting them anyway?

Yeah, but can you be convinced that you are, though not entirely deterministic, fairly predictable anyway?

Ah. Well, right, this is the interesting scenario, then. To me, the essence of Newcomb’s paradox has always been where you actually observe the predictor making accurate claims about you yourself, so there’s no silly ducking out like “Well, I’m probably so much less predictable than those other peons”.

Your writing here seems to be agreement with me, though; the only thing I would say is that I wouldn’t require this non-independence to be infallible prediction, nor would I take the existence of this non-independence to necessarily show me to be deterministic.

Well, if the predictor has any reliability at all, then, by definition, your selections are in fact correlated to his predictions; this need not, however, demand complete determinism, and it need not demand infallible prediction, just some degree of reliability.

I do want to say that if one action puts me in a position where my expectations/confidence/whatever is higher than that resulting from another action, then it’s natural that I should prefer to take the former action.

But at the gain of improved credence in gaining the million dollars.

Your second sentence contradicts the first. begbert2 has convinced me.

If you agree that the contents of the box before you make the decision are the same as the contents after your decision, then your decision does not change your state relative to what’s in the box. You have not somehow placed yourself on the winning team. You were either already on it, or you weren’t. Period. You keep insisting that your decision to choose one box has some effect on you getting the mill. At that moment, that ship has already sailed.

It’s that simple. You stand before the boxes, and whatever is in there, is in there. We all seem to agree on that (of course we do, it’s a given). Your only choice, the only one that will affect your future at that point, is whether or not you leave an extra $1,000 on the table. Nothing else you do will affect, at that point, whether or not you’ll also get $1M.

I mentioned this before, but I think what gets people tied up in knots with this paradox is that the deck is stacked at the start. The predictor is not omniscient or infallible. There is also no “reverse causality”–i.e., your decision won’t make money appear or disappear. How do we know this? Because it’s a given in the paradox. In reality, the predictor behaves as if he is infallible (he is, well, never wrong). So you are asked to make a choice consistent with the facts as given, but out of sync with how those facts would actually manifest themselves in real life, based on our experience (fallible predictors aren’t right millions of times in a row, not for predictions of such events; such inerrant precision suggests some causality in the absence of omniscience).

Here’s an analogy, not exactly like the paradox but illustrative, I believe. I roll a die a million times, and every single time it comes up six. Every time. And, oh, by the way, in my analogy, the die is perfectly fair–every number, one through six, is equally likely on every roll of the die.

See? This is where it’s not a fair fight any more. I assert a given that is not consistent with how the die behaves in my analogy, except in a statistically possible but galactically remote way. But it’s a given just the same, because I offer it as such and it’s not impossible, however unlikely, in my hypothetical.

So, after I roll the die a million times, and the whole world has witnessed it, I offer the following wager: If someone bets $1 on six, I’ll give him $1M if six comes up. If he bets $1 on “not six” (i.e., anything else), I’ll give him $1M if anything other than six comes up. The next million people bet on six, and dontcha know it, six comes up every time for this next million rolls too. Every one of the people walk away with a cool mill.

Oh, and let me remind you that it’s a given in my hypothetical that the die is honest, every outcome equally likely. Why? 'Cause it’s my hypothetical and I say so.

Now it’s your turn to bet. Given the facts of the hypothetical, you should bet “not six.” It’s indisputable given the details of the scenario–there are five chances out of six that you’ll be a winner. But as human beings, intuitively this seems wrong. The actual sequence of events is so out of sync with the given facts that we reject those facts, even if subconsciously. There’s a little guy in our heads saying, “Who you gonna believe, your Probability & Statistics professor or your own friggin’ eyes? Honest die my ass, bet on six.”

I think that’s what’s happening here. The givens in the paradox make the choice obvious–no reverse causality, the predictor is not omniscient. But there’s a little guy in our heads saying, “Don’t smart yourself out of a million bucks over a thousand.” But that doesn’t change the fact that the givens in the paradox, such as they are, logically (perhaps not intuitively) lead us only to one conclusion, ISTM.
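Incidentally, spelled out, the die hypothetical’s “bet not six” recommendation is a plain expected-value comparison, taking the fair-die given at face value (a sketch; the $1 stake is negligible next to the payout):

```python
payout = 1_000_000
ev_bet_six = (1 / 6) * payout - 1        # about $166,666 per $1 bet
ev_bet_not_six = (5 / 6) * payout - 1    # about $833,332 per $1 bet
print(ev_bet_six, ev_bet_not_six)
```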

My second sentence doesn’t contradict the first. I didn’t say my decision caused the contents of the box to change; I said it affected my evidence for what was in the box (via my belief in the predictor’s general reliability). This is true; same as if a friend came and told me what he saw in the box, this wouldn’t cause the contents of the box to change, but it would change my evidence for what was in the box (via my belief in my friend’s general trustworthiness).

I also want to say, all the hang-ups about infallibility are red herrings. Nothing in the paradox depends on that, nothing in my approach depends on that. We can imagine that instead of always having gotten things right, the predictor has been observed to get things right about 90% of the time (both when he puts the million in and when he leaves it out). My approach remains the same.
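To make the “no infallibility needed” point concrete: with the thread’s $1,000,000 / $1,000 amounts, and assuming the predictor is equally reliable whichever way he predicts, one-boxing has the higher expected take whenever his accuracy clears a very low bar (a sketch; the algebra is mine):

```python
# One-boxing beats two-boxing in expectation whenever
#   acc * 1_000_000 > (1 - acc) * 1_000_000 + 1_000
# Solving for acc gives the break-even accuracy:
break_even = (1_000_000 + 1_000) / (2 * 1_000_000)
print(break_even)   # 0.5005 -- barely better than a coin flip suffices
```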

This is silly. Your predictor’s reliability is moot at that point, relative to your choice.

I can see we’ll never see eye to eye on this. Your conclusion is completely illogical. If you believe that your choice of one box somehow creates “evidence” for a proper selection, whereas selecting both boxes creates “evidence” for making a bad choice, when in either instance, THE OUTCOME IS ALREADY DECIDED–okee-dokee, it’s up to you.

This is probably in some way inappropriate for me to say, but it strikes me that in all the discussions about Newcomb I’ve ever had, in every case that I remember, if someone became to some extent belligerent or at least petulant, that person was a two-boxer.

Just sayin’ is all. :smiley:

Sorry, I know I’m not helping.

You guys need to let the grace of the all powerful alien come into your life and change you from within.

-FrL-

I confused.

Don’t you think that it is already decided whether the mess in my kids’ room was actually made by Jacob or rather by Ella? And don’t you think there are certain things that Jacob could do which would create evidence that Jacob made the mess? And aren’t there certain things Ella could do which would create evidence that Ella made the mess?

In that case, we have a case in which “THE OUTCOME IS ALREADY DECIDED,” and yet in which it is possible for people to create evidence as to how I ought to react to the mess.

I don’t see the problem with this. But you seem to be saying that once THE OUTCOME IS ALREADY DECIDED, it is impossible for evidence to be created regarding that outcome. Color me :confused:

-FrL-