Newcomb's Paradox

There can only be future facts if the future is fixed. Since I do not accept that the future is fixed, I do not accept that there are future facts. That should be simple enough to understand.

And sure, people use probabilities as a method of making educated guesses about past facts all the time. This is a way of taking your limited evidence and stretching it as far as it will go. However, surely you’ll agree that if you add enough evidence, and the right evidence, to your arsenal, all estimates about past events will wind up at their proper values of 0% or 100%. Since, of course, there’s not really an x% chance that Jim has an ace. He either has it, or he doesn’t. Unvaryingly and for certain. That’s what we call a fact.

I believe the past is fixed because I have memories of the past, and none of those memories include past facts changing willy-nilly. The stories I’ve heard from other people have also not led me to believe that past facts change. And, since I don’t have such an ego as to think that changing-fact-events are afraid of my species, it’s not reasonable for me to assume that past facts can change, even when I’m not looking.

I have no memories of the future. In fact, it seems like I have the ability to affect the immediate future, quite directly. If I throw a ball at a target now, sometime in the near future a ball will bounce off a wall next to the target. If I elect not to throw the ball, it refrains from missing the target. If I actually do have that choice, then the future is not fixed.

Now, I’m enough of a computer scientist to know that we could in fact be causally deterministic and I would still probably feel like I was making decisions. I am also well-read enough to know that we all might be figments of the Red King’s dream. However, there really isn’t any kind of solid evidence for either of those theories, so I revert to the default belief that I actually have the choice that I appear to have. If you want to change that, bring evidence to the contrary.

Because, unlike the past event, the future event hasn’t happened yet, and might not turn out as you predict.

Memory
All alone in the moonlight
I can smile at the old days
I was beautiful then
. . .
When the dawn comes
Tonight will be a memory too
And a new day will begin

And no, it is not conceptually possible that I will grant that I have memories of the future.

And we don’t know what future is coming, any more than we know the past beyond the limits of universal memory. The latter because it’s forgotten, and the former because it hasn’t happened yet to be known.

Yes, but look at that book again. Close it first. Put it down. Every word in the story is “now”, which renders the term nigh meaningless. At all times, to the characters in the book, all the pages are “now”. Why are we not experiencing everything at once? Why do we experience time at an approximately constant, forward-moving rate?

If you want to go back to the film example and point out that at every instant the movie is being played, only one frame at a time is showing, at a constant rate in a forward direction, then one has to ask: who or what is playing the movie? Who is actually causing the point of “now” to move steadily down the pre-existent timeline?

The “history is being created now” model just has the timeline growing at one end, getting longer and recording the past in stone as it goes, and “now” is just the active, creating end. This seems at least as simple as, and more consistent with, my prior observation that we appear to be able to affect the future. So this is the theory I choose to run with.

It’s a long definition, but we have a nice short word that means the same thing, “fixed”, that you can use when you’re in a hurry.

And when I say “already defined” I mean “is defined* currently”, which of course includes all things that were defined** at previous points in time, since the past doesn’t unhappen once it’s happened.

* this is the adjective form, meaning “has a definition”
** this is the verb form, meaning “was given a definition”

All points in time will eventually be fixed, so all points in time already are, and always have been, blixed. So the word doesn’t really tell you anything about points in time; everything’s blixed, no matter what, and that fact tells you precisely nothing about the moments in question.

The thing about being blixed is, anything that’s blixed but not also fixed could happen a variety of different ways. You can’t [perfectly] accurately predict things that are only blixed. So I don’t see how that term’s going to help you distinguish anything about the Paradox.

Now, being fixed: if your future is fixed, then you don’t really have a choice at all. When the predictor made his prediction and filled the box, he unavoidably did so in alignment with the non-choice you’re going to carry out, so if he put the million in, you can’t non-choose the thousand as well, and if he left the box empty, you can’t non-choose just the one box and get nothing. Your non-choice only has the two possible outcomes, and what your non-choice would actually be was decided before you were even born. You certainly can’t change it, even if the alternative would be the better choice.

If your future is not fixed, then you do have the option of choosing to take both boxes, regardless of what they contain. Which will get you more money. Which you should therefore choose to do.
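To make the dominance point concrete, here is a minimal sketch in Python (the payoffs are the standard ones from the scenario: $1,000 in the first box, $1,000,000 in the second iff the predictor foresaw one-boxing; the function and variable names are just illustrative):

```python
# Payoffs in the standard Newcomb setup: box 1 always holds $1,000;
# box 2 holds $1,000,000 iff the predictor predicted one-boxing.
def payoff(take_both: bool, million_in_box2: bool) -> int:
    box1 = 1_000
    box2 = 1_000_000 if million_in_box2 else 0
    return box1 + box2 if take_both else box2

# Once the boxes are filled their contents are fixed, so compare the
# two choices against each possible (already-settled) state of box 2.
for million_in_box2 in (True, False):
    one = payoff(False, million_in_box2)
    both = payoff(True, million_in_box2)
    print(f"million in box 2: {str(million_in_box2):5}  "
          f"one box: ${one:,}  both boxes: ${both:,}")
# In either fixed state, taking both boxes yields exactly $1,000 more.
```

Whatever the second box turns out to hold, the two-box column comes out exactly $1,000 ahead; that is the whole of the argument being made here.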

See? A relevant difference. Is there such a noteworthy distinction between a blixed future and a... well, actually, all futures are blixed, so there’s nothing to contrast it with.

Statements about the future do not yet have a truth value, since they haven’t happened yet, unless you’re going with the booklike “everything is already written” predetermination. In the scenario of causal determinism, the truth of that statement is not technically defined, but it can be predicted in advance to a 0% or 100% probability, at which point the lines between “fact” and “certain probability” begin to blur. (If your future is not deterministic, of course, you often can’t even get a 0% or 100% probability out of it.)

That should have meant “does not hold for the future”. (As the number of words I type increasingly exceeds ten, the odds that I make an error asymptotically approach 1.)

I’m saying that “is pretty similar to the state at time T” does not describe a single fixed state. Lots of different theoretical states could be “pretty similar” to the state at time T. If it isn’t determined exactly which one of those theoretical states it’ll be at T+epsilon, then the state at time T+epsilon is not fixed, by my definition.

Well, the presence of infallible predictors would be evidence for a static timeline. Such evidence is not to be found. This on its own is not an ironclad case against a static timeline by any measure, but it factors into the overall decision, much like how the total absence of unicorn hoofprints in my living room retards the belief that there’s an IPU in there.

My answers:

For practical purposes, statements about the future have truth values iff the future is predetermined with absolute certainty. In the alternative case, statements about the future have probabilities. In all cases, statements about the past have truth values.

One cannot have free will if one’s actions are 100% reliably predictable.

If you aren’t predetermined, then you have the choice of taking both boxes, and therefore always should, if you like money more than transient good feelings.

Note that the predictor being fallible does not necessarily mean that you are not deterministic or that you have free will. It just means that at times, you will be unavoidably fated to make a non-choice that gets you the $0 or the $1,001,000.

I don’t care about non-reliable predictions. As far as I’m concerned they change the problem not one iota from predictions that are completely random. You see, choosing both boxes is always the better choice, if you like money more than transient good feelings. The only role determinism plays is in whether you will have the choice to make the better choice.

What goes wrong is I refuse to use the phrase “reverse determinism” for any reason, since I think it’s a very, very poor phrase, arguably downright stupid. I am perfectly willing to state that the past has been “determined”, just like a predetermined future would be. But it was not “reverse determined”, since the determination of it did not spread backwards from the present time. It spread forwards from the start of time, in the direction of the present time. And the thing that “determined” it was that the “now” point reached and crossed those past times, “determining” what actually happened.

I use the term “determined” equivalently to my shorter term “fixed”, so by definition, if the statements about the future have truth values, then they’re predetermined. (The “pre” means they were “determined” even before the usual determining agent, the “now”, got to them.) So, by definition, if you believe in a fixed future, you’re a determinist, at least in my book.

I am fully aware that all argument forms that rely on symmetry in their premises that isn’t there become unsound when you swap things around in the premises and render them false. (They’re still valid, not that that matters.) However, I hadn’t noticed that there were that many things that were so reliably swappable that swappability should be the default assumption. The pure premiseless argument forms are only half the battle, you know. Possibly less.

The temporal asymmetry in my worldview springs from the asymmetric facts that I have memories of the past and none of the future, and my present actions tend to seem to turn out to have a causal effect on the future, and no effect whatsoever on the past. Bingo, the cause is identified. I hadn’t thought it was that subtle, really. I’ve been openly asserting that the past and future are asymmetric in properties for the entire time since I openly marveled that a living human being (you) seemed to think that they weren’t.

But, I’ll investigate further. Do you lack memories of the past? Do you have memories of the future? When you do something in the present, does it seem to have direct effects on your memories of past events, in a seemingly causal way? When you do things in the present, do they tend to have no causal effect whatsoever on the future?

I’ll reply to the post in full in a little while. However, you do note, at the end, that your perspective is grounded in certain empirical observations you have made. Great. But if it’s just a matter of what the empirical evidence supports, then how come no number, however high, of observations of the predictor maintaining a 90% accuracy rate when playing the game with you leads you to modify your perspective? (To account for and accept that, almost always, when you take both boxes, it turns out that the predictor previously left out the million, while almost always, when you take just one, it turns out that the predictor previously put in the million?) I mean, you’ve noted that your perspective is grounded in part in your never having come across such reliable predictors; however, with enough exposure to them, wouldn’t it be logical to change your perspective?
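For concreteness, here is the bookkeeping that question appeals to, as a minimal sketch (the 90% figure and the payoffs come from the scenario above; the variable names are mine):

```python
# Expected value under a predictor who is right 90% of the time,
# with the usual payoffs: $1,000 in box 1, and $1,000,000 in box 2
# iff the predictor foresaw one-boxing.
accuracy = 0.9

# Take one box: with probability 0.9 the predictor saw it coming
# and put the million in; otherwise box 2 is empty.
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Take both boxes: with probability 0.9 the predictor saw that
# coming too and left box 2 empty; otherwise you get everything.
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

print(f"EV(one box):   ${ev_one_box:,.0f}")    # $900,000
print(f"EV(two boxes): ${ev_two_box:,.0f}")    # $101,000
```

This is the one-boxer’s arithmetic being invoked; whether those conditional probabilities are the right thing to condition on is, of course, exactly what is in dispute here.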

I, too, would have difficulty accepting the existence of and ramifications of these talents of the predictor, at first, but with enough evidence, I would eventually come to that conclusion and adjust my behavior accordingly. Why wouldn’t you?

(Well, maybe you would. You did indicate something like that before. But then you said, in response to questions about how this was compatible with your disagreement with the one-boxers:

I’m not sure what you mean by this. Could you clarify?)

(ETA, but outside the edit window: Never mind, I think I see. You’re saying you wouldn’t have a choice, that there would be only one thing you could possibly say, in light of the predictor being so good that you apparently lack free choice. Well, like I keep saying, if it helps your perspective, take the predictor as 90% successful instead of infallible; both options on the table remain possibilities for you, whatever the predictor says, but there’s still this amazing accuracy to his predictions.)

Simply because, once the boxes are on the table, their contents have already been decided, and nothing I can possibly do can increase my odds that one of them contains the million. This is an undisputed fact, stated explicitly in the scenario description, isn’t it?

Sure, trust in the predictor makes throwing away the thousand seem like a good move; however, the good feelings I might get by trusting in the predictor do not actually increase the odds that the box is full or empty. I mean, I can calculate odds for the box all day, coming up with answer after answer, but regardless of what I estimate about the contents of the box, the actual contents of the box will be what they always were, perfectly unchanging and completely independent of my “feel good” predictions.

Because the contents of the box are going to ignore the statistical implications of my choice, there’s no point in basing my selection on an attempt to increase those statistical implications. The only thing there’s a point in doing is trying to get more actual money, and there’s more actual money in the two boxes than there is in the one.

By the way, going waaay back to the □x stuff for a moment, which at last count was being defined to mean “I know x is true”. I’ve been thinking; isn’t every statement in a valid logical proof known to be true by the arguer? So, presuming that the “I” in the definition of □ is the same person as the one doing the proof, doesn’t □A = A hold true for all statements A?

Well, no, “□A <-> A” is not a generally valid theorem, and neither is the weaker claim “A -> □A”. It fails to be a generally valid theorem because it’s clearly possible for things to be true without my knowing them to be true. As an example of what goes wrong when you assume this, consider the case where there’s a prize behind either door A or door B; in this case, we’re making the assumption A v B. If □A = A and □B = B, then we can substitute in to conclude □A v □B. But this is saying that I know which door it is behind, which we should not be able to conclude.

As for your thought, the way it happens is that if you can prove A without using any premises, then you can prove □A as well. But often you will, in some fashion or another, take A as a premise without taking □A as a premise, and you will not be able to derive □A simply from having A around. The assumption that A is true doesn’t imply that I know A to be true, even if I’m currently working out the logical implications of that assumption. Assuming “A is true” is strictly weaker than assuming “A is true and I know it to be so”; for example, I can coherently imagine that, say, I was adopted but that I am unaware of this fact, or that the prize is behind Door B but I am unaware of this fact, or that my house is on fire, or P = NP …, or various such things. In working out the implications of “I was adopted”, I don’t automatically get the right to assert “I know that I was adopted”; if I want to assert that, then I have to take it as an extra assumption. I can reasonably sketch out an argument for “If my house is on fire, then my books are probably damaged”. I’ll have much less luck sketching out an argument for “If my house is on fire, then I am probably frantically making plans for recovering from the damage.” I might well be off on a blissfully ignorant vacation. Just because there’s a premise “My house is on fire” at some point in the argument, we don’t get the right to treat it as, or infer, “I know my house is on fire”.

The actual way the propositional logic of □ works, minimally, is the following system (called K): you take all the normal rules of propositional logic, and add the rule “If B can be concluded from the premises A1, …, An alone, then □B can be concluded from the premises □A1, …, □An alone”.

That’s a pretty weak system, so you generally augment it with other things, depending on your interpretation of □. Reasonably, we can add a rule that what is known is true: for all A, □A -> A. This gets us a system called T. Then, we can add the rule of positive introspection (if I know A, then I know that I know A) to get the system S4: for all A, □A -> □□A. Finally, we may want to add a rule of negative introspection (if I don’t know A, then I know that I don’t know A) to get the system S5: for all A, ~□A -> □~□A.

Those are the most commonly studied systems, and note that in none of them can you prove “for all A, A -> □A”, which would trivialize them with essentially the assumption of omniscience.

In short: Every line in a proof which is premiselessly justifiable is indeed assumed to be known by a logically ideal agent. But much of what is in a proof is usually not premiselessly justifiable, and just because some premises give you A, it doesn’t follow that those same premises give you □A.
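To make the doors example concrete, here is a minimal sketch of the usual possible-worlds (Kripke) semantics for □ in Python; all the names in it are mine and purely illustrative. Two worlds the agent cannot tell apart, with the prize behind door A in the actual one:

```python
# A tiny Kripke-model evaluator for the epistemic reading of □ ("I know").
# `worlds` maps each world to the set of atoms true there; `access` maps
# each world to the set of worlds the agent cannot distinguish from it.
def holds(world, formula, worlds, access):
    """Formulas are nested tuples: atoms are strings; ('not', f),
    ('or', f, g), ('imp', f, g), and ('box', f) for □f."""
    if isinstance(formula, str):
        return formula in worlds[world]
    op = formula[0]
    if op == 'not':
        return not holds(world, formula[1], worlds, access)
    if op == 'or':
        return (holds(world, formula[1], worlds, access)
                or holds(world, formula[2], worlds, access))
    if op == 'imp':
        return (not holds(world, formula[1], worlds, access)
                or holds(world, formula[2], worlds, access))
    if op == 'box':  # □f: f must hold at every accessible world
        return all(holds(v, formula[1], worlds, access)
                   for v in access[world])
    raise ValueError(f"unknown operator: {op}")

# Prize behind door A in w1, behind door B in w2; the agent can't tell
# the two apart. The relation is reflexive, which validates T's □A -> A.
worlds = {'w1': {'A'}, 'w2': {'B'}}
access = {'w1': {'w1', 'w2'}, 'w2': {'w1', 'w2'}}

print(holds('w1', ('box', ('or', 'A', 'B')), worlds, access))  # True:
# the agent knows the prize is behind one of the doors...
print(holds('w1', ('or', ('box', 'A'), ('box', 'B')),
            worlds, access))                                   # False:
# ...but not which one, so □A v □B fails even though □(A v B) holds.
print(holds('w1', ('imp', 'A', ('box', 'A')), worlds, access)) # False:
# A is true at w1 without being known, so A -> □A is not valid.
print(holds('w1', ('imp', ('box', 'A'), 'A'), worlds, access)) # True:
# □A -> A holds, as it must on any reflexive frame.
```

The same little model separates the two directions of the biconditional: reflexivity of `access` guarantees □A -> A everywhere, while nothing forces A -> □A.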

Sure, but in all of this you’re expressly distinguishing the “I” in the “I know x is true” from the individual making the argument. When you make an argument, as you make the argument, knowledge is added to your knowledge base about the statements in the argument. You’d think that self-referencing factor would have an effect, or at least cause a logician somewhere to get a migraine or something. Ah well, this isn’t really that important to my Newcomb argument anyway (since I subsequently figured out the more correct way to incorporate your objection into my argument); I was just curious if the self-referential effect of speaking of the arguer’s knowledge within the argument about the argument was believed to make a difference.

Yeah, I kind of understand where you’re coming from (and also that you don’t mean this to be directly important to your Newcomb argument). But if you really want to make every “step” in your argument satisfy □A = A, then all you need to do is make all your premises of the form □B, in a suitable logic like S4. In taking as a premise B instead of □B, you’re sort of explicitly saying “I only want to explore the consequences of B being true; if I had wanted to explore the consequences of myself also knowing B to be true, I’d have taken □B as a premise instead”.

Well, okay. Though it occurs to me that, while a proof about “what Mr. Indistinguishable would do if presented with this problem” cannot be resolved with cases, I think a proof about what a person should do when presented with the boxes can be resolved, since from that standpoint you don’t have to frame it in terms of ignorance prior to doing the cases; you can just frame it in terms of “this is what the possible scenarios are”, which would lack the □s.

I would consider how much I wanted $1000 vs. $1 million. If, for instance, I really needed $1000, I would take both boxes (guaranteed at least $1000). If I didn’t care at all about the $1000, but did want the $1 million, I’d go for B2 just to see what happens. If I didn’t even care about the $1 million, I’d go for both boxes again, just to check it all out. In no case would I bother caring whether or not some guy made the right prediction a long time ago.

Seems that everyone is getting too theoretical because they don’t have the real boxes there. Given the prospect of real money, I think most people would ignore the theories and just use common sense.

Sure, but the question is, what’s sensible?

People’s common sense differs. Some people’s common sense is “if the contents of the boxes are locked in, then more boxes = more money”. Some people’s common sense is “if taking one box has been shown repeatedly to generally result in more money than taking two boxes, then choosing to take one box is generally better than choosing to take two boxes”. Some people’s common senses are combinations and confusion and attempts at reconciliation and “You know, I don’t really know”.

I don’t see how you can say you’d “go for B2 just to see what happens” without “caring whether or not some guy made the right prediction a long time ago.” What could be motivating your curiosity regarding the one-box option if not curiosity as to the accuracy of the predictor? The only reason to think that taking two boxes lowers your chances at getting the million is concern about the accuracy of the predictor.

ETA: Yeah, what begbert2 said.