Are you a One-Boxer or a Two-Boxer?

In the problem in its classic form, the one that ties philosophers into knots, this is definitively not the rule.

Exactly. I think most people in this thread are badly misunderstanding the problem, unfortunately. The predictor doesn’t have to be perfect. It’s just that the predictor is good at predicting. That’s all. All you need is for him to be “probably right.”

I need some help understanding this…

If a situation with a better-than-average predictor runs into the same problems as one with a 100% perfect predictor, in what sense are they different?

Or are they equivalent? And if they are, can’t you also state the problem as the predictor being perfect?

Does a 51% predictor run into the same issues with causality, determinism and paradoxes as a 100% predictor?

That does seem a bit absurd to me, so I suspect that it’s not the case. But I need someone to explain to me why not.

I think it does, because consider what we are saying he is good at 51% of the time: mind reading. Let’s say that I am a really good judge of character. I say that you are a conservative type who would likely pick one box. **Mr. Shine** is a “you only live once” guy, so I think he will pick both boxes.

But I submit that no mere mortal can read inner thoughts well enough to do better than 50/50 at predicting, not what your first instinct is, but what you will ultimately do. Hell, faced with a supposed 51% predictor, we don’t even know ourselves what we would ultimately do. I think the idea of a 51% predictor is impossible in our universe.

I also think that’s what infects the two-box logic: they just can’t fathom that they can’t outsmart the guy.

The problem I have is that, using the two-boxers’ logic, there’s no reason for the predictor even to be better than chance. Assume your utility from getting $1,000,000 is more than 1,000 times your utility from getting $1,000 (which it probably is), and that your utility from getting $1,001,000 is not much higher than from getting $1,000,000 (which, again, is probably true).

That’s where I see a paradox. At the level of random chance, the two-boxer logic finally holds. The predictor’s choice has absolutely nothing to do with my choice, so X is an independent variable, and the logic holds that X + $1000 is better than just X. So you have two bits of logic that both seem to hold. Which is correct?
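To make the two bits of logic concrete, here’s a quick expected-value sketch. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing is predicted, $1,000 always in the clear box) and that “accuracy p” means the prediction matches your actual choice with probability p, whichever choice you make; those assumptions are exactly what the thread is arguing about, so treat this as one way to formalize it, not the definitive reading.

```python
# Expected winnings under a predictor with accuracy p, using the
# standard Newcomb payoffs: $1,000,000 in the opaque box if the
# predictor expects you to one-box, $1,000 always in the clear box.

def ev_one_box(p):
    # You win the $1,000,000 only when the predictor correctly
    # foresaw that you would one-box (probability p).
    return p * 1_000_000

def ev_two_box(p):
    # You always get the $1,000; you also get the $1,000,000 when the
    # predictor wrongly expected you to one-box (probability 1 - p).
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.5005, 0.51, 0.8, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
```

Under these assumptions the break-even accuracy is 50.05%: at exactly chance (p = 0.5) two-boxing comes out $1,000 ahead, but at 51% one-boxing already has the higher expected value. That’s the sense in which the dominance argument only clearly wins when the prediction is uncorrelated with your choice.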

There’s no third choice, either pick AB or pick B.

Not “Pick B, but then grab box A on the way out for an easy extra $1k”. That’s the same as pick AB.

It’s a bit bogus to think the prediction will usually be wrong, and moreover usually wrong in a way that just happens to favor your strategy, especially if the prediction is so good that it’s “uranium atoms will decay, sun will rise again tomorrow” type accuracy.

I’d need to know more about the nature of the predictor before answering this. Let’s say that this experiment is in a game show setting, and the predictor is really some sort of super-scientific device that takes my brain-state as input, and then outputs a prediction. My brain gets scanned, and it determines that I’m a one-boxer. If I were to make the decision right then, I would only choose the opaque box. But suppose sometime between the prediction and my choosing, someone in the audience screams: “Pick both! You’ll always make more money regardless of the prediction!” I fully intended to only pick the opaque box originally – heck, maybe I would have never realized the two-boxer logic on my own – but some external influence, which the predictor could not have possibly known about, has changed my mind. It seems like the best option is to be a one-boxer going in, have a revelation, and be a two-boxer coming out – not possible for me anymore, though.

You and the two boxers want to keep fighting it. The Predictor is the perfect predictor. It must be assumed.

I swear, I don’t understand why two boxers want to fight this point. Yes, it is impossible, but it is part of the puzzle. So don’t fight the hypo.

In IMHO, if a poster says, “I had an argument with my wife last night…” nobody comes in and says, “What if instead of an argument you meant sex, and instead of wife you meant illicit lover? You are a piece of trash!!!”

We don’t fight it there. Why fight it here?

Nobody is fighting the hypothetical.

If I can’t outsmart the guy and the decision was already made, then why are we still pretending like I have free will? Like I can do anything but choose two boxes? I have to. It’s predetermined. No amount of logic can sway me from my destiny.

Why would that conflict with free will? The decision you made was a result of the processes within your own head. The Predictor might have been able to guess those processes, but he did not influence them. You made the decision yourself. That looks like free will to me.

Put it another way: I had honey nut flakes for breakfast this morning. I did so because that was my decision. It is now known for certain what I had for breakfast this morning. There is no scenario, going forward, where it is not the case that I had honey nut flakes for breakfast on the morning of 22 Aug 2013. Does this certainty mean that my decision to have that cereal lacked free will? If you say that it’s different because the event was in the past, why? How is that different?

Free will doesn’t mean what most people think it means. It can’t, because most people don’t actually know what they think free will means.

Better yet. I’m sitting at a red light during rush hour. Do you believe I will sit at the light until it turns green? Or will I hit the gas and rush through the busy intersection?

With > 95% probability, the predictor will predict I will sit at the intersection, even if I was offered $1 million to run through it. That doesn’t rob me of my free will, either.

If the predictor is magic under our current understanding of physics (time traveling, perfect brain scan + no free will, etc), then one boxing is correct.

If the predictor is a smart guy with an 80% success rate, then two boxing is correct. Because in the case of just a smart guy, your actual decision doesn’t and can’t change what his prediction was.

Your “actual decision” and his “prediction” are identical things. Two boxers treat them as if they are different. If the guy has an 80% success rate, then one boxing is correct.

No, by smart guy with an 80% success rate, I mean a non-magical, non-time-traveling regular guy who examines the evidence and makes decisions the same way you or I do. He’s just very good at it.

I have a very high success rate predicting whether Great Antibob will run out into traffic. My success rate is 99.9%, because I predict he won’t every time. If he wanted me to make a prediction right now that he will run into traffic tomorrow, him actually running into traffic tomorrow would be a terrible way to do that.

I’m a one boxer, but I think I’d pick box A just to goof things up.

If you still 2-box, fine. Maybe you’re just that kind of person and the predictor realizes it.

But that’s not the maximizing solution. If you wish to maximize your winnings under such conditions, the solution is still to 1-box.

Due to lack of magic and time traveling, my decision of which box to take has zero effect on what the smart guy predicted. An 80% success rate doesn’t mean taking one box causes you to win $1 million 80% of the time.

The smart guy keys on some variable, or combination of variables, that correlates highly with one boxing. It is desirable to have those variables. Those variables may force/cause you to take one box. If they do, that’s fine. Being thought to one box and taking one box is close to optimal. But taking one box doesn’t cause you to be thought of as a one boxer ahead of time.
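The disagreement over what an “80% success rate” buys you can be simulated. The sketch below adopts the reading the two-boxer disputes: that the prediction matches the player’s eventual choice 80% of the time regardless of which strategy the player uses. Under that contested assumption, one-boxers come out far ahead; under the two-boxer’s reading (the prediction is fixed independently of your choice), the dominance argument stands instead.

```python
import random

# Monte Carlo sketch of the 80%-accurate predictor, under the
# (contested!) assumption that the prediction matches the player's
# eventual choice with probability 0.8, whichever strategy is used.

def play(strategy, accuracy=0.8, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # The predictor guesses the player's actual choice with
        # probability `accuracy`, otherwise guesses the other one.
        if rng.random() < accuracy:
            predicted = strategy
        else:
            predicted = "two" if strategy == "one" else "one"
        # The opaque box holds $1M only if one-boxing was predicted.
        box_b = 1_000_000 if predicted == "one" else 0
        total += box_b if strategy == "one" else box_b + 1_000
    return total / trials

print(play("one"))  # roughly 800,000 on average
print(play("two"))  # roughly 201,000 on average
```

The gap ($800,000 vs. $201,000 in expectation) is why one-boxers say an 80% predictor is already decisive; the two-boxer’s objection is precisely that this model of “accuracy” smuggles in a causal link from choice to prediction.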

I’ll grant it doesn’t if you aren’t planning to maximize your winnings ahead of time.

But if you aren’t going for the money, we’ve strayed an awfully long way from the point of the exercise.

This was confusing me as well. An 80% success rate intuitively sounds like something that is possible in the real world. However, I think **jtgain** is right. For this particular prediction, an 80% success rate (actually, any success rate over 50%) is just as magical as a 100% success rate.

The predictor can’t just be a “smart guy”. He can’t do better than random chance without using magic, you having no free will, or by way of retrocausality.