Do you take one box or two boxes?

And my daughter complains that I get annoyed by plot inconsistencies in movies that aren’t about plot. And by implausible fake stuff in movies, too, for that matter. This is who I am. I have no faith in this setup.

I’d have no faith in the setup if anything were asked of me, especially money or the like. I’d even question it if I were asked to chop off a finger, because it’s just as plausible that there are actually people who love giving away a million dollars as it is that there are people who just get off on making people mutilate themselves.

On the other hand, if the people provably received the money with no strings attached, I’d one-box.

Yep, I just don’t have any faith in it, either. If I can’t see how it works, then I assume it’s going to peg me as a two-boxer. That’s how I see myself, at least. But on that morning I might be more interested in testing the computer than having it test me. So I might just grab the one box. If I can’t figure out in advance which I’d do, I can’t see how the computer could do it without knowing a lot more about the process it’s running.

After all, everyone who was in the run of 1000 could have just been easy to read for whatever reason.

Why? Maybe what the computer has is teleportation. It puts the million dollars in the box and then if you pick up both boxes, it teleports the million dollars back out of the box. Or maybe it teleports the million dollars into the box after you choose the single box.

But I feel this is a side issue. Whatever method the computer is using, the evidence of the thousand previous guesses has demonstrated that it is a statistically accurate method.

That’s the only thing that would make me think that two-boxing is not implausibly unlikely to succeed. You’d have to have a computer that is just pretty darn good at guessing psychologically, but that randomly picked a lot of people who never second-guess themselves. That’s still unlikely, but more likely than the 1-in-2^1000 chance that it has guessed correctly so far by luck, even when people overanalyze the situation.
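
For scale, back of the envelope: a pure coin-flip guesser goes 1000-for-1000 with probability (1/2)^1000 ≈ 10^-301, and even a guesser that’s independently right 99% of the time sweeps 1000 people only about 0.99^1000 ≈ 0.004% of the time.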

So if you are presented with some non-Euclidean geometry, do you reject it because you don’t trust the postulates? The setup of the hypothetical is axiomatically true.

You don’t know that, and it doesn’t matter for the EV of 9-vs-10 fingers. All you know is that of the people who opened Box B and found one million dollars, 90% had fewer than 10 fingers. So the EV of opening the box with 9 fingers is FAR higher than the EV of opening it with 10 fingers.
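
On that reading (treating the 90% as the chance that Box B pays out for a 9-fingered opener, which is the assumption this whole line of argument runs on), the arithmetic would be 0.90 * 1000000 = 900000 versus 0.10 * 1000000 = 100000.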

And there are many things you can do to convince them of that. Just make irrational choices publicly and often (don’t file expense reports, don’t cash checks, leave all your change on the counter, etc.); all of those might convince the computer you are a one-boxer. But one thing that will never convince them you are a one-boxer is taking one box, unless you have already taken part in this experiment multiple times (which I assume you’d have mentioned :wink: )

Perhaps if I’d never been exposed to non-Euclidean geometry I would be. But many non-Euclidean geometries can be described in intuitively satisfying ways, such as the surface of a sphere. This setup is toying with causality and correlation in a very unintuitive and unsatisfying way.

And it just screams “real-world scam”, despite the claim that it’s legit. And actually, the original setup is a little vague as to what you know for sure and what the computer tells you. Just because it has actually, for real, been right doesn’t mean that you, walking into this room, have reason to believe that:

As I said above, I think there are a lot of unstated assumptions in this question.

Yes, if this situation were completely normal in the world I live in, and I knew the computer always guesses right and always gives out the money, then I might treat it like an ATM, where I know that if I give it this piece of secret information it will give me cash, or like a store, where I know that if I give them money I can walk away with food; I might routinely take the closed box. But in the world I live in, money lying around in boxes belongs to someone, computers don’t a priori have the right to give it away, and this setup looks like a complex scam that I haven’t figured out.

One thing that has been brought up is to, perhaps, make a random choice, since there is no way a computer could predict the outcome (assuming it does not cheat), even if it predicted someone would do that. However, humans are not necessarily good at making quick “random” choices, so the idea is that a supercomputer is still going to get it right more than half the time in those cases (again, the hypothetical is simply that you have watched this trick be done, say, 100 times, and the computer got it right every time).
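
A minimal sketch of why, assuming the player’s snap “random” choices secretly lean one way (the 60% bias and the predict-the-majority rule are illustrative assumptions, nothing more):

```python
import random

# Hypothetical: a player whose quick "random" picks secretly favor
# one-boxing 60% of the time, and a predictor that has learned that bias
# and simply predicts the player's majority choice every round.
HUMAN_BIAS = 0.60   # assumed P(player one-boxes on a whim)
TRIALS = 100_000

correct = 0
for _ in range(TRIALS):
    player_one_boxes = random.random() < HUMAN_BIAS
    predictor_says_one_box = True  # always bet on the majority side
    if player_one_boxes == predictor_says_one_box:
        correct += 1

print(f"predictor accuracy: {correct / TRIALS:.1%}")  # ~60%, i.e. > 50%
```

The point is just that any stable bias, however small, puts the predictor above 50%.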

So the computer would have no choice but to murder you before you made your choice, to maintain its 100% record.

In fact, maybe that’s how the computer keeps its 100% record. The moment you reach for a box that doesn’t match its prediction, whammo, it’s time to crush this puny human into a thin pulp.

I’ve now bothered to read how those in the philosophy departments handle this and found this post, which is a good walkthrough:

Some of us (not me) are “causal decision theory” (CDT) people. Others are in my “evidential decision theory” (EDT) camp. But there are others too!

It even includes a detailed analysis of the coin-flip strategy, @dprk, and says the best is a slightly unfair coin that comes up one-box slightly more often.

At , the predictor leaves Box B empty and you average $510. At , the predictor fills Box B and you average $1,000,490. A 2% shift in coin bias produces a $999,980 jump in expected payoff.

The optimal mixed strategy is just above p = 0.5: one-box slightly more than half the time. This earns approximately $1,000,500, which beats pure one-boxing ($1,000,000) by $500. You get the million (the predictor fills Box B because you’re majority one-box) while occasionally grabbing the extra $1,000 when the coin lands on two-box …

For those of us who have gotten intrigued by the strength of the positions held, it is a good read!

The quote left out the probabilities:

At p = 0.49, the predictor leaves Box B empty and you average $510. At p = 0.51, the predictor fills Box B and you average $1,000,490. A 2% shift in coin bias produces a $999,980 jump in expected payoff.
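
For anyone curious, a minimal sketch of where those numbers come from, assuming the predictor simply fills Box B whenever the coin’s one-box probability p exceeds 0.5 (the articles don’t pin down the p = 0.5 tie-break, so it’s ignored here):

```python
def expected_value(p: float) -> float:
    """EV for a player who one-boxes with probability p, under an
    assumed threshold predictor that fills Box B iff p > 0.5."""
    if p > 0.5:
        # Box B holds $1,000,000; the two-box branch also grabs Box A's $1,000.
        return p * 1_000_000 + (1 - p) * 1_001_000
    # Box B is empty; only the two-box branch collects Box A's $1,000.
    return (1 - p) * 1_000

for p in (0.49, 0.51):
    print(f"p = {p}: ${expected_value(p):,.0f}")
# p = 0.49: $510
# p = 0.51: $1,000,490
```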

Though there are two big problems with that…

  • It assumes that a computer which is quite good at picking one-boxers (who are, let’s face it, obvious marks :wink: ) is just as good at picking the specific type of coin the coin-tossers choose. Which is not a good assumption, seeing as we know it has not encountered any coin-tossers (or hardly any, as it has no failed predictions).
  • It again goes back to the finger-cutting-off EV. As long as you choose the type of coin after you walk into the room, it cannot affect the actual outcome, any more than cutting off your finger can. The EV is completely spurious, a meaningless number that makes no sense to use as a decision-making tool.

Yes, the thought problem is set up so that “the predictor” (supercomputer, apparently often referred to as “Omega”, “demon”, whichever) is very, very, very good at predicting what the player will do. It cannot control random events, though; no precognition. So it will have correctly predicted that the player will choose the 51%-biased coin-flip approach, and then maximized its share of being right for that player.

The thought problem doesn’t necessarily explain how, although the philosophy articles get into a difference between solutions based on it running a perfect model of you in advance, vs. it recognizing patterns (those from this town always do this, and those from that town always do that).

It’s a thought problem; not the real world.

In the specifics of this thought problem, it doesn’t. Again, no set of numbers you come up with would outperform being someone who is a single-box chooser.

Note though: the rules of the thought experiment are set up so that the population of players who use CDT consistently gets less than the group that uses EDT. Really, the predictor only needs to know whether someone uses CDT or EDT as their default algorithm to make a very reliable set of predictions! People whose cognitive process is CDT will be two-boxers and those who use EDT will be one-boxers. The predictor will predict that, and those who use EDT will all get the million while those who use CDT will get a thousand.
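
As a toy sketch of that claim (the predictor here literally just reads off each player’s decision theory, which is a simplification I’m assuming for illustration):

```python
# Toy model: CDT players two-box, EDT players one-box, and the predictor
# forecasts exactly that, so it is never wrong.
def payoff(theory: str) -> int:
    two_boxes = (theory == "CDT")            # the player's actual choice
    predicted_two_boxes = (theory == "CDT")  # the predictor's forecast
    box_b = 0 if predicted_two_boxes else 1_000_000
    return box_b + (1_000 if two_boxes else 0)

print(payoff("EDT"))  # 1000000 -- every EDT player gets the million
print(payoff("CDT"))  # 1000    -- every CDT player gets only the thousand
```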

The counter-problem used in those philosophy articles to allegedly demonstrate the failure of EDT, which is what you are trying to create, is usually “the smoking lesion problem” - correlation is not causation:

All of these thought experiments are contrived though. They accept axiomatically that which is unknown in any real world.

Accepting that the predictor is not a scam would be something most people in any real world would be resistant to do without extraordinary proof of that extraordinary claim. The problem, though, demands that as part of the setup.

A strong correlation of smoking with lung cancer would be (was) neither accepted as proof on that basis alone, nor easily and confidently dismissed by the simultaneous presence of a genetic marker.

I’m not sure how much these test the types of decision theory?

It seems clear that none of this discussion has changed which camp anybody is in. (please do correct me if I am wrong.) But I wonder if we have changed our view of the other side? I have; this is what two-boxers sound like to me now:

“I would rather lose $1m and receive only $1k than run the risk of being a schlemiel who walked away with $1m when they could have had $1.001m.”

Yes it could…

Let’s say the numbers we are given are: the predictor is right 75% of the time, but 99% of the people who found 1 million dollars in Box B had fewer than 10 fingers.

EV of Box B for a one-boxer…

0.75 * 1000000 = 750000

EV of Box B for a two-boxer…

0.25 * 1000000 = 250000

EV of Box B for a 9-fingered opener…

0.99 * 1000000 = 990000

EV of Box B for a 10-fingered opener…

0.01 * 1000000 = 10000

Cutting off your finger is a far better way of increasing the expected value of Box B than choosing one box.
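
For anyone who wants to fiddle with the numbers, here’s the same arithmetic as a quick sketch (the 75% and 99% figures are just the ones assumed above):

```python
BOX_B = 1_000_000

# Each entry: P(Box B pays out | the stated evidence about that group)
scenarios = {
    "one-boxer (predictor right 75%)": 0.75,
    "two-boxer (predictor right 75%)": 0.25,
    "9-fingered opener (99% of winners)": 0.99,
    "10-fingered opener": 0.01,
}

for label, p in scenarios.items():
    print(f"{label}: EV = ${p * BOX_B:,.0f}")
# one-boxer: $750,000    two-boxer:  $250,000
# 9 fingers: $990,000    10 fingers: $10,000
```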

That’s not my perspective. I think that my pretending here that I’d trust the setup and take one box wouldn’t actually change who I am and how I make decisions. If this really happened, there would be other information, and maybe I’d become a one-boxer. Or maybe I wouldn’t. But I don’t think that trying to trick the imaginary computer in the imaginary scenario by claiming you really, really believe in it has any value. I think a lot of people here don’t actually know what they’d do, and that in fact, what you’d do depends on a lot of stuff that’s not given in the problem, including stupid physical conditions like how much sleep you had the night before and how hungry you were when you looked at the boxes.

So that’s a good comparison, as it shows how wrong the EV calculation is. The original counter-argument to smoking causing cancer was the whole “type A and type B personality” thing. The argument was that the kind of people who smoke a lot are “type A” (the term was invented for this argument): they have cancer and smoke because they are driven, high-stress people; there is no causal relationship.

That was debunked by showing, statistically, that the relationship was (very likely) causal, not just correlation. But in this example we know it’s not causal. Choosing to take box A has no causal relationship to being a “two-boxer type”. Lots of things might be, but we absolutely know that taking box A in the future cannot cause you to be predicted as a two-boxer in the past.