Why would anybody not take both?
My cats approve of this message.
If the computer predicted this line of thought, then there is in fact a cost to hoping for an error in the prediction.
True, if you completely randomize your decision, the computer may be able to predict that you will do that, but not what your random decision will be.
However, I don’t see the upside to doing this. I’d rather just take the one box and rely on the computer to know that. Managing to trick the computer is nice, but it doesn’t actually increase my odds of getting $1,000,000 compared to just taking one box.
I think this is a better answer than “two boxes”: people who pick two boxes are fighting the hypothetical in the same way but don’t see it that way, so they just say “two boxes.” You’re clear about which part of the scenario you reject as possible, which is much more explanatory.
It can’t. But if your choice was determined by everything else that had happened to you up to that point, like your upbringing and your brain chemistry in the moment when you’re asked to make a decision, then theoretically a sufficiently advanced computer could simulate what you would choose in that scenario, and then make a prediction based on that knowledge.
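Purely as an illustration (my own toy sketch in Python, with made-up names like decide and run_newcomb, not anything from the puzzle): if the choice really is a deterministic function of the agent’s state, a predictor that runs the same function ahead of time is never wrong, and a committed one-boxer always walks out with the $1M.

```python
# Toy sketch of the "simulation" idea (my own framing, not part of the puzzle):
# if the agent's choice is a deterministic function of its state, a predictor
# that runs the same function ahead of time is never wrong.

def decide(state):
    """The agent's deterministic decision rule: returns 1 or 2 boxes."""
    # Stand-in for "upbringing plus brain chemistry": here, just a fixed policy.
    return 1 if state["policy"] == "one-boxer" else 2

def run_newcomb(state):
    prediction = decide(state)                 # the predictor simulates the agent
    opaque = 1_000_000 if prediction == 1 else 0
    choice = decide(state)                     # the agent actually chooses
    return opaque if choice == 1 else opaque + 1_000

print(run_newcomb({"policy": "one-boxer"}))    # 1000000
print(run_newcomb({"policy": "two-boxer"}))    # 1000
```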
How many boxes do you take afterwards?
What’s funny is that I’m equally baffled by anyone taking more than one box.
As @Mops said, the whole scenario is bunk. The computer is not clairvoyant. It cannot have a past prediction record of 1,000 correct, zero incorrect. So either it’s lying to you, or whoever told you it was perfect was lying to you.
Even if it had somehow been monitoring my every decision and action since birth, it couldn’t predict with 100% accuracy what I’ll do. Human behavior is far more chaotic than a clockwork universe.
This is also a variation on the classic “infinite regress” type of problem. “If I know that it knows that I know that it knows …” where depending on whether you trace back an odd or even number of steps the better answer is A or B.
With the final result that although you have to choose, you have no basis to decide. Hence @Der_Trihs’s wise idea to simply flip a coin and abide by that outcome.
Anyone thinking there’s a right answer is wrong. IMO.
So is this just a “there is no free will” argument dressed up like a probability logic puzzle?
If a computer can really make a 100% accurate prediction of any single individual person’s behavior (not an aggregate probability based on a population of humans’ behaviors), then you are either in sci-fi time travel / precognition terrain or you are arguing that there is no “non-predictable” free choice.
I guess I’m a bit confused, though. As framed can’t I just take the single opaque box and get the $1M? It will know that’s what I’m going to do, so I will get the money. Or are the choices “take the open box or take both boxes”?
I think it’s more a question of which logical system you’re using to derive your argument from baseline assumptions that no one ever thinks about without these sorts of really detailed hypotheticals.
That’s my read, but I’m a one-boxer.
I think knowing the potential outcomes ahead of time spoils the game a bit. Maybe it would have been better to spoiler them. Assuming I don’t already know the potential outcomes when I make my choice:
- $1000 is nice. If I’m in a pinch that week, I might be inclined to take it. It’s also a small enough sum that I can usually afford to lose it if there’s a decent chance at a better payoff.
- I know that I’m being tested (sort of akin to the Marshmallow Test), and I know that there’s a 50% chance that the opaque box holds something better than the $1000 (otherwise it wouldn’t be much of a test; unless there’s a dead cat in it, in which case it’s a different kind of test).
I’m taking the opaque box and living with the consequences. Kind of like walking into a casino and putting $1000 on black at the roulette table. As it turns out, with a much better payoff.
It does not have to be “100% accurate”. Say it is “99% accurate”. Then you might reason that the expected value of taking one box is greater. But, wait, then the supercomputer knew you were going to think that…
A bird in the hand is worth two in the bush. I’d go for the golden egg instead of the pig in a poke.
Isn’t the expected value of taking the single opaque box $1M? The only way I get nothing in that scenario is if the computer predicted I would take both boxes. But why would I do that (and, more to the point, why would the computer predict I would do that)? Just for the extra $1K? Surely a smart computer knows that isn’t worth the risk.
It comes down to how the computer predicts, and how accurate that prediction really is. Even if it’s only 50% accurate taking the single opaque box is the right choice ($500k EV).
I’m sticking with the argument that the computer is always right. And if it’s always right then you should take the opaque box and enjoy your $1M.
I think you’re right for the wrong reason. There is no expected value of the opaque box. All you know is that it could contain $1000 or more, or it could contain less than $1000. Taking the opaque box is a 50/50 bet.
The surface-deep analysis, without considering how the computer predicts what you will do, is that the expected value of taking the single box is, say, 99% × $1,000,000 = $990,000. So, rationally, you would choose that over the alternative. On the other hand, rationally, two boxes cannot contain less money than one…
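To spell that arithmetic out (my own rough sketch, assuming the predictor matches your actual choice 99% of the time whichever way you decide):

```python
# Rough expected values assuming the predictor matches your actual choice
# 99% of the time (my assumption; the dollar amounts are from the puzzle).
p = 0.99

# One box: you get the $1M only when the predictor correctly foresaw "one box".
ev_one_box = p * 1_000_000                    # 990,000

# Two boxes: the $1,000 is guaranteed, plus the $1M in the 1% of cases where
# the predictor wrongly expected you to take only one box.
ev_two_boxes = 1_000 + (1 - p) * 1_000_000    # 11,000

print(ev_one_box, ev_two_boxes)
```

On those numbers the one-box expected value dwarfs the two-box one, which is exactly the tension with the “two boxes can never contain less” argument.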
no box-proxy for me …
I go straight for the $$$$$
I think I must be missing something.
My options are to leave the room with either:
- One box, containing $1000
- One box, containing either $0 or $1,000,000
- Both boxes
… and there are no additional consequences for any option?
Why would you not take both?
It seems like there’s a fallacy in the belief that being a person who takes only one box today will have had an impact on what the computer predicted yesterday.
I think people are focusing too much on the computer.
This, exactly. There’s probably nothing in it, but there’s no reason not to, and I have a friend who often ships things and can probably use the box.
How does it do that? All the scenario does is imagine that people are that predictable; it doesn’t actually create a computer that can perform the prediction.
Apparently we have people on this board who think $1000 isn’t worth picking up and carrying away.
I wish they’d send it to me. I’ll send them back the postage.
Well, it depends on your priors, I guess (hence the apparent paradox). If you assume, as I do, that the consequences are deterministic, then these are the choices:
- One box, containing $1k
- One box, containing $1M
- Two boxes, containing $1k
This assumes that the computer is always right, which is how I read the problem. That is, whatever you choose is what the computer will have predicted you would choose.
If you assume that the computer is guessing (i.e., there’s a 50/50 chance that your choice matches its prediction) then you have:
- One box, containing $1k
- One box, containing (on average) $500k
- Two boxes, containing (on average) $501k
You can adjust the second and third numbers based on however you want to assign the odds of the computer being “right”.
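For anyone who wants to turn that knob, here’s a quick sketch (my own, not from the thread) that recomputes both expected values for any assumed accuracy p:

```python
# Expected value of each choice as a function of p, the probability that the
# computer's prediction matches your actual choice (my own framing).

def ev_one_box(p):
    return p * 1_000_000                   # $1M only if it predicted "one box"

def ev_two_boxes(p):
    return 1_000 + (1 - p) * 1_000_000     # $1k always, plus $1M if it guessed wrong

for p in (0.5, 0.5005, 0.99, 1.0):
    print(p, ev_one_box(p), ev_two_boxes(p))

# At p = 0.5 these are the $500k / $501k figures above; one-boxing only pulls
# ahead once p exceeds roughly 0.5005.
```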
Hmmmm.
Maybe I will toss a coin.
Is this exercise related to the “three-door Monty Hall” thing? Because I never understood that either.
No, not really. The probabilities in the Monty Hall problem are much cleaner (assuming it’s stated correctly): Monty knows where the goats are for sure, so your decision doesn’t require any special knowledge. In this problem there’s the added issue of whether the computer really knows anything about your choices.