You walk into a room. Inside are two boxes. One is open, and in it is $1,000.
The other box is closed and opaque.
A supercomputer is in the room. It is extremely good at predicting people. In fact it correctly predicted what 1,000 people would do when presented with the scenario you are about to face.
It tells you that before you walked into the room and were presented with this scenario, it predicted whether you would choose to take one box or two boxes.
If it predicted you will choose one box, it put $1,000,000 into the second opaque box.
If it predicted you will choose two boxes, it put $0 into the second opaque box.
I’m pretty sure there were more recent threads, but I haven’t found them.
Anyway, I don’t care how good the computer is at predicting my actions. It can’t magically change what is in the boxes at the time I make my choice, so I take both of them.
Right. The computer has already made its prediction and decision, so there is no downside to taking both boxes. There is very probably no upside either, but there is no cost in hoping for an error in the prediction.
I agree the computer has already made a prediction and can’t change things. However, the only means it could have to make said prediction (assuming no time travel) is looking at my past behaviors. Thus, if I act in accordance with the idea I will take only one box (including convincingly arguing that I would do so whenever the hypothetical comes up), it would hopefully predict I would only take one box. If I entertain the idea I would take two, I’m dooming myself.
While $1,000 is nice, and I would take it in ordinary circumstances, I am not so badly off that I need it. Yeah, I'll be disappointed if I prove the computer wrong, or if it only made 1,000 accurate predictions because everyone always takes both. But there's a pretty good chance that won't happen.
And of course, if it can effectively see the future, then it doesn’t matter that it can’t make changes.
I take both boxes. Either way, I’m guaranteed at least $1,000. And let’s be fair. The computer knows I’m going to do that. It must know that I’ve been poor all my life, so the opaque box is going to be empty anyway.
$1,000,000 is nice, $1,000 is a rounding error. Irrelevant. Not worth schlepping.
If the computer is wrong and I walk away empty, so be it. Another data point showing that those tech bros are full of empty promises is worth those lousy $1,000 to me. Framed this way, I almost hope the mystery box is empty.
The computer isn’t just extremely good at predicting; it’s perfect. That’s why I always thought this “dilemma” was a cheat. “This entity has no supernatural powers, no precognition; it just always behaves exactly as if it does. Go ahead, choose!”
So, in this scenario where the computer effectively has supernatural powers, I choose one box. But, to be clear, such a scenario could not exist.
What I do not understand about the problem: So, the computer made an absolutely accurate prediction of my choice and acted on it before I entered the room. How can my choice, made after I enter the room, affect what is already in the second box at that point?
Who says it does? One point of this thought experiment is to highlight how predictable people really are. You cannot even beat a bot at Rock-Paper-Scissors; it will overwhelm you by instantly making a choice based on your move history, and even though you spend however long you like agonizing over your next move, it will still end up being a losing one.
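A history-based Rock-Paper-Scissors bot of the kind described above can be sketched in a few lines. This is a hypothetical minimal version (the class and method names are mine, and real bots model much longer patterns): it simply plays the counter to the opponent's most frequent past move.

```python
import random
from collections import Counter

# Each move is beaten by the move that maps to it here.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class FrequencyBot:
    """Toy history-based RPS bot: counters the opponent's
    most frequent move so far."""

    def __init__(self):
        self.history = Counter()

    def move(self) -> str:
        if not self.history:
            # No history yet: nothing to exploit, play randomly.
            return random.choice(list(BEATS))
        predicted = self.history.most_common(1)[0][0]
        return BEATS[predicted]

    def observe(self, opponent_move: str) -> None:
        # Record the opponent's actual move after each round.
        self.history[opponent_move] += 1
```

Against any player whose choices are even slightly biased, this bot wins more than a third of rounds in the long run; stronger versions condition on sequences of recent moves rather than raw frequencies.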
I would take only the opaque box. I don’t really care about $1,000, but a million moves the needle. Even if there’s a one-in-a-thousand chance it guessed I would take both boxes (leaving the opaque box empty), I’m up a million the other 999 times out of 1,000, which I think is my best bet.