Argh!! I’m about to put a hammer to your billion-dollar genius computer. I just made up that third choice.
I flipped a coin and was right that it would be heads. What are the odds?!
My wife and I are doing a kitchen redo, so we’re ordering in and eating out a lot. Tonight we will go out. I have zero confidence in my prediction of where she wants to go, and I know her better than anyone else in the universe.
On the Wikipedia page there is an addendum to some variants in which the computer also knows whether you will be flipping a coin, and in that case places only the thousand dollars, not the million.
Compared to predicting the behavior of an individual human, though, predicting the outcome of a coin flip is trivial, because it just involves basic physics rather than neuropsychology. So using a random strategy doesn’t really seem feasible.
Here’s a question for the two boxers.
What if I told you that the computer is only wrong one time in a thousand? So if you choose the second box you have a 99.9% chance of it containing $1m. Then would you take only the mystery box?
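To put this commenter’s numbers in one place, here’s a minimal expected-value sketch. The 99.9% accuracy figure comes from the comment above; the $1,000 / $1,000,000 payoffs are the standard ones from the puzzle.

```python
# Minimal expected-value sketch, assuming the predictor is wrong
# only 1 time in 1,000 (the figure from the comment above).
p = 0.999  # assumed predictor accuracy

# One-boxing: with probability p the prediction was "one box",
# so the mystery box holds $1,000,000; otherwise it's empty.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxing: with probability p the prediction was "two boxes",
# so you get only the visible $1,000; otherwise $1,001,000.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(f"one box: ${ev_one_box:,.0f}")    # roughly $999,000
print(f"two boxes: ${ev_two_box:,.0f}")  # roughly $2,000
assert ev_one_box > ev_two_box
```

Under these assumed numbers the predictor only needs to be a little better than chance for one-boxing to come out ahead in expectation.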
No. I get that some people disagree, but please try to understand the argument. The two-boxer view is that what has happened has happened. It is axiomatically true that the state of the box when opened is the same as the state of the box pre-choice. You cannot now change it. Two-boxers are not trying to fool the machine or get a crafty $1,001,000. They know they’ll only get $1,000, but they also know that they can’t change it, because it was already fixed.
The machine is preemptively punishing them for being able to reason.
So what? In what way does this suggest what choice I should make?
Yes.
There is a broken understanding of cause and effect that seems to permeate this hypothetical.
What trick? If the money is already placed, then no decision can be a “trick”, as it will have no impact on the results.
But you cannot be a “one boxer” until the moment of decision is upon you, at which point it doesn’t matter.
You are assuming that the computer’s prediction is directly based on your stated reasoning, in advance, about this specific problem, and on your somehow-known follow-through after the fact.
What if half of those “one boxers” are actually “two boxers” who changed their minds at the last moment? And vice versa?
We have no information about how the computer makes its predictions, or about the mindsets of those who ultimately make one decision or another.
What if you die in the room before you can state your choice?
So why not just take the Mystery box? Then the predetermined outcome will be the one you want.
You are denying your agency in the matter. You have purposefully refused to make the choice that would net you the biggest reward.
Huh? This makes no sense to me. What do you mean by ‘they know they’ll only get $1000 but they also know that they can’t change it because it was already fixed’? If they take the closed box, they’ll get a million. They can choose what box to take.
What’s the purpose of taking two boxes if you aren’t trying to get $1,001,000?
Because the boxes were fixed before I entered. The logic is clear: x + 1000 is a bigger number than x, and x is now an unchangeable variable.
Because I understand this logic, I fully expect the computer to leave the box empty, but I can’t change that now.
It’s clearer to me that $1m is greater than $1k. All you have to do to get the million is take the mystery box by itself. There is no logic in taking the second box. Your approach is self-defeating.
It seems that two-boxers are trying to fight the hypothetical: the hundreds of previous boxes that were successfully guessed by the algorithm. While that does seem weird, so does the stipulation that there are definitely no tricks going on short of causality being broken.
If you saw the machine correctly guess 1,000 people previously, there is an astronomically tiny chance of that happening randomly. Probably a higher chance that the computer is just really good at psychology, but an even greater chance that there is some trick going on.
If you insist that the computer must have rolled a 1 instead of a 2 a thousand times in a row, an insanely small probability, then it isn’t much of an ask to insist that there could be other tricks involved as well.
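For a sense of scale on that “insanely small probability”: the chance that a blind 50/50 guesser matches 1,000 choices in a row is (1/2)^1000, which a one-liner makes vivid.

```python
# Sketch: chance that a blind 50/50 guesser matches 1,000
# binary choices in a row, i.e. (1/2) ** 1000.
p_random = 0.5 ** 1000
print(p_random)
# The result is on the order of 1e-301 -- vastly smaller than any
# reasonable prior on "the machine is a good psychologist" or
# "there's a trick", which is exactly the commenter's point.
```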
If someone can predict your decision at the moment of decision, why does your decision not matter?
I don’t care how the computer makes a prediction, I just care about the apparent fact that it can reliably predict my decision and then act accordingly.
If the computer predicted their final decision ahead of time, it clearly knew that they were going to change their minds at the last second, otherwise it would have gotten its prediction wrong.
We have the only information we need: it reliably knows what people will pick and loads the boxes up such that the most beneficial choice is taking the single closed box.
Why do we need to know anything else? Maybe if we knew the mechanics by which the computer works we can try to fool it to get $1,001,000, but if we are good with a round million, we know everything we need to to secure that reliably. Just take one box.
The only thing the predictor is known to make predictions about is our decision on one vs. two boxes. Maybe it can predict death; maybe it would just predict what you would have done had you not been killed.
What is the point of that question?
Of course outside of a hypothetical where one of the axioms is that there is no cheating going on, 2 boxing would be extremely stupid.
It’s true that x + 1000 is bigger than x, but the scenario in which you take both boxes and the scenario in which you take one box are not equivalent. It’s x + 1000 vs. y, where x = 0 and y = 1,000,000, and 0 + 1000 is less than 1,000,000.
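This commenter’s point can be written out directly: the mystery box’s value x is not held fixed across the two scenarios, so the “x + 1000 > x” argument compares the wrong pair of numbers.

```python
# The commenter's point in code: "take both" and "take one" are
# different scenarios, so the mystery box takes different values.
x = 0            # mystery box if the predictor foresaw two-boxing
y = 1_000_000    # mystery box if the predictor foresaw one-boxing

two_box_payout = x + 1_000   # x + 1000 is indeed bigger than x...
one_box_payout = y           # ...but it is far smaller than y.
assert one_box_payout > two_box_payout
```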
You could easily change that by internalizing the logic I presented and just taking one box.
That only works if the computer makes its prediction after you know about the game. If the decision to put a million in the closed box is made before you hear about the rules of the game, then there isn’t a damn thing you could do to change your odds.
Another idea, which makes sense or not depending on your feelings about the nature of consciousness:
Assume the computer operates by creating a highly detailed and accurate simulation of your brain, and then confronts this simulation with the box choice in order to determine how it should proceed. But since your brain has the property of consciousness, then the simulated you would also have to have the property of consciousness in order to accurately represent your thought processes. Therefore, you have to consider the possibility that YOU are actually the simulation, and your actions will determine whether the real you gets the million or not. In that case, one-boxing is obviously the way to go.
Not really. If I’m the real me, 2-boxing is the way to go, for reasons already explained. If I’m a simulation, then the guy who gets the advantage of my one-boxing is not “me”, and fuck that guy. Either way, if I don’t know whether I’m “me” or not, it’s still better to 2-box.
But if you’re the simulation, you’re screwed anyway, since once the computer has its answer it won’t need to keep running you and you will “die”. So you might as well make the choice that will benefit someone who is very much like you.