But it is infallible. As soon as it’s not, I’m a two-boxer.
ETA: However good its predictive powers are, anything short of infallible precognition means nothing I can do will influence what’s in the boxes, so take ‘em both.
> But it is infallible. As soon as it’s not, I’m a two-boxer.
> ETA: However good its predictive powers are, anything short of infallible precognition means nothing I can do will influence what’s in the boxes, so take ‘em both.
I don’t follow the logic.
If it’s 99% accurate, what you do has a 99% chance to have been accurately predicted by the computer.
It doesn’t have to be infallible, just slightly better than random.
This should have said “two-boxers”…
Precognition, i.e. magically knowing the future, is not necessary for this thought experiment.
Would you agree that knowing someone and their thought process well might improve the odds of making a correct prediction at all, or do you not even grant that?
When you walk into the room there is a fixed amount of money divided amongst the 2 cases. Assuming we believe the hypothetical that fixed amount of money can’t change. Do you want all the money in the room or some of the money?
> When you walk into the room there is a fixed amount of money divided amongst the 2 cases.
And what that fixed amount of money is depends on the prediction of a machine that’s very good at predicting my behavior.
Since I want the amount of money to be high, and since I know that the computer is incredibly good at predicting my behavior, I should behave in such a way that the computer will have predicted that I took only one box, so that this box will be full of one million dollars.
Since this computer has a perfect (or near-perfect, for those who find full perfection impossible) record, the best way I can think of to make the computer think I’m a one-boxer is to be a one-boxer.
So I take the one box.
The only reason I would not take the one box is if I thought that I was some kind of “chosen one” who is magically capable of fooling the supercomputer.
> Assuming we believe the hypothetical that fixed amount of money can’t change. Do you want all the money in the room or some of the money?
I want more money in my hand when I leave, regardless of whether that represents all or most of the money in the room at any given time. The way to maximize the money in my hand when I leave the room is to take just one box, based on the experience of my predecessors. So that’s what I will do.
but you can’t increase the amount of money in the room, that was fixed before you walked in.
The computer cannot change what is in the closed box after I walk into the room; it has only predicted what I would do. So it is already decided and cannot be changed. There is really no decision to be made in the room with the boxes. The computer has made its choice too.
If I change my move, the computer cannot magically put a million dollars in the opaque box. It never could. In this scenario of the OP, there never was a choice that hadn’t already been made.
Me? I take both boxes AND the computer, I take the computer outside and do an Office Space copy machine job on it (movie reference).
> but you can’t increase the amount of money in the room, that was fixed before you walked in.
I’ll break it down line by line and you can tell me where we disagree.
It was fixed before I walk in by a computer that’s capable of analyzing the very same factors that actually determine what decision I make.
If a computer is capable of knowing some of those factors, it is capable of making a prediction about what decision I will make.
If the computer knows enough of these factors and has a good enough model of how humans make decisions, then it will be capable of making that prediction with a high degree of accuracy.
Since the computer correctly predicted 1,000 people before me, it is clearly capable of making this prediction with a high degree of accuracy.
I can thus conclude it very likely has access to knowledge about the relevant factors for people taking the trial as well as a functional model of the way humans make decisions.
Since that’s the case, and since it has already proven very capable of correctly making these predictions, I would be foolish to try and outsmart the machine.
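That track record can also be checked by brute force. Here’s a small simulation sketch, assuming a hypothetical 99%-accurate predictor and the standard payoffs, comparing a committed one-boxer against a committed two-boxer:

```python
import random

def simulate(strategy, accuracy, trials=100_000, seed=1):
    """Average payout for a player committed to `strategy` ('one' or 'two')
    against a predictor that guesses their choice with the given accuracy."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        correct = rng.random() < accuracy
        # If the predictor errs, it predicted the opposite strategy.
        predicted = strategy if correct else ('two' if strategy == 'one' else 'one')
        opaque = 1_000_000 if predicted == 'one' else 0
        # Two-boxers always collect the extra $1,000 from the transparent box.
        total += opaque if strategy == 'one' else opaque + 1_000
    return total / trials

print(simulate('one', 0.99))  # around 990,000 on average
print(simulate('two', 0.99))  # around 11,000 on average
```

Trying to outsmart the machine just moves you from the first line to the second.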
Assuming you can only take one box, I’m always going to take box 2.
An extra $1000 won’t make much difference in my life, but a million dollars would help me out quite a bit.
> Assuming you can only take one box, I’m always going to take box 2.
That’s a bad assumption; the whole point is to ask whether you take box 2 or both boxes.
Yeah that part confuses me.
You can either pick box 1 and get a guaranteed $1000
Or you can pick boxes 1 and 2, and get a guaranteed $1000, or a guaranteed $1,001,000.
I’m confused
> I’ll break it down line by line and you can tell me where we disagree.
> It was fixed before I walk in by a computer that’s capable of analyzing the very same factors that actually determine what decision I make.
> If a computer is capable of knowing some of those factors, it is capable of making a prediction about what decision I will make.
> If the computer knows enough of these factors and has a good enough model of how humans make decisions, then it will be capable of making that prediction with a high degree of accuracy.
> Since the computer correctly predicted 1,000 people before me, it is clearly capable of making this prediction with a high degree of accuracy.
> I can thus conclude it very likely has access to knowledge about the relevant factors for people taking the trial as well as a functional model of the way humans make decisions.
> Since that’s the case, and since it has already proven very capable of correctly making these predictions, I would be foolish to try and outsmart the machine.
I don’t disagree with any of that except for the implication that 2 boxers are trying to “outsmart” the machine.
It’s just a cold hard fact that once you’re in the room there is a fixed amount of money in front of you, and you can choose to take all of it or some/none of it. There is nothing you can do now to change the aggregate amount of money in the boxes.
Assume there was another step. Before you make your decision they bring out your best trusted friend and ask her to examine the boxes, then ask her what you should do. What would she say? Why don’t you trust her?
> Assume there was another step. Before you make your decision they bring out your best trusted friend and ask her to examine the boxes, then ask her what you should do. What would she say? Why don’t you trust her?
Depends on how smart she is, I guess.
The predictive machine is going to be able to predict her actions, so it will probably simulate both scenarios.
For example, if the AI predicts that if there’s money in both boxes my friend will tell me to take both and that I will listen to her, then the AI will not put $1,000,000 in the closed box.
On the other hand if the AI predicts that my friend will tell me to take the money from the opaque box because she understands the rules of the game and understands that she would not be allowed to see the money in the opaque box unless she tells me to take just that box, then that’s what my friend will tell me and what I will do.
Of course, the AI might realize that even if my friend is a two boxer I am not. And then the AI would put the million in the opaque box; my friend will tell me that there’s money in both boxes; and I will go on to take the one box with $1,000,000 in it because I know that if that’s not what I would do the money wouldn’t have been in there in the first place.
OK flip it around, you’re the friend. You can’t talk about how much money is in there, just what she should do.
You see £1,001,000, what do you advise?
You see £1000, what do you advise?
> OK flip it around, you’re the friend. You can’t talk about how much money is in there, just what she should do.
Do I know the rules the computer is operating under?
If so:
> You see £1,001,000, what do you advise?
“Take the one opaque box”
> You see £1000, what do you advise?
“You might as well take both boxes”
If not, then I would tell them to take both boxes in both cases, but that would mean that scenario 1 is far, far, far less likely to be the scenario I find myself in.
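A rough way to quantify “far, far, far less likely”, again assuming a hypothetical 99%-accurate predictor:

```python
# Chance of actually finding the full opaque box ("scenario 1"),
# conditional on what kind of player you are, at an assumed 99% accuracy.
p = 0.99
p_full_if_one_boxer = p        # the predictor got you right
p_full_if_two_boxer = 1 - p    # the predictor had to be wrong
print(p_full_if_one_boxer / p_full_if_two_boxer)  # roughly 99x likelier
```

So a committed two-boxer almost never gets to see scenario 1 at all; it’s the one-boxer’s disposition that makes the £1,001,000 case common.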
> And what that fixed amount of money is depends on the prediction of a machine that’s very good at predicting my behavior.
But it has already decided how much money is in the room. Your behavior cannot change that. You can only decide to leave some money in the room, or not.
Overall I vote this is a dumb problem / thought experiment, or I’m too dumb to understand its subtleties. Or something got garbled in the problem statement.
Agree. I don’t have time to overthink this. Grab both and take the guaranteed $1000. If the second box has money in it, yay.
If we’re to take the premise of the scenario at face value, you’re not really deciding which box to take at the time they’re presented to you; the “decision” took place earlier, in the form of the sum total of your personality and whatever invisible factors would lead you to end up making the box choice that you did. IOW, even presuming that you have “free” will, that will was unknowingly exercised far earlier than you’re aware of. Again, presuming that the predictor is as accurate as it claims to be.
> But it has already decided how much money is in the room. Your behavior cannot change that. You can only decide to leave some money in the room, or not.
I can’t choose to take both boxes unless causal factors that predate my entering the room line up that way, and the computer is clearly able to analyze those factors before it decides whether to put the money in the second box. The only way the second box has money in it is if I decide to take only the second box.