Do you take one box or two boxes?

You really can! There is a box right there with 1000 bucks in it. You can take it or leave it. There is no downside to taking it; the only thing that will result is that you have 1000 dollars you would not otherwise have.

The more I think about this, the more I think the computer will just predict “both boxes” for everyone and be right. No one is going to turn down an actual 1000 dollars of free money.

Oh boy. A Free Will Debate. Yay.

Yeah. My making a decision is an illusion. It is predetermined. There is no choice being made. The computer can analyze everything, down to the impact of a butterfly’s wing flap.

I grant that irrelevancy, yes. No matter how precise the prediction is, absent precognition, once I’m in the room nothing I do can influence what’s in the boxes. Might as well take both.

Nothing I do now can influence what the computer put in the boxes before I entered the room. Choosing one box does nothing to influence something that already occurred.

Do you grant that it would be possible, by knowing a bunch about a person, to predict whether they would fall in my camp or yours with some degree of accuracy better than chance?

Yes. Do you think that choosing one box will change the contents of the boxes? Or if you choose two boxes, same question? Or will the content of the boxes remain exactly what they were regardless of your choice?

Great! So if the computer is even slightly better than random at picking (the math works out to a threshold of about 50.05%), the expected value of being a one-box picker is higher than the expected value of being a two-box picker.

This particular predictor is much more accurate than that, but even one that’s, say, 70% accurate pays out better if you just take the mystery box.
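For anyone who wants to see the arithmetic, here’s a minimal sketch in Python. The one assumption (not spelled out in the thread) is that the predictor’s accuracy p is the same whether it’s judging a one-boxer or a two-boxer:

```python
# Expected value of each strategy, assuming (as a simplification)
# the predictor is right with the same probability p for both kinds
# of choosers. Box amounts are the $1,000,000 / $1,000 from the OP.

def one_box_ev(p):
    # One-boxer gets the $1,000,000 only when predicted correctly.
    return p * 1_000_000

def two_box_ev(p):
    # Two-boxer always keeps the visible $1,000, plus $1,000,000
    # when the predictor wrongly expected a one-boxer.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.50, 0.5005, 0.51, 0.70):
    print(f"p={p:.4f}  one box: ${one_box_ev(p):>12,.0f}"
          f"  two boxes: ${two_box_ev(p):>12,.0f}")

# Setting the two equal: p * 1_000_000 = 1_000 + (1 - p) * 1_000_000
# gives p = 1_001_000 / 2_000_000 = 0.5005, i.e. 50.05%.
```

At p = 0.70, one box pays $700,000 on average versus $301,000 for two — that’s the 70% case mentioned above.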

No, of course not. The thought experiment clearly explains that the contents of the box are set before you enter the room, when the predictor makes its prediction, and are not changed afterwards.

But I think it’s impossible for me to choose two boxes without there being a predictable causal chain leading me to do so, stretching back to before I entered the room, no matter what mental gymnastics I try to perform.

And since I know that the computer successfully and correctly predicted 1,000 people’s decisions before me, I think it’s safe to assume it can follow the causal chain for me, too.

The content of the box will vary based on the predictor’s prediction of my decision, and that prediction is based on the same causal factors that lead to the decision itself.

Not sure how else to explain it. Feel free to be wrong. :grinning_face:

I completely understand your argument - at the moment that you are standing in front of the boxes you could take both, and nothing could stop you.

The point is that if you would do that, then you could never be in that situation to begin with, unless the predictor was wrong. And since it was right 1,000 times in a row, there’s no reason to believe that it would be wrong for you in particular. So the only way for you to be in the situation where there is $1,000,000 in the opaque box is if you would choose to only take one box.

Every previous person who walked into the room and took one box got one million dollars.

Every previous person who walked into the room and took two boxes got one thousand dollars.

When you walk into the room what do you think is different from when those thousand other people walked into the room?

Those were all NPCs, I’m the protagonist!

But there is no risk in this scenario. There is predestination.

That is the hypothetical we are required to assume as the basis of the question: You believe you are making a choice, but that is illusory. You are instead following a deterministic sequence that the computer has been able to calculate, with a high degree of reliability, chaos theory be damned.

I think post #10 pretty much solved the OP. It becomes much more obvious that taking one box is the correct answer if you replace “supercomputer” with “magical precognitive mind-reading genie”, which is effectively what the OP seems to mean.

If you admit the possibility of error, however slight, then you should take both boxes.

But the real question is, do you actually have a choice? If your decisions can be predicted with that degree of accuracy, do you even have free will?

edit: ninjaed by DSeid, more or less!

And it could be 1000 two-boxers and 0 one-boxers. Or 999:1, or 998:2.

Every single person who owns a Bugatti Veyron has a net worth of over 10 million dollars. So if I manage to own a Bugatti Veyron, that will cause me to suddenly acquire tens of millions of dollars.

I don’t see how that’s the case. Again, the math works out such that you’re better off with the one box even if the computer is only 50.05% likely to predict correctly.

Let’s say that the computer is not infallible, just a pretty good judge of character (on this one very specific question). It can correctly predict how many boxes someone will take only 75% of the time.

There are four possible scenarios:

A - the computer guessed wrong; I take two boxes. I get $1,001,000.

B - the computer guessed right; I take two boxes. I get $1,000.

C - the computer guessed wrong; I take one box. I get $0.

D - the computer guessed right; I take one box. I get $1,000,000.

The only thing that I get to decide is how many boxes I take.

If I take two boxes, and the computer is right 75% of the time, then my expected value with two boxes is (.25)(1,001,000) + (.75)(1,000) or $251,000.

If I take one box, and the computer is right 75% of the time, then my expected value with one box is (.75)(1,000,000) + (.25)(0) or $750,000.

So even if the computer is not infallible, just decent at judging character, it’s better to go with one box.
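Here’s the same calculation as a quick sketch, enumerating the four scenarios above (scenario A’s payoff is $1,001,000: the $1,000,000 plus the visible $1,000):

```python
# Expected value at 75% predictor accuracy, one term per scenario.
scenarios = [
    # (probability, payoff, strategy)
    (0.25, 1_001_000, "two boxes"),  # A: predictor wrong, took two
    (0.75,     1_000, "two boxes"),  # B: predictor right, took two
    (0.25,         0, "one box"),    # C: predictor wrong, took one
    (0.75, 1_000_000, "one box"),    # D: predictor right, took one
]

for strategy in ("two boxes", "one box"):
    ev = sum(p * payoff for p, payoff, s in scenarios if s == strategy)
    print(f"{strategy}: ${ev:,.0f}")
# two boxes: $251,000
# one box: $750,000
```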

I kind of feel like trying to gainsay the supercomputer is the wrong approach. Who knows why it chose how it did?

Beyond that, this edges into questions of free will and predestination if we assume that it’s so good at predicting things that it gets 1,000 out of 1,000 people’s choices right. It’s a safe bet that whatever it chose, it will be right.

So from your perspective, you might as well have a 50/50 shot with the closed box- knowing that the computer correctly predicted what you’ll do doesn’t give YOU any more information about what it chose, and by extension what’s in the closed box.

So take the $1000 and flip the coin on the million bucks. It’s essentially the game show dilemma of “Do you double down, or go home with your $1,000?” except the element of chance is not a ball or wheel or whatever, it’s whatever inscrutable choice the computer made.

And I’d make the same choice in either case- double down, as you don’t really lose anything by doubling down; either way, you leave with more money than you came in with.

You quoted my question and then you didn’t answer it.

Again, what makes it different when you take two boxes from all of the other people who took two boxes?

To address your comment, I’ll point out that you’re reversing cause and effect. Previous people own an expensive car as a result of having a lot of money. If you were to somehow acquire ownership of an expensive car without having a lot of money (and again, you’re hand-waving away how you were able to do something nobody else was able to do), that would not retroactively cause you to have a lot of money.

Almost no one bets on pure EV.

I’ve seen this argument on free will vs determinism before (usually invoked in discussions of an omniscient God) and I tend to disagree with it.

I feel that you can still have the free will to make choices even in a situation where somebody else (God or a supercomputer) is aware of what choices you will make before you make them. To give a trivial example, let’s say I’m putting on my shoes. I can choose to put my left shoe or my right shoe on first. If you somehow know which shoe I will choose, I’m still the one making the choice and I still have the ability to make a choice. Your foreknowledge of what choice I will make did not cause me to make the choice.

But what if, as I suggested above, I have never encountered this sort of logic problem before, or at least never thought about it if I had? What if my every instinct, based on my entire life experience up until now, would make me agree with the two boxers? That’s what the computer would base its prediction on.

But then, the instant before I step into the room, you stop me and present me with this analysis. “You’re right,” I exclaim. “I’d be better off taking only the opaque box.” And so I do. But the computer, already having judged me to be a two-boxer, has left the opaque box empty. I’m out of luck.

What I’m getting at is, doing this kind of analysis is itself an attempt to “outsmart” the computer. Saying that one should always take the opaque box is not acting according to your true instincts and character; it’s acting according to a probability analysis that may well go against what you have typically done in everyday life, when you were just acting on your first impulse rather than subjecting every tiny decision to rigorous mathematical scrutiny.

It’s those mundane, everyday, typical actions and decisions that will provide the computer with the data it’s using to make its prediction, not this one special case where you’re faced with a single, quite atypical (indeed, unprecedented) choice.