Do you take one box or two boxes?

I now think that the answer to this depends on what the computer is actually doing.

If the computer’s making its decision based on your previous history, then what’s in the boxes is determined entirely by what’s happened in the past. There is nothing whatsoever that either you or the computer can do now to change what’s in the boxes. The computer isn’t going to re-open the boxes or sneak into them from underneath and take money out or put money in. And your behavior from this point on is entirely irrelevant, because nothing you do after the computer filled the boxes can change what’s in them. So you may as well take both boxes; and, if $1000 is an amount that matters to you, you’d better take both boxes. The entire idea that what you do will affect what’s in the boxes is, in this scenario, nonsense.

But if what’s happened is that the computer has somehow developed precognition, then its decision as to what to put in the boxes isn’t based on what’s already happened. It’s based on what’s going to happen. And in that case, what you’re about to do really did affect the computer’s decision, so you’d better take only the opaque box.

Well, the computer explains the problem to you when you walk in the room, so you don’t need to have heard about it previously for the hypothetical to work.

Presumably the computer is good enough at predicting to anticipate your response to being told about the problem for the first time, and it loads the boxes up based on that prediction.

Before I showed my wife the Veritasium video, I thought about the arguments the one-boxers and two-boxers made in that video, and thought she’d probably be a two-boxer; I was right. That might be luck, or it might be because I know my wife well. I don’t have dozens of people I know as well as I know my wife to run independent tests on, but I don’t find the idea that it’s possible to predict how someone will respond outlandish.

When I first watched the video, my first response was that I should obviously take just the one box because that will ensure it has a million dollars in it (and I paused it when they presented the scenario, before I heard any arguments). The statistical analysis came afterwards, when I tried to justify why I felt that way even if the computer is not a perfect predictor but just pretty smart.

From what they said in the video, people’s first responses generally split about 50:50.

It’s still going to make a difference whether you’re used to thinking in terms of logic problems. Quite a lot of people aren’t.

When it comes to a decision like this? If you’re familiar with expected value calculations, I’m not sure why you’d perform any other type of analysis.
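For what it’s worth, here’s a minimal sketch of that analysis (my own numbers: the usual $1,000,000/$1,000 payoffs, and the simplifying assumption that the predictor is right with probability p whichever way you choose):

```python
# A minimal sketch of the expected-value comparison, assuming the usual
# Newcomb payoffs and a predictor that is right with probability p
# regardless of which choice you make.

MILLION = 1_000_000
THOUSAND = 1_000

def ev_one_box(p):
    # The opaque box holds the million iff the predictor correctly
    # foresaw one-boxing, which happens with probability p.
    return p * MILLION

def ev_two_box(p):
    # You always keep the clear box's thousand; the opaque box is full
    # only when the predictor was wrong (probability 1 - p).
    return (1 - p) * MILLION + THOUSAND

for p in (0.5, 0.5005, 0.75, 0.99):
    print(f"p = {p}: one box ${ev_one_box(p):,.0f}, two boxes ${ev_two_box(p):,.0f}")
```

By that arithmetic the two choices tie at p = 0.5005, and one-boxing pulls ahead for any predictor meaningfully better than a coin flip.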

Sure, that’s part of why I predicted my wife would not pick two boxes. Logic puzzles piss her off :rofl:

That’s why I don’t find it outlandish that a computer with knowledge of your experience and personality and a sophisticated enough model could make a prediction that would be accurate.

Ya think?

Yes, we have seen them before. Since high school, for many of us.

The premise is that the future is theoretically knowable and unalterable. Whether we will choose (or “choose”) the right or left shoe first today, and the exact amount of toothpaste we will choose to put on our toothbrushes 279 days from now, are theoretically knowable now. You could not choose to do otherwise.

Placed in that universe, in this box situation, you will make the choice you were always going to make. Whatever it was. Whatever you conclude “logically” is what you were always going to conclude, and you can’t change that future. It is as fixed as the past.

Except that, as stated multiple times now, it’s not a free will argument. You should still take one box if the computer is only 75% accurate, for example: one-boxing then has an expected value of 0.75 × $1,000,000 = $750,000, versus 0.25 × $1,000,000 + $1,000 = $251,000 for two-boxing.

OTOH, if the computer is only 75% accurate, it’s at least theoretically reasonable to try to figure out how the “accurate” group differs from the “inaccurate” group, and to see if we can find a strategy that increases our personal chances of being in the “inaccurate” group. At that point, it becomes a scientific problem, not a logic puzzle. Our answer will also have a lot to do with how inclined we are to see ourselves as special people who are much more likely than average to find the correct solution to this sort of problem.

Taking the “strong” version of the puzzle, where the computer is, for practical purposes, infallible, I think it’s like one of those “is it a duck or a rabbit” drawings. Two contradictory propositions both seem to be logically true. If everyone who takes only one box gets a million dollars, and everyone who takes two boxes gets only a thousand, then obviously you should take only one. But it’s equally obvious that the amount of money in the boxes is already fixed when you make your decision, so you should take all the money on the table rather than risk leaving some behind.

I think the only conclusion I can draw is that the universe simply doesn’t work in such a way that this hypothetical could ever occur, and people have different and equally valid ways of thinking about exactly where the logic breaks down.

Because people’s utility curve is not linear with dollars. Because people, for whatever reason, are less likely to want to risk giving something up than to risk something they never had. So I think a significant fraction of the question boils down to the extent to which a person feels they already “have” the thousand dollars, and the utility of that $1000 to them. If a thousand is trivial and a million is life changing, you are likely a one-boxer. If the thousand is life changing, a person is much less likely to risk it no matter how persuasive the computer is. (I’ll put some toy numbers on this below.)

And because all my training and experience tells me to be distrustful of novel situations that look too good to be true. I’m still me in this hypothetical. On a fundamental level, I don’t trust this computer, and I wonder if the thousand people who went before me are made up, or confederates in some complicated scam.
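The toy numbers promised above (my own assumptions, not part of the puzzle: a strongly risk-averse utility u(w) = -1/w and a 90%-accurate predictor):

```python
# A toy sketch of how baseline wealth can flip the decision, assuming a
# strongly risk-averse utility u(w) = -1/w and a 90%-accurate predictor.
# Both assumptions are illustrative, not part of the puzzle as stated.

def u(wealth):
    # Strongly concave: small sums matter a great deal when you're broke.
    return -1.0 / wealth

def expected_utility(choice, wealth, p=0.9):
    if choice == "one":
        # $1M in the opaque box iff the predictor foresaw one-boxing (prob p).
        return p * u(wealth + 1_000_000) + (1 - p) * u(wealth)
    # Two boxes: the guaranteed $1,000, plus the million only when the
    # predictor was wrong (prob 1 - p).
    return p * u(wealth + 1_000) + (1 - p) * u(wealth + 1_001_000)

for wealth in (100, 1_000_000):  # nearly broke vs. already comfortable
    one = expected_utility("one", wealth)
    two = expected_utility("two", wealth)
    print(f"wealth ${wealth:,}: take", "one box" if one > two else "two boxes")
```

With those made-up numbers the nearly broke person should take both boxes and the comfortable person should take one, which lines up with the intuition about the thousand being life changing or trivial.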

Good answer. I think that if I were actually confronted with this situation in real life, knowing that the computer is highly accurate (even if not perfect), I’d be a one-boxer. But if the opaque box potentially contained only $2000, I’d be inclined to be a two-boxer. It doesn’t make sense mathematically, but there it is.

Clearly puzzlegal needs to write some grants, so we can advance science by actually performing this experiment on each other :wink:

I’m pretty sure it’s @Spice_Weasel who writes grants. I do math for a living, not writing.

Oh, my apologies! Hopefully you consider it a compliment to be confused with her :slight_smile:

Yes, thanks. I think she’s a much better human being than I. :grinning_face:

So as an example, one 75% correct scenario:

75% of the people chose two boxes, and in each of those cases the computer was right! They each got a thousand dollars.

25% of the people chose one box. The computer was wrong each time. They each got nothing.

No million-dollar winners yet, and as the experiment goes on, more and more people choose two boxes and the computer’s predictions are correct with higher and higher accuracy.

What should I choose to do?
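To spell out why the headline 75% figure is misleading in that history (a quick sketch, assuming exactly the record described above: right for every two-boxer, wrong for every one-boxer):

```python
# Expected value conditional on *your* choice, not on headline accuracy.
# In the history above the opaque box was apparently always empty: the
# prediction was right for every two-boxer (probability 1.0) and wrong
# for every one-boxer (probability 0.0), yet overall accuracy is 75%.

def ev(choice, p_right_given_choice):
    if choice == "one":
        return p_right_given_choice * 1_000_000
    return (1 - p_right_given_choice) * 1_000_000 + 1_000

print("naive 75% both ways:   one box", ev("one", 0.75), "| two boxes", ev("two", 0.75))
print("conditional on choice: one box", ev("one", 0.0), "| two boxes", ev("two", 1.0))
```

The naive reading says take one box; conditioning on what the computer actually does for each kind of chooser says the opaque box is always empty, so take both.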

There’s some profound misunderstanding in this thread of how causality works in our universe. The decision about what to put in the boxes is independent of what the subject later chooses—unless, of course, the computer has precognition or it is somehow cheating.

Sure, and if you don’t believe in the possibility of precognition, you have to believe it’s cheating if it seems to be consistently guessing perfectly. But at a certain point, if everyone who takes two boxes gets only $1000, and everyone who takes one box gets a million, you should assume it’s cheating in a highly predictable way, so you take only one box.

Yes, but that obviously isn’t what the hypothetical is trying to get at. It’s not meant to be a trick question.

The decision of what to put in the box is independent of the later decision of how many boxes to take.

It is not independent from the prior state of the universe.

It is in fact dependent on the person making the decision, their personality, their life experience, etc.

With a knowledgeable enough AI, the decision of what to put in the box could be dependent on the same factors that determined what decision you make.

Why do you have to believe in precognition? That’s the part I don’t understand here.

Here are the assertions I’m making; where is precognition required?

A: What you do is dependent on various factors.

B: It is possible to know those factors and based on that make a prediction. Even if quantum randomness or libertarian free will come into play at some level, it is still possible to make a prediction that is significantly better than random.

C: It is possible to imagine an AI that’s good enough at analyzing these factors to make a prediction that’s accurate much more often than not, resulting in a long chain of correct predictions.

What part of that requires belief in precognition or disallows free will?
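As a rough sanity check on C (my own illustrative numbers): if the predictor is right about each subject independently with probability p, the chance of an unbroken streak of N correct predictions is p^N.

```python
# Probability of an unbroken streak of N correct predictions, assuming
# each subject is predicted independently with per-subject accuracy p.
# Purely illustrative numbers.
for p in (0.75, 0.99, 0.999, 0.9999):
    for n in (10, 100, 1000):
        print(f"p = {p}, N = {n}: P(perfect streak) = {p**n:.3g}")
```

So “much more often than not” is enough to shift the expected values, while a long perfect run additionally needs per-subject accuracy very close to 1.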

Again, unless the computer has precognition, however accurately it predicts, once the boxes are filled, taking two has zero downside. None.

A savvy enough computer might predict with reasonable accuracy that a two-boxer will be a two-boxer or a one-boxer will be a one-boxer. But once the prediction is made, you can’t retro-force it by choosing one box. The computer either got it right or it didn’t, but once the boxes are filled there’s no downside to taking both.

Yes, but the existence of a “knowledgeable enough AI” is arguably no less preposterous than the existence of a precognitive genie.

And then we get into quantum physics. The computer can only be perfectly accurate if the universe works in such a way that it is indeed possible to predict all future states given sufficient information about its present state. But we don’t know if that’s true; what if there is randomness involved? Then the computer can only make probabilistic predictions, but some (perhaps infinitesimal) percentage of the time, the improbable thing will happen and it will be wrong.