Do you take one box or two boxes?

Do we really need to specify that making that choice is precisely the attribute?

You can use me vs. you as an example, assuming there is some marker to discriminate us. I am now solidly in the one-box camp. There is no way that I’d not choose one. There is something about you that would have you choose two, no matter how many times you’ve seen the computer accurately predict people choosing two and missing out on the million dollars. That reflects something very fundamentally different in our makeups and our thought processes. Something that may correlate extremely strongly with some profile of other behaviors that are accessible to the supercomputer’s analysis, maybe even some specific way our eyes move or we react to noises … who knows. It actually does seem like an attribute of difference between us.

But if so, and if that’s what the computer’s going by instead of precognition, then when I go into that room there’s nothing in the opaque box. And there will still be nothing in the opaque box, even if that’s the only one I take.

Some in this thread seem to think that if I take both it would be because I incorrectly think I can outwit a computer which actually has that sort of ability (which isn’t why; though I would probably think the whole thing was some sort of scam). But they also seem to think I’d be able to outwit it by taking only the opaque box. If I can’t outwit it in one direction, then I can’t in the other.

And not only that, but I’d have to somehow retroactively fool it, since it’s already filled, or not filled, the boxes.

Well said. If the computer is even 99% accurate, that means no precognition and a helluva predictive algorithm. And in that case, on average, out of a hundred two-boxers, 99 will get only a grand and 1 will get an extra million. The only way a “natural two-boxer” reduces their take is if one of the 99 decides to take only the opaque box.

OTOH, 99 out of 100 one-boxers get their million, and one poor bastard gets nada. And every single one would have increased their take by holding their noses and taking two boxes.
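For anyone who wants to check that arithmetic, here’s a minimal sketch in Python, assuming the standard payouts of $1,000 in the clear box and $1,000,000 in the opaque box:

```python
# Standard Newcomb payouts (assumed): the clear box always holds $1,000;
# the opaque box holds $1,000,000 only if the computer predicted one-boxing.
CLEAR, OPAQUE = 1_000, 1_000_000

# 100 natural two-boxers at 99% accuracy:
# 99 are predicted correctly (empty opaque box), 1 is mispredicted (box filled anyway).
two_box_total = 99 * CLEAR + 1 * (CLEAR + OPAQUE)

# 100 natural one-boxers at 99% accuracy:
# 99 are predicted correctly (box filled), 1 is mispredicted (empty box, gets nothing).
one_box_total = 99 * OPAQUE + 1 * 0

print(two_box_total / 100)  # $11,000 average take for two-boxers
print(one_box_total / 100)  # $990,000 average take for one-boxers
```

Same numbers as above: 99 of the two-boxers get a grand, one gets the extra million, and the cohort still averages far less than the one-boxers.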

You wouldn’t be outwitting it if you took one box just as it predicted. I’m confused by what you mean by this.

You are acting as if there are two separate choices here - that first you decide if you’re a one-boxer or a two-boxer, and then you decide how many boxes to actually take.

That’s not quite right. If you walk up to the room thinking that you’re going to pick one box, but then you get into the room and upon seeing the boxes you think “well, now that I’m here, I may as well take both” then you aren’t a one-boxer and never were, because the prediction is based on how you will react when you actually reach the room.

Yep, and I might do exactly that just to prove the computer isn’t infallible. I don’t need the grand, anyway.

But I might grab them both. A grand is nice to have, and it is the correct move.

And if it somehow scored me a false positive for a one-boxer? Hey, free million and maybe some change.

No, I’m not. I’m explaining the inherent math with a computer that predicts with 99% accuracy. In every instance the subject either increases his take or at worst doesn’t impact it by taking both boxes, however likely that is.

And the point of this debate is “how many is the best choice,” is it not? Well, assuming the computer has not conquered the direction of causality, the best choice is both boxes. The debate is not, ISTM, in any way about predicting how good a computer’s predictive powers could be, or how people might actually choose, however wise or foolish a given choice might be.

Except that it is nearly impossible to pick two boxes without the computer predicting you will do so.

Imagine that it is made explicit that the computer works by reading your mind just before you enter the room, and that it is infallible. In order to get the million, you must sincerely intend to take only the opaque box at the outset of the experiment.

CAN you do that, though, knowing that once you are in the room, taking two boxes is always the optimal strategy because causality, and knowing that the computer knows you know that…

On one level, it’s a question about whether it’s possible, based on logic, to form a sincere intention to do something in the future, knowing that once you are actually in the future, logic will dictate that you should NOT do that thing.

Are you saying that you feel getting a thousand dollars is a better outcome than getting a million dollars? Because otherwise, I don’t follow your argument.

A thousand people made the choice before you. All of the people that chose two boxes got a thousand dollars. All of the people that chose one box got a million dollars.

I don’t care how the computer is deciding. I don’t care if it’s really good at reading people. Or if it has a means of seeing the future. Or if it’s flipping tarot cards. Whatever method it’s using, it’s producing really solid results.

Except if, when push came to shove, you picked one box, then you have the attribute of being someone who at that final decision moment chooses to take one box, and you always were someone who would end up making that choice, even if you had initially assessed yourself as a two-boxer. The computer may be better at predicting your future choice than current you is!

And I can actually believe that as not impossible. So much of our explicit decision making is just-so stories our brains create while the real decision making happens at levels below conscious awareness. This is a very particular, odd circumstance, and I can imagine various tests having been done on people ahead of time (buried in other items and tasks) that end up correlating well with what they choose to do at the final moment of choice on this idiosyncratic task.

And this is why it didn’t put any money in the sealed box. My taking only that box because y’all persuaded me that doing so will magically make the money appear there is aspirational, but the computer has already pegged me, and didn’t put the money there.

:wink:

In a hypothetical where the computer has precognition, it’s not nearly impossible, it’s flat out impossible. In a hypo without precognition, the best choice (which is what the debate is) is two boxes. Period.

I provided the math for a 99% accurate computer just for giggles. The hypothetical computer is actually infallible after 1000 trials, i.e., precognitive.

I think you and I are in sync, actually. I would choose one box based on the perfect record of predictions. That record isn’t possible unless the computer can foresee what the subject will do with certainty—i.e., precognition. I also believe such a scenario is impossible, but such is the nature of certain thought experiments.

@Babale seems hyper-focused on how accurate the computer is, though I don’t see the relevance as it relates to the hypothetical. If the computer has precognition (I believe the hypothetical makes that clear) the best choice is one box. If not, it’s two boxes. Asserting that most people would pick as the computer predicts, even if true, doesn’t change what the best choice is. And that’s the question the hypothetical poses, ISTM.

Only if the computer’s using precognition.

This.

Because, for the I-don’t-know-how-manyth-time, if there’s no precognition, what you do can’t change what’s in the boxes. But if there is precognition, then the computer’s decision as to what to put in the boxes is independent of time, and it can decide what to put in them after (from your point of view) you take or don’t take boxes.

And no, no amount of predictive skill based on previous behavior becomes the same thing as precognition. It may look that way, but it’s not the same thing, because no matter how good it is, it can’t violate the direction of time to change what’s in the boxes.

This clearly cannot be true, because even in a hypothetical with a 75% accurate AI, the expected payoff of one box is much greater.
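To put numbers on that (a quick sketch, with the same standard payouts assumed as in the cohort example above):

```python
p = 0.75  # assumed predictor accuracy

# One-boxer: wins the $1,000,000 only when the prediction is correct.
ev_one_box = p * 1_000_000                    # $750,000

# Two-boxer: $1,000 when predicted correctly,
# $1,001,000 when the computer wrongly filled the opaque box anyway.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000  # $251,000

print(ev_one_box, ev_two_box)
```

Even at 75% accuracy, one-boxing nets roughly three times as much on average.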

No one is saying it needs to change what is in the boxes.

And I am only claiming that you are who you are, and that for some actions you will, no matter what, do what you are going to do. As predictably as if it were a reflex.

Whatever that recognized pattern is, of past behaviors, experiences, observable physiological responses, genetic markers, whatever, each tested person clearly sorts into one of the two buckets: those who at that final decision moment will grab both boxes, and those who will take only the opaque box.

Functionally that’s as good as precognition, but it is no more precognition than hypothetically knowing that those with a particular biomarker will respond to a particular medical intervention and those without will not. (If only we had such reliable biomarkers.)

Which returns to your initial analysis.

I know that the computer has sorted us all into one of those two buckets.

Outcomes:

I choose one box. It had incorrectly pegged me as a two-boxer, and I get nothing.

I choose one box. It had predicted correctly. I get a million.

I choose two boxes. It had predicted incorrectly. I get a million one thousand dollars.

I choose two boxes. It had predicted correctly. I get a thousand.

So yes, the math is clear. Even at 60% accuracy: an EV of $600K by being a one-boxer and a hair over $400K by being a two-boxer. Even at 51%.
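Generalizing that (a quick sketch, again assuming the standard $1,000/$1,000,000 payouts):

```python
def expected_values(p):
    """Expected take for each strategy at predictor accuracy p (payouts assumed)."""
    ev_one_box = p * 1_000_000                    # paid only on a correct prediction
    ev_two_box = p * 1_000 + (1 - p) * 1_001_000  # the grand, plus the million on a miss
    return ev_one_box, ev_two_box

for p in (0.60, 0.51, 0.5005):
    one, two = expected_values(p)
    print(p, round(one), round(two))
# 0.6    600000 401000   <- the "hair over 400K" above
# 0.51   510000 491000
# 0.5005 500500 500500   <- break-even: p = 1,001,000 / 2,000,000
```

So one-boxing has the higher expected value at any accuracy above 50.05%.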

What if, of 1000 previous participants, you saw the computer get things wrong… twice? Would that in any way change your decision or reasoning?

If it helps, keep in mind that we don’t actually know the underlying mechanics. All we know is the actual outcome. Everyone who picks one box walks away with a million dollars. Everyone who picks two boxes walks away with a thousand. If you don’t believe that the computer has psychic powers, or a mastery of human psychological modeling so advanced as to be indistinguishable from the same, then you still have to somehow explain the observed facts. The observed facts are also, for instance, consistent with the possibility that there’s a magician’s false bottom in the opaque box, and it only puts the money in if you decide to take only one box, after your decision. And if that’s the actual explanation, then of course it’s better to take the single box.