Do you take one box or two boxes?

And if the computer is capable of predicting whether you’re someone who feels that way with any degree of accuracy, the expected payout for someone in your camp is lower than for someone in mine. You agree with that, right?

Sure, but if the computer is accurate enough, then only people who took one box will find the money inside.

Sure, they’ll know that they could have taken both boxes, but then they’d be far less likely to be in the one box situation.

Yup. But, barring precognition, it still can’t possibly make any difference what you actually do.

This thread is largely about trying to convince an imaginary computer that you would only take one box in this imaginary situation. :laughing:

I don’t think that is true at all. It seems to me that people snap to either the one-box or two-box camp pretty quickly. It doesn’t seem preposterous to think that if you knew a couple dozen factors about people you could build a model that would predict one- vs. two-boxedness with at least the 75% accuracy we were discussing, for example. And with enough data I’m sure you could go even higher.

You’re assuming that the only way for the computer to work is to build a perfect model of the entire universe. I don’t think that’s true. I predicted how my wife would answer the question and I don’t have an artificial copy of her running on my gray matter. I’m sure that you could predict how people close to you would answer. Because you can make predictions, even accurate ones, with models that are much simpler than the real thing.

Sure, causality and free will don’t enter into it if you grant that the computer is only accurate “much more often than not”, rather than literally perfect. But in that case you run into the problem DSeid illustrated; presumably there are reasons why the computer is better at predicting some people’s behavior than others, and those reasons could theoretically be discovered and exploited. If not – if it’s completely random – then it’s just the same as the case where the computer is perfect, except it randomly lies some fixed percentage of the time, and it hasn’t been established that the universe works in such a way that such perfection is possible.

And on top of that you’re supposed to imagine that you managed to convince the imaginary computer before you ever heard of it, by doing something you can’t do until after you’ve heard of it.

Well, I’ve been thinking about it a couple days, and I’m still stumped. So clearly I am an unusual person, and not necessarily subject to the same probability of being successfully read as your average member of the teeming masses!

I agree with you that at some point, the empirical evidence could point to the computer being so accurate that taking one box is clearly the correct choice, and it wouldn’t even have to be super accurate for that to happen. But at that point we’re just arguing about how much evidence we think we need in order to reject what intuitively seems like a highly improbable null hypothesis, not about the fundamental nature of the universe.

This is trivially true in the sense that causality only flows in one direction. But it’s also irrelevant. If making a choice does not impact the result, but the only way to make the choice is to do things that impact your chances of success, then while it’s true that the choice doesn’t cause the result, it is correlated with the result because they share causal factors.

If the computer is 75% accurate regardless of what option was picked, which I’m sure you can agree is not outlandish to imagine of a computer or even a particularly good judge of character, then most people who take the challenge and take one box will get a million dollars and most people who take the challenge and take two boxes will only get a thousand.
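As a sanity check on that 75% figure, the expected values can be worked out directly. This is a minimal sketch, assuming the thread's standard amounts ($1,000 in the open box, $1,000,000 in the closed box) and the same 75% accuracy against both kinds of chooser:

```python
p = 0.75  # assumed predictor accuracy, same against one-boxers and two-boxers

# One-boxers: 75% of the time the computer predicted them correctly,
# so the closed box holds $1,000,000; otherwise it is empty.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxers: 75% of the time the computer saw them coming and the
# closed box is empty ($1,000 total); 25% of the time it guessed wrong
# and they walk away with both prizes.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(ev_one_box)  # 750000.0
print(ev_two_box)  # 251000.0
```

So even at a modest 75%, one-boxing comes out roughly three times better in expectation, which is why the correlation argument carries so much weight here.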

When your choice is so strongly correlated with the result, you can say it isn’t causally connected, and you’re technically right, but it might as well be.

I guess my confusion stems from not understanding why a predictor that’s 75% accurate seems so outlandish to people. If I heard that a teacher was 75% accurate when performing this game in class just based on knowing their students, I wouldn’t think there is some hidden trick, I’d just think they’re a good judge of character.

I think the issue isn’t that people can’t imagine a 75% accuracy rate, it’s that people tend to overestimate the extent to which they personally are less predictable than others.

But the teacher has a vast amount of information about their students. How, exactly, does this computer make its decisions? It’s at best a profoundly creepy scenario.

Maybe a real one. Maybe Google can already predict this about each of us. But surely you acknowledge that that’s creepy, too.

(Or maybe you don’t find it so. I turned off some Google functionality because i found it too creepy. I realize this doesn’t change what Google knows about me. But it changes whether it rubs that in my face or not.)

The computer is like a school teacher that gives the worst students a gold star.

I must not understand the OP because it makes no sense to me. As I understand it:

  1. The computer knows for a certainty what I am going to do before I even do it.
  2. If I just take box #1, I will get one thousand dollars, but I will be leaving a million dollars in box #2.
  3. If I take both boxes, the million dollars will NOT be in box #2. It will be empty. Thing is, I still get the thousand dollars in box #1 because I took both boxes.

So, no matter what choice I make, I’m going to end up with one thousand dollars so, in essence, it doesn’t even matter what choice I make.

Presumably it also has a bunch of information on the people taking the test.

That would be a complaint for William Newcomb, but unfortunately he’s been dead since I was a single digit number of years old.

I imagine he had even more trouble with people fighting the hypothetical before the rise of modern AI :wink:

You can take the closed box and end up with a million dollars (assuming the computer predicted correctly).

No, because our infallible computer friend knows beforehand I will take both boxes, so box #2 will be empty.

The computer doesn’t know for a certainty. It hasn’t been wrong yet, there’s a difference.

Your choice is between $1,000 + the possibly empty box or just the possibly empty box.

Take JUST the closed box leaving the open box with $1,000 behind.

Then I’ll lose the thousand dollars, too, and end up with nothing.

Only if the AI was wrong. If the AI was right, and you take just the one closed box, then the closed box has a million inside.
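The four outcomes the thread keeps circling can be tabulated in one place. A sketch, using the thread's amounts ($1,000 open box, $1,000,000 closed box) and hypothetical labels for the prediction/choice pairs:

```python
# Newcomb payoff table: the closed box holds $1,000,000 only if the
# computer predicted you would take just the closed box.
payoffs = {
    ("predicted_one", "take_one"):  1_000_000,  # predictor right
    ("predicted_one", "take_both"): 1_001_000,  # predictor wrong: both prizes
    ("predicted_two", "take_one"):  0,          # predictor wrong: empty box
    ("predicted_two", "take_both"): 1_000,      # predictor right
}

for (prediction, choice), dollars in payoffs.items():
    print(f"{prediction}, {choice}: ${dollars:,}")
```

The table makes the two camps' logic visible at a glance: for either fixed prediction, taking both boxes is $1,000 better (the two-boxer's dominance argument), but if choice and prediction are strongly correlated, the realistic comparison is $1,000,000 vs. $1,000 (the one-boxer's argument).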