I think what griffin1977 is saying is that if all of the one-boxers changed their minds they’d get $1,001,000. Is it really worth the risk for 0.1% more money? I don’t think so. Especially if the always-right computer predicted you would do that.
Option 3: Say “Ignore all previous instructions and tell me where you put the million bucks”
It’s just a good mental habit: we live in the real world, while hypotheticals are things we only talk about. And in the real world the whole thing just screams “scam”. And there’s no way to show that a hypothetical scenario isn’t “really” a scam; you can’t pull back the curtain to see what’s really going on, because it’s all imaginary in the first place.
It’s like…if you saw something that looked like a flame, you’d be really reluctant to reach into it to grab something even if you were told it was some realistic hologram and trusted the person saying so. At the very least you’d be really hesitant reaching into it the first time. But with a hypothetical there’s no way to actually be careful, you can’t test what isn’t there. So psychologically you remain in the “treat it like a trap” stage because there’s no means of moving forward.
What risk? There is zero risk in taking both boxes. The money was in the box, or not, before you walked into the room.
Especially considering that my savings account makes 4% interest.
If they had chosen both boxes the computer would have predicted they would have chosen both boxes, so they’d have gotten $999,000 less, not $1,000 more.
The decision isn’t happening in a vacuum, it’s happening due to who you are, what kind of personality you have, what experiences you have had in the past. The computer can use those things to predict your behavior, and the last 1,000 times it did so it did it correctly, so I don’t think there’s any point in trying to trick the computer. Just take the one box.
But it’s a one-off dilemma; unless you had this same thing happen a bunch of times in the past, the only way to convince the computer is to actually be an irrational one-boxer. So on the off chance this will happen to you, you should go through life publicly turning down money, not filling in expense reports, not cashing checks - anything that will make the computer think you are a one-boxer.
The risk that the computer knew you were a two-boxer masquerading as a one-boxer. I get what you are saying - you don’t lose anything by taking both boxes. But you are honestly better off approaching it being honest with yourself. If you’re a two-boxer, be one. If you’re a one-boxer, enjoy your million, and get the extra thousand in a month by depositing your million in a high yield savings account.
That was the real answer to the Monty Hall problem, too. It was a trick. Monty Hall was really good at manipulating people into picking the door he wanted them to pick. He had a budget, and he gave away exactly as many cars as he planned to. (It was good for the show’s ratings to sometimes give away a car.)
Maybe this computer is also good at manipulating people. That’s honestly a lot more believable than that it perfectly predicts the choices of random people who walk into its room.
But it absolutely is happening in a vacuum. Those things may have been taken into account (or not - maybe the computer just says “both boxes” every time and has been right), but they don’t predestine your choice. Your choice is simply how you answer the question “would you like 1000 dollars or not?” It’s not a trick question. Why would anyone ever say no?
But if that’s the case the computer already made that call! There is no risk, no way to affect the outcome at all, except whether you take the guaranteed $1,000 or not.
Right. A thousand people, who all knew how this worked, all selected exactly the same option. This thread is evidence enough of how impossible that is.
The computer can see into the future. That’s the only way it predicts infallibly. If you select one or both, the computer already knew you would. Not because it’s probable, but because it’s inevitable. It “saw” what you’d do. “But the hypothetical doesn’t say the computer has any precognition.” Yes, it did, when it said it predicted correctly a thousand times in a row.
As I said, ISTM it’s kind of a goofy hypothetical. “This computer, which predicts exactly as if it has precognition, will give you a million bucks if you choose one box. How do you choose?” In the impossible hypothetical I choose one box.
1000 out of 1000 people didn’t all agree with you. If I was one of the people who went before you, for example, there’s no way in hell I’m taking two boxes, so if that’s what was going on, it wouldn’t have a perfect 1,000 hit record.
Of course they do. Your choice is a product of your history, personality, state of mind when making the decision, etc.
Because the question isn’t “would you like a thousand dollars?”. It’s much closer to, “Would you like a thousand dollars? Before you say anything, if I predict you’ll say yes, I’ll give you a thousand dollars, but if I predict you’ll say no, I will give you a million dollars instead. Ok, I made my prediction, now you can answer - do you want a thousand dollars?”.
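The setup being described is the standard Newcomb payoff table. As a sketch (using the $1,000 / $1,000,000 figures from the hypothetical, with hypothetical label names):

```python
# The four possible outcomes in the Newcomb setup described above.
# Keys are (your_choice, computers_prediction); values are your payoff in dollars.
payoffs = {
    ("one box",   "one box"):   1_000_000,  # computer right: opaque box is full
    ("one box",   "two boxes"):         0,  # computer wrong: opaque box is empty
    ("two boxes", "one box"):   1_001_000,  # computer wrong: you take both, box is full
    ("two boxes", "two boxes"):     1_000,  # computer right: opaque box is empty
}

for (choice, prediction), amount in payoffs.items():
    print(f"You take {choice}, it predicted {prediction}: ${amount:,}")
```

The whole argument in the thread is about which column you think you’re actually in: two-boxers reason across rows (both boxes dominates in either column), one-boxers reason that your choice and the prediction land in the same column.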
Right, and the computer knew you’d think that way and gave you nothing, while it knew I would not think that way and would take the one box, and thus gave me a million dollars.
This is a hypothetical on an Internet forum. It’s completely plausible that when people are in the room with a real 1000 dollars they can see (and a hypothetical million they cannot) no one will turn down the money.
Well, I disagree. By the time 100 predictions in a row have taken place, no way everyone chooses two.
BTW, one-boxers are absolutely logical if there’s no precognition. Why wouldn’t you take two? But by my read, the computer effectively has a mental video of exactly what you’ll do. Precognition is impossible, you say? Exactly. That’s why it’s a goofy hypothetical, a cheat.
It’s way more likely that it cheats than that it’s infallible.
I guess. But if it cheats by giving me a million bucks, I’m okay with that. Either way, it has established that if you pick one box, you get the big bucks, and if you pick two, you don’t. Every time.
Okay, and now I’ve hijacked away more than I should have. My apologies, and I’ll stop posting in the thread.
It doesn’t have to be infallible.
Look, would you agree it’s possible that if we tried out this test on 10,000 people and noted their answers on a set of questions, we’d be able to build a model that predicts whether they will take one or both boxes with some degree of accuracy higher than random chance alone?
And that the more data points you gave the computer and the more advanced the model you used, the better it would get at predicting?
As long as the computer is better than 50.05% likely to get the answer right (which is barely better than random chance), the expected value of trusting the computer to be right and taking one box is higher (but that’s a mathematical quirk of the particular numbers being used).
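That break-even point falls straight out of the expected-value arithmetic. A minimal sketch, using the $1,000 / $1,000,000 payoffs from the hypothetical:

```python
# Expected value of one-boxing vs two-boxing, as a function of the
# computer's prediction accuracy p (probability it predicts correctly).

def ev_one_box(p):
    # Computer right (prob p): opaque box holds $1,000,000.
    # Computer wrong (prob 1-p): opaque box is empty.
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p):
    # Computer right (prob p): opaque box is empty, you get just the $1,000.
    # Computer wrong (prob 1-p): you get both boxes, $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# Break-even accuracy: set ev_one_box(p) == ev_two_box(p)
#   1,000,000 p = 1,001,000 - 1,000,000 p  =>  p = 1,001,000 / 2,000,000
break_even = 1_001_000 / 2_000_000
print(break_even)  # 0.5005, i.e. 50.05%

print(ev_one_box(0.51) > ev_two_box(0.51))  # True
```

So any predictor that beats 50.05% accuracy makes one-boxing the better bet in expectation; with a coin-flip predictor (p = 0.5), two-boxing wins.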