So you’d cut off your finger to increase the EV of the outcome? Knowing there can be no causal relationship between removing a finger and getting a million dollars?
No, and it doesn’t follow. At all.
But I’d potentially be curious about such an odd association.
What are your numbers for this fewer-than-ten-fingers bit?
Let’s imagine that it’s half and half each choice, not far off from our polling.
495 of the 500 one-box folk were missing a finger, and none of the two-box folk were. Only 5 of the 505 ten-fingered folk chose one box. All one-boxers got a million dollars, including those 5 ten-fingered folk. All two-boxers left with a thousand dollars.
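Just to make the arithmetic of that toy population explicit, here’s a quick sketch. It’s pure illustration; the counts and payouts are only the ones stipulated above:

```python
# Toy population of 1000 players, exactly as stipulated above.
# (label, count, payout_each): every one-boxer won $1,000,000,
# every two-boxer left with $1,000.
groups = [
    ("9 fingers, one box",  495, 1_000_000),
    ("10 fingers, one box",   5, 1_000_000),
    ("10 fingers, two box", 500,     1_000),
]

def expected_value(rows):
    """Average payout across everyone in the given subgroup."""
    total = sum(count for _, count, _ in rows)
    return sum(count * payout for _, count, payout in rows) / total

one_box = [g for g in groups if "one box" in g[0]]
two_box = [g for g in groups if "two box" in g[0]]
nine_f  = [g for g in groups if g[0].startswith("9")]
ten_f   = [g for g in groups if g[0].startswith("10")]

print(f"EV given one box:    ${expected_value(one_box):>12,.0f}")  # $1,000,000
print(f"EV given two boxes:  ${expected_value(two_box):>12,.0f}")  # $1,000
print(f"EV given 9 fingers:  ${expected_value(nine_f):>12,.0f}")   # $1,000,000
print(f"EV given 10 fingers: ${expected_value(ten_f):>12,.0f}")    # ~$10,891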
Or are there finger-deficient folk who chose two boxes as well?
Something is very special in this population if all the finger-missing folk are doing the action that gets them a million, and very, very few of the fully (fooly) fingered folk are.
What exactly is the scenario here? How do you imagine your freak association getting higher predictive value than the predictive value of being a person who will end up picking one box?
In this absurd case we can replace missing a finger (or several) with any other attribute or set of attributes that the computer has identified as being predictive of the final decision. Yeah, trivial for the computer if those missing a finger (or more) always choose one box, never two, and those with ten fingers almost always go for two.
If I were actually confident that is what it was deciding by, then … I’d leave my fingers alone because, as I said in a different thread, a million would not change my life much but a missing finger would annoy me. And I’d look intently at the five who had ten fingers and still chose one box and won the million.
In the actual scenario, the best way for me to be predicted to be a one-boxer, whatever it is that it reads as the tell that such will be my final choice, is to be that person: one who is convinced, and remains convinced, that such is the rational choice.
Doesn’t matter; we aren’t talking about the EV of taking one vs. two boxes, it’s the EV of choosing to open the box (or boxes) with 10 vs. 9 fingers.
The expected value of opening the box with 9 fingers is far, far higher than with 10 fingers. Why is that EV any less valid than the EV in the one vs. two case? Why “should you” choose one box because that’s what the EV says, but not cut off your finger because that’s what the EV says?
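To put numbers on that, using the toy population from upthread and its stipulated payouts:

EV(9 fingers) = (495 × $1,000,000) / 495 = $1,000,000
EV(10 fingers) = (5 × $1,000,000 + 500 × $1,000) / 505 ≈ $10,891

Both are computed exactly the same way as EV(one box) = $1,000,000 and EV(two boxes) = $1,000: condition on an observed correlation, average the payouts, no causal story required.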
So what do you recommend to your friend who’s drowning in debt and about to be made homeless with his family? The difference in EV between the 9- and 10-finger cases would absolutely be worth more to him than the loss of his finger. You’ll recommend that he “should” cut off his finger for the increase in EV? Even though you know it cannot change the outcome?
The chance that there is an organization that really gives away a million or a thousand dollars is small, but the risk is worth taking. The chance that they have a bunch of confederates who have ganged up to fake giving each other a million or a thousand bucks is not small, but taking one box is still worth the risk.
On the other hand, the chance is higher that the previous people are all in cahoots and just get off on people cutting off their own fingers.
Both of those scenarios are much, much more likely than a computer that guessed correctly 1000 times in a row. Do you know how unlikely that is? I believe the whole world could play until the heat death of the universe and not have the computer guess right like that by chance. But I could be off by a couple zeroes on that. In fact, I’d say it’s much more likely that my understanding of causality is wrong. So given the very unlikely scenario of 1000 people cutting or not cutting off their fingers and getting either a million or a thousand dollars, then yes, the right thing to do would be the amputation. But like I said before, if someone is making that big of an ask of me, the chances of there being some sort of trick increase immensely.
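A back-of-the-envelope check on that, assuming each of the 1000 predictions would otherwise be an independent 50/50 guess (the trial budget below is a deliberately generous made-up bound):

```python
import math

# Chance of calling 1000 independent 50/50 choices correctly by pure luck.
p_exp = 1000 * math.log10(0.5)   # ≈ -301.03
print(f"p ≈ 10^{p_exp:.0f}")     # p ≈ 10^-301

# Generous budget: ~10^10 players, one game per second, for ~10^100 seconds
# (a loose heat-death-scale figure). That's ~10^110 total games.
trials_exp = 10 + 100
print(trials_exp + p_exp)        # ≈ -191: luck falls short by ~190 orders of magnitude
```

So “off by a couple zeroes” is safe; chance alone misses by nearly two hundred orders of magnitude.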
Why does any of that matter? Remember, we know, 100% definitely for sure, that neither cutting off your finger nor leaving box A will cause box B to have a million bucks in it. The money is either there or not when you walk in the room. But in both cases (“leave box A” and “cut off your finger”) the expected value is larger. So if you “should” leave box A, then you “should” cut off your finger.
There is a very simple yet very important difference.
Cutting off your finger is not causally connected with finding a million dollars in the box.
Picking one box is not causally connected with finding a million dollars in the box.
Picking one box is causally connected with being the sort of person who is predicted to pick one box.
Being the sort of person who is predicted to pick one box is causally connected with finding a million dollars in the box.
No it’s not. There is no way the choice can affect a prediction that has already been made, any more than cutting off your finger can change that prediction. We have no idea what factors were included in that prediction, but we absolutely know your future choice cannot be one of them, because it hasn’t happened yet.
Of course it does. Is it 90% of million-dollar winners and 90% of thousand-dollar winners that have fingerpenia? Or is it 90% of one-box pickers and none of the two-box pickers?
If the EV is already optimized by being a one-boxer, then why do anything else?
As to my soon-to-be-homeless friend? A friend can always stay with me for a bit.
I am going to ask him about Bugattis. What model?
He who is without fingers knows about picking some Bugattis.
It is 100% clear that you would, with great probability, be a two-box picker. I don’t need to be a supercomputer or have precognitive abilities to make that assessment. You would get a thousand dollars.
Yup. If somehow I and the supercomputer had predicted wrong about you and you actually went with the opaque box only, because even the supercomputer had not seen the signs that you’d flip to one box at the last minute, then you’d end up with nothing. Unless the computer was good enough at reading past patterns to have predicted even that.
I don’t know what those associated patterns might be. In your hypothetical maybe they include missing one or more fingers, given how many of them picked one box and won! (Is this a deli meat slicer convention? A reunion for those who played with big fireworks as kids?)
Step one to win (at least) the million dollars is to have the computer predict that you will be a one-box picker. If it has pegged you as a two-box picker, you can only win a max of a thousand dollars. Again, you get a thousand. Enjoy!
The best way to convince the computer that I am a one-box picker, and to get it to put the million in the box, is to be that person. Whatever the things are that it reads, they will then be there. Not to be someone who pretends to be that person and then switches. Or who even starts out sincerely planning to take one box but can be easily swayed to take both. Those people apparently had some sign ahead of time that the computer flagged and categorized as “two-boxer,” and at that point it left the opaque box empty. No, getting the computer to read you as a one-boxer requires actually being a person who will follow through and do that when the time comes. Who has that right stuff.
You’re in the “don’t understand causality” group. Congratulations, the computer has given you a gold star!
What do you think causes the supercomputer to put the million dollars in the opaque box?
My take is that it has identified some attribute or set of attributes that strongly correlate with particular future behaviors. It can discriminate between a group of people who will see every single one-box picker getting a million dollars and no two-box picker getting a million dollars and say “just the opaque box, please”, and another group who will see the same thing and say “whatever is in the opaque box is now fixed and I can no longer change it. Causality! I want the thousand as well as the now-fixed unknown. Two boxes, please!”
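For concreteness, here’s a minimal sketch of that reading: the predictor as nothing more than a classifier over pre-game attributes, with the box contents fixed before the choice. The attribute rule is purely hypothetical; it just mirrors the finger example from upthread:

```python
# Hypothetical stand-in for whatever the supercomputer actually reads.
def predict_choice(attributes: dict) -> str:
    # Illustrative learned rule only, echoing the toy data upthread.
    return "one box" if attributes.get("fingers", 10) < 10 else "two boxes"

def fill_boxes(attributes: dict) -> dict:
    """Contents are fixed *before* the player chooses anything."""
    opaque = 1_000_000 if predict_choice(attributes) == "one box" else 0
    return {"opaque": opaque, "transparent": 1_000}

def payout(boxes: dict, choice: str) -> int:
    # The choice only selects among already-fixed contents.
    return boxes["opaque"] + (boxes["transparent"] if choice == "two boxes" else 0)

# When prediction matches behavior, you get exactly the pattern described above:
print(payout(fill_boxes({"fingers": 9}),  "one box"))    # 1000000
print(payout(fill_boxes({"fingers": 10}), "two boxes"))  # 1000
```

The point of the sketch is that nothing precognitive is needed: the money is placed from the attributes alone, and the choice only selects among contents that are already fixed.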
So far it has correctly sorted the future behaviors every time.
My conclusion is that when I do choose a single opaque box it will have also sorted me correctly, by whatever algorithm it uses. And when you choose to take both it will have also used its algorithm to identify your future choice correctly.
Yes I am going to get the gold star of a million dollars in this game and you are going to walk away with a thousand. Not sure if that counts as a star at all. Seems more like getting the home game version …
This is where this whole argument feels weakest to me.
We have no information about how the computer knows, nor about how individuals make their choice when actually faced with the choice.
I think you (and others) are arguing that being confident today in one’s choice to pick one box is a prime predictor for whether one will pick one box in the future. But we don’t know that, or have any reason to believe that is more true than any other possibility.
What if stated choice today has minimal or no correlation to what people actually do in the moment?
We don’t actually know how to “be that person”… not “that person who picks one box”, but “that person who the computer thinks will pick one box”, which are not the same thing.
In which case all the discussion about being, in advance, the kind of person who picks one box seems irrelevant.
No, it doesn’t matter whether you are confident in your decision today. It matters what you’ll end up doing when push comes to shove.
Sure we do! Since the computer has a pretty much perfect track record, and since I don’t imagine that I am unique and special compared to everyone else, I can think of a good way to ensure the computer predicts that I’ll take one box - and that is to actually take one box.
I don’t care how the computer decides that I’m a one-box person, just that it does.
Thank you for writing the post i was struggling to write.
You have much more faith than i have. And hey, maybe that’s what the computer is basing its predictions on. I’m pretty sure it’s going to peg me as a two boxer, if it’s that accurate, because i fundamentally don’t trust it. So if i take one box, i walk away with nothing.
What if stated choice today has minimal or no correlation to what people actually do in the moment?
I explicitly state that it does not. Not in the “actual” game.
It is running some algorithm that is very good at predicting the behavior that people will do in that moment. We don’t know how that works but we know it does.
I can easily think of those who would pick the opaque box only and those who would take both as having essential differences in cognitive worldview, etc., with our individual decisions then being very predictable given whatever dataset turns out to be needed to discriminate between us.
In this game those of us whose cognitive worldview is such that we will, in that moment, no matter what we say now, pick one box, are consistently being correctly identified and get a million dollars. Those of us whose cognitive worldview is such that we will, in that moment, pick both, are being correctly identified and get a thousand. No one so far is getting zero. No one so far is getting $1,001,000.
The decision each person makes is both what seems a rational choice made in the moment, as much as every choice is, and an expression of our essential characters, which is predictable.
“Should” is immaterial. It is the simple fact that those of us who pick only the opaque box are getting a million dollars and those who pick both are getting a thousand. No precognition required.
If you pick the opaque box only, it has been able to read that your nature was such that you were going to do that, and likewise for the other choice. My essential nature, given that setup, is to be that person.
You have much more faith than i have.
I’m looking at the empirical evidence presented to me in the hypothetical. It seems to me the only reason not to trust that evidence is faith in something else (like a belief in libertarian free will that’s impossible for a computer to predict).
I’m pretty sure it’s going to peg me as a two boxer, if it’s that accurate, because i fundamentally don’t trust it
A lot of people have expressed this, but I don’t really see where it’s coming from. What’s there to distrust here? Are there a bunch of rich sickos watching through a one-way mirror, excited at the prospect of me falling for it and taking an empty box?
Right. This is essential character. All the empirical evidence in the world, that a thousand, a hundred thousand times before, every time someone has picked one box they’ve gotten a million and every time someone has picked two they’ve gotten a thousand, isn’t swaying them. That essential character trait is at least theoretically knowable. And the computer was able to sort them into their baskets ahead of time.
how to “be that person”… not “that person who picks one box”, but “that person who the computer thinks will pick one box”, which are not the same thing.
Returning to this. Empirically they have been the same thing every time. There has not been even one case when they were not.
Yes. And all the people who “won” previously are probably fake. I’ve seen people win three-card monte, too.
It’s a flipping hypothetical in which we are told, as a postulate, they are not fake.