give me a second, I want to revise this post
Ok. But instead of saying “My gender is chosen at random, and then the gender of the first person I meet is automatically made the opposite to match”, why not just as well say “It’s the gender of the first person I meet which is chosen at random, and then my gender is automatically made the opposite to match”? So that instead of thinking “I’m male? There are far more males in heads-world, so the coin probably came up heads. Meeting a female gives me no new information on top of this, so I’ll stick with the conclusion of probably heads”, you instead start out by thinking “The first person I meet is female? There are far more females in tails-world, so the coin probably came up tails. Learning that I’m male gives me no new information on top of this, so I’ll stick with the conclusion of probably tails”?
[In the above, in case it’s not clear, by “at random”, I mean, of course, “by selecting from among everyone in the universe with equal probability”]
In this scenario, you know two genders, yours and the other person’s. You know that your own gender wasn’t ‘selected’ for you; even if you were selected for the experiment, the fact that you happened to be the particular member of the experiment you actually were just happened. (As opposed to you being the guy in the next cell over, I mean.) If you decide to ignore the fact that you are just one of the crowd, you are ignoring information.
Whether you’re one of the common gender or the special flower, you know that the female was specifically non-randomly chosen to match with you, because you were told that you would be meeting with a female before the experiment began. The fact is you can reasonably say that both people, you and her, were chosen specifically for the other from the limited set of the opposite gender. (Suppose that everyone was promised they’d meet an opposite-gendered person; in each case the pairing had to be non-randomly set up, though with a one-person set on one side, selecting the member from that side was pretty easy.) And because these pairings are non-random, and don’t rule out any of the possible scenarios, they can’t give you new information or change your estimate of the odds. However, they also don’t change the separate fact that you are a red ball randomly selected from a bag with a 1 in 1000 chance of selecting the rare ball, so that consideration remains.
It occurs to me now I should have used a different example to try asking some of the questions I was originally getting at. I’ve been caught up in my own stupid mistake and confused things needlessly; I apologize. To wit, the setup should have been to have the coin choose between (heads: 1 man and 1000 women) vs. (tails: 10 men and 1 woman).
Now, if I wake up and discover myself to be a man, what’s the probability that the coin was heads?
Eh, I’m not sure “mistake” is the way to characterize it. The old setup was good for asking some questions, but the way things have gone, this new setup is the one to use to ask the questions I want to ask now. Well, whatever. Things are so far removed from where they started, it’s hard to remember what was being used to make what point.
You’ve just exceeded my feeble powers of probabilistic math (which are more based on intuitive situational understanding than, uh, math). I’d bet the odds still favor the 10:1 having happened, though I couldn’t tell you by how much.
And yeah, shifting analogies get hard to keep track of pretty quickly.
Yeah. Probably best to be up front with what I’m trying to do at any given moment. The thing I’m trying to illustrate now is, if one is saying the 10:1 situation is favored, is one now casting aside the principle “Universes with more people in them are a priori correspondingly more likely”? (Did you actually subscribe to that principle? I forget; well, Dr. Love and various other people did, explicitly or implicitly).
Not me. I figure that, given two officially equally likely universes and no other information, the universes are equally likely. The fact that you are a thinking being in one of the universes tells you only that you are in a universe with at least one thinking being - it’s a self-filtering set, given that if you were not a thinking being, you would not be asking the question. If there were two possible universes, one with one man and a thousand rocks, and one with one rock and a thousand men, then waking up as a man tells you nothing, since your chance of ‘waking up as a rock’ (or ‘waking up as nonexistent’) was zero either way, so your odds of waking up as a man, given that you woke up at all, were 100% either way.
Edit - actually, wait a minute - I just convinced myself that the larger universe is more likely. Let me think about this…
Blast it, I think you just broke my brain. The situation naturally filters out the impossible cases (like you not existing, or like you being a rock), but you do have to consider the likelihood of your event occurring, per situation. So in the original (as in, the original original) situation, we have: a coin is flipped; based on the coin flip, either one ball is put into the bag, or two balls are put into the bag. Then one ball is drawn from the bag - what does it tell you about whether there is another ball still in the bag?
It tells you nothing. Because your odds of drawing one ball from the bag either way were 100%.
You just turned me into a halfer. Blast you.
Which doesn’t change the fact that if there are both men and women, in 1000-to-1 proportions depending on a coin flip, finding yourself to be a man is not 100% either way, and the fact that your odds of being a man do vary depending on the prevalence of men does reflect back on the odds of the ‘1000 men’ situation occurring. (And, stab in the dark, I think finding yourself a man in a world of 10:1 is 100 times more likely than finding yourself a man in a world of 1:1000.)
So, we’re officially at the point where we can no longer just intuit the answers (or, at least I can’t), and we actually have to break out Bayes’ Theorem. We want the probability that the coin was heads, given that I am a man, or symbolically, P[H|M]. We can find some things easily, like the probability of being a man given the coin was heads, P[M|H] = 1/1001, and P[H] = 1/2. The total probability of being a man is P[M] = 0.5 * 1/1001 + 0.5 * 10/11 = 911/2002 (the probability of being a man with either coin flip multiplied by the probability of that coin flip). Put these together to get P[H|M] = 1/911.
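As a sanity check on that arithmetic, here’s a quick sketch in Python using exact fractions (the variable names are mine, not anything from the thread):

```python
from fractions import Fraction

# Priors: a fair coin.
p_heads = Fraction(1, 2)
p_tails = Fraction(1, 2)

# Likelihoods of finding yourself a man:
# heads-world has 1 man among 1001 people; tails-world has 10 men among 11.
p_man_given_heads = Fraction(1, 1001)
p_man_given_tails = Fraction(10, 11)

# Total probability of being a man.
p_man = p_man_given_heads * p_heads + p_man_given_tails * p_tails

# Bayes' theorem: P[H|M] = P[M|H] * P[H] / P[M].
p_heads_given_man = p_man_given_heads * p_heads / p_man

print(p_man)              # 911/2002
print(p_heads_given_man)  # 1/911
```

Exact fractions avoid any floating-point fuzz, so the 1/911 comes out on the nose.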
So, the actual numerical result is not intuitive, but we can still get something out of it. A while back I made the statement that “We choose the distribution over pennies because we are pennies, not boxes or anything else,” and I stand by it. The difference is that, this time, only men are the pennies.
Some warning: As I was typing the above, I began to wonder: suppose you first noticed that you were alive, increasing the probability of heads to 100/101, and then noticed that you were a man. Running the same calculation, I get P[H|M] = 10/1001, which is close to 1/911, but not exactly the same. I think I’ll have to spend some time thinking about this to maybe have a chance at hammering it out… I have a suspicion that it’s a sampling issue, so your voice wouldn’t count the majority of times when the coin comes up tails.
Ok, great, so I can see what your methodology is. But now, supposing we collapsed the gender distinction (you are unable to figure out your own gender), so that we’re just looking at 1001 people vs. 11 people. What’s the probability of heads?
Well, carrying out similar calculations, we get P[H | M or W] = P[M or W | H] * P[H] / P[M or W] = (1001/1001) * (1/2) / [0.5 * 1001/1001 + 0.5 * 11/11] = 1/2.
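Running the same style of calculation in Python (again my own sketch) makes it obvious why the populations cancel out:

```python
from fractions import Fraction

p_heads = Fraction(1, 2)

# With the gender distinction collapsed, every inhabitant counts,
# so the observation "I am someone" is certain under either flip.
p_obs_given_heads = Fraction(1001, 1001)  # = 1
p_obs_given_tails = Fraction(11, 11)      # = 1

p_obs = p_obs_given_heads * p_heads + p_obs_given_tails * (1 - p_heads)
p_heads_given_obs = p_obs_given_heads * p_heads / p_obs

print(p_heads_given_obs)  # 1/2
```

Since both likelihoods are 1, the posterior just equals the prior, no matter how lopsided the populations are.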
Which means heads and tails are equally likely. Which means, even though the one leads to a universe with nearly 100 times as many people as the other, that doesn’t cause it to be any more probable. Which destroys the “More populous universes are correspondingly more likely” principle.
Quantum mechanics is a model that attempts to describe and predict observations in the real universe.
The Sleeping Beauty paradox is math and logic only - it is pure invention and construction, as is all math. Finding an inconsistency in a logical framework is entirely different than in hard sciences.
And usually, there is an accepted resolution to a mathematical paradox (other than, “oh, well, that’s how the universe works. Sucks to be us.”)
As might be inferred from my prior post (though I didn’t actually state it), I concur. More populous universes are not more likely. (Unless they have some other observable factor, like questioner-being-male, which is not equally probable in both cases.)
(You halferated me. Blast you.)
See, this is exactly where (I think) my sampling error in my last post is coming from. We’re equivocating between two different problems. Let me try to clarify.
Suppose we have two bags: one (bag H) contains one red ball and 1000 blue balls, while the other (bag T) contains 10 red balls and 1 blue ball. A coin is flipped and bag H is selected for heads, bag T for tails. We can discuss two different problems:
Problem 1: The experimenter reaches into the selected bag and hands you a ball. At some point in the experiment you may be allowed to check the color of the ball.
Problem 2: You and 1000 other people are taken into separate rooms, each containing a pneumatic tube. You are told that, during the course of the experiment, a ball may or may not come through the tube. At some point in the experiment you may be allowed to check the color of the ball. The experimenters, having selected either bag H or T, distribute the balls to the rooms by randomly placing them in the other end of the tubes, which are then blocked to prevent any person from getting multiple balls.
In Problem 1, which both you and I worked out, getting a ball tells you nothing about which bag was selected (as you said), but getting a red ball decreases your estimate of P[H|R] to 1/911.
In Problem 2, the mere fact that you receive a ball gives you information about the likely outcome of the coin flip. I get: P[R or B | H] = 1, P[H] = 1/2, P[R or B] = 1 * 1/2 + (11/1001) * 1/2 = 46/91. Then P[H | R or B] = 91/92.
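Both problems can be checked side by side with exact fractions (a sketch of my own, following the numbers above):

```python
from fractions import Fraction

p_h = Fraction(1, 2)

# Problem 1: the experimenter always hands you a ball from the chosen bag,
# so only its color is informative.
p_red_given_h = Fraction(1, 1001)   # bag H: 1 red, 1000 blue
p_red_given_t = Fraction(10, 11)    # bag T: 10 red, 1 blue
p_red = p_red_given_h * p_h + p_red_given_t * (1 - p_h)
p_h_given_red = p_red_given_h * p_h / p_red

# Problem 2: 1001 rooms but bag T holds only 11 balls, so merely
# receiving a ball at all is strong evidence for bag H.
p_ball_given_h = Fraction(1001, 1001)
p_ball_given_t = Fraction(11, 1001)
p_ball = p_ball_given_h * p_h + p_ball_given_t * (1 - p_h)
p_h_given_ball = p_ball_given_h * p_h / p_ball

print(p_h_given_red)   # 1/911
print(p_h_given_ball)  # 91/92
```

The two posteriors pull in opposite directions, which is exactly the equivocation at issue.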
It seems to me that Problem 2 is the correct way to look at these problems of God creating different worlds.
Why is it that you think the approach of your Problem 2 is the correct way to look at the problems of God creating different worlds? (My position, in case it’s not clear, is that there is no privileged “correct” way; just different perspectives for different purposes).
How about Problem 3, which is exactly like your Problem 2, only it starts with “You and 8 million other people are taken…” instead? What makes Problem 2’s perspective more “correct” than Problem 3’s? And so forth…
Here’s something for all the thirders in the thread to ponder (including me, who’s a thirder only provided that a certain technical condition, described in the OP, is met):
First, let’s review the question that the experimenters ask of SB. Quoting from the OP: “What is your credence now for the proposition that our coin landed heads?”
Now, in the Wikipedia version of the paradox, the coin is flipped only after SB goes to sleep Sunday night. But let’s make the insignificant change (I hope!) that the coin is flipped in another room before she goes to sleep on Sunday night. Suppose further that the experimenters also ask her their question on Sunday night, after the coin is flipped, but before she has taken any mind-bending drugs. Are we all agreed that her answer to the question at this time ought to be “1/2”?
Let’s let SB[sub]before[/sub] be the version of SB who’s asked the question before she goes to sleep on Sunday night. Let SB[sub]during[/sub] be a version of SB who’s asked the question after she wakes up on either Monday or Tuesday (it doesn’t matter which, so long as it’s still a day of the experiment).
We thirders are in the strange position of affirming the following:
[ul]
[li]SB[sub]before[/sub] has enough information to be certain about what credence SB[sub]during[/sub] ought to have.[/li][li]SB[sub]during[/sub] has enough information to be certain about what credence SB[sub]before[/sub] ought to have had.[/li][/ul]
And yet we also hold that
[ul]
[li]SB[sub]before[/sub] and SB[sub]during[/sub] ought to have different credences.[/li][/ul]
This strikes me as a very strange position to be in if one holds that the information that one has ought to determine one’s credences. For it then follows from the third statement above that SB[sub]before[/sub] and SB[sub]during[/sub] do not have the same information. Well, that alone isn’t too surprising. After all, one of them is prior in time, and the other got an amnesia drug. But then the puzzler is this: If they do not have the same information, how is it that each can be certain about what credence the other ought to have?
Can anyone think of a scenario not involving things like amnesia, subjectively indistinguishable clones, multiple universes, etc., in which two epistemic agents ought to have different credences in some proposition, but each one knows what credence the other ought to have?
I’m coming in late to this, just read the first half of the first page of thread, so shoot me if desired.
Seems to me it depends on definition of probability. If the definition of probability in this case is:
# Times Heads / # Times Experiment Run
then it’s 1/2
If the definition of probability in this case is:
# Times Heads / # Times Beauty Asked the Question
then it’s 1/3
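A quick simulation (my own sketch, not anyone’s posted code) shows that both definitions are self-consistent; they just count different things:

```python
import random

random.seed(0)
runs = heads_runs = awakenings = heads_awakenings = 0

for _ in range(100_000):
    runs += 1
    if random.random() < 0.5:   # heads: Beauty is asked once
        heads_runs += 1
        heads_awakenings += 1
        awakenings += 1
    else:                       # tails: Beauty is asked twice
        awakenings += 2

print(heads_runs / runs)              # ~0.5  (per experiment)
print(heads_awakenings / awakenings)  # ~0.333 (per awakening)
```

Same coin, same experiment; the halfer and thirder answers fall out of which denominator you divide by.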
Regarding that technical condition, which I just reviewed (sorry, the thread’s wandered a bit since then) - why would the “random calendar” make any difference? If I understand correctly, SB doesn’t know what order the calendar’s symbols will appear in, just that each will appear on a unique day during the course of the longest possible outcome of the experiment. So, supposing it was a one-day/two-day experiment, and the symbols were B and C in no particular order, what would she learn by looking at the random calendar and seeing a ‘C’?
As far as I can tell, all the calendar tells her is, “today is day 1 or day 2” - which she already knew before she looked at it. How does that help her any, or change anything?
As far as I can (now) see, the basic problem with the thirder argument is that it erroneously assumes that all ‘awakenings’ are equally probable. That is, it says, “when she wakes up, it could be Heads-Day1, Tails-Day1, or Tails-Day2 - three options, in one of which heads is true; therefore, the odds of heads is 1/3”. This is of course only true if the chances of each option are equal - which they aren’t. In actuality, there’s a 50% chance that it’s Heads-Day1, a 25% chance that it’s Tails-Day1, and a 25% chance it’s Tails-Day2.
If she doesn’t know what day it is, then clearly, the summed probability of the (single) heads-is-true option is 50%. (And the summed probability of the tails-is-true options is 25%+25%, so 50% as well.) If she does know what day it is, it’s either certainly tails (if it’s day 2), or it’s 2-1 in favor of it being heads, given that it’s a 25% chance versus a 50% chance.
So, once you get past the incorrect assumption of equal probability of all cases, I don’t see any way to add up the probabilities to get 1/3rd chance of heads. Can you point it out to me?
We start with a prior probability of 1/2 that the coin lands heads:
p(coin lands heads) = 1/2,
p(coin lands tails) = 1/2.
Note that, if the coin lands heads, then C only has a 1/2 chance of appearing during the experiment. But if the coin lands tails, then C is certain to appear during the experiment. That is,
p(C appears during experiment | coin lands heads) = 1/2,
p(C appears during experiment | coin lands tails) = 1.
Hence, the prior probability that C appears during the experiment is
p(C appears during experiment)
= p(C appears during experiment | coin lands heads) * p(coin lands heads) + p(C appears during experiment | coin lands tails) * p(coin lands tails)
= 1/2 * 1/2 + 1 * 1/2
= 3/4.
Therefore, the probability that the coin lands heads given that C appears during the experiment is
p(coin lands heads | C appears during the experiment)
= p(coin lands heads) * p(C appears during the experiment | coin lands heads) / p(C appears during the experiment)
= (1/2 * 1/2) / (3/4)
= 1/3.
When SB looks at her calendar, she learns that C appears during the experiment. Therefore, her credence that the coin landed heads should be p(coin lands heads | C appears during the experiment) = 1/3.
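The calendar calculation above can be checked mechanically with exact fractions (my own sketch, following the thread’s numbers):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)

# If heads, only day 1 happens, and C is day 1's symbol half the time.
p_c_given_heads = Fraction(1, 2)
# If tails, both days happen, so C is certain to appear.
p_c_given_tails = Fraction(1, 1)

# Prior probability that C appears at all during the experiment.
p_c = p_c_given_heads * p_heads + p_c_given_tails * (1 - p_heads)

# Bayes: P(heads | C appears).
p_heads_given_c = p_c_given_heads * p_heads / p_c

print(p_c)              # 3/4
print(p_heads_given_c)  # 1/3
```

So the single datum “C appeared” does shift the posterior from 1/2 down to 1/3, exactly as derived above.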
Couple of points:
(1) Considering some other simpler paradoxes such as the Boy or Girl paradox, you might think that SB shouldn’t condition merely on “C appears during the experiment”. Instead, you might argue, she should condition on the stronger fact that “C appears today”. I tried to argue in posts 30 and 31 for why I don’t think that that reasoning applies in this case.
(2) But nonetheless, my thinking on this problem, and on credence in general, has gradually become more “problematised” over the last few days. I may soon come to disagree with what I posted in the OP and in this post. It’s certainly not the whole story as it stands. (BTW, Indistinguishable, this state of flux is why I haven’t yet replied to your question from post #2.)