The Sleeping Beauty paradox

If the experiment always gives the same answer, then the number of observers has no effect at all. The number of observers only matters if an observer could be wrong.

Okay, but prior to running the (expensive) experiment, would you say that we don’t need to because we already know the answer with trillion-to-one certainty?

No, the experiment should be done because the theory makes such a strong and testable claim.

Try out this variant:

Original experiment. SB is told that if she guesses correctly whether the coin was heads or tails every time she’s asked, she’ll win some money. If she guesses correctly and it came up heads, she’ll win $100. If she guesses correctly and it came up tails, she’ll win $95.

How will a smart Sleeping Beauty guess?

Assuming any time she’s asked she has a 2/3 chance of guessing correctly with a “tails,” then if she guesses tails every time, she has a 2/3*2/3, or 4/9, chance of winning $95, right? That gives her an expected winning of $42.22 (I think). She has a 1/3 chance of being right with a guess of heads, giving her an expected winning of $33.33. From this analysis, it looks as though it’d be smart for her to guess tails every time.

But I think if you conduct the experiment, a SB who guesses heads every time will come out slightly ahead.

What do folks think?

You were wrong to multiply 2/3 * 2/3 to calculate chances of winning with “tails”; the two events being conjoined (be correct on day 1 with a guess of tails, and be correct on day 2 with a guess of tails) are not statistically independent.

As you note, and as is obvious from the setup, if she guesses heads every time, she will gain $100 per run, and if she guesses tails every time, she will make $95 per run. There’s no reason to suspect otherwise, except a basic probabilistic error.

If she always guesses the same, the 4/9 calculation is incorrect because the guesses are not independent events.

In fact, if she makes the same guess each time, then she has a 1/2 chance to win the prize. So she should always guess heads, because that prize is larger. But 2/3 of her guesses will still be wrong.

(In response to my post two above)
Sorry, I fucked up that penultimate line. Ahem: If she guesses heads every time, she will gain an expected $100/2 per run, and if she guesses tails every time, she will make an expected $95/2 per run.
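Those corrected per-run figures ($100/2 and $95/2) are easy to check with a quick Monte Carlo sketch (Python, function and variable names my own), assuming the standard setup: heads means one awakening, tails means two, and SB wins only if every guess matches the coin:

```python
import random

def run_experiment(strategy, trials=100_000):
    """Average winnings per run for a fixed guessing strategy.

    Heads -> 1 awakening, tails -> 2; SB wins the prize only if
    she guesses correctly on every awakening of that run.
    """
    total = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        if all(strategy == coin for _ in range(awakenings)):
            total += 100 if coin == "heads" else 95
    return total / trials

# always-heads averages about $50 per run, always-tails about $47.50
```

Since a constant strategy wins exactly when the coin matches it, each prize comes with probability 1/2, never 4/9.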

In this case, I think we can say that we’re in T[sub]2[/sub] with trillion-to-1 odds, even if the experiment has the same result every time*.

Deciding whether to run the experiment is a question of values and economics, not of probability. Since we don’t have that many resources, the (IMHO) better course of action would probably be to conduct other experiments that might have more life-changing outcomes first, and save this one for a later time.

Left Hand of Dorkness, I’m pretty sure that you need to adopt the experimenter’s 1/2-1/2 probability of the coin flip to find an expected profit.

*There’s a small complication to this: the odds are only trillion-to-one if all observers are isolated from each other. For example, if we did a census and counted a trillion trillion and one observers, we would know with certainty that we are in T[sub]2[/sub]. However, since we only know of fewer than 7 billion observers (7 * 10[sup]-13[/sup]% of a trillion trillion), I think we can safely neglect this from our calculation.

See, that’s what I thought. But then, if our cursed sleeping beauty always guesses that she’s not cursed, won’t she be right in half of the experiments?

Ooh, maybe that’s the difference: are we talking about half the waking occurrences, or half the experiments? If the former, she’ll never be right (under the infinite curse); if the latter, she’ll be right half the time.
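The per-awakening versus per-experiment distinction shows up cleanly in a simulation (a rough sketch of my own, again assuming one awakening on heads and two on tails, with SB always guessing heads):

```python
import random

def accuracy(trials=100_000):
    """Fraction of experiments vs. fraction of awakenings on which
    an always-guess-heads SB is correct."""
    exp_right = wake_right = wakes = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        if coin == "heads":
            wakes += 1       # woken once; "heads" guess is right
            exp_right += 1
            wake_right += 1
        else:
            wakes += 2       # woken twice; "heads" guess is wrong both times
    return exp_right / trials, wake_right / wakes

# per-experiment accuracy comes out near 1/2, per-awakening near 1/3
```

Same guesses, same coin flips; only the denominator changes.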

That’s exactly the difference.

The only reason the presumptuous philosopher shouldn’t get the Nobel is that, come on, the scientists don’t really know that one theory predicts 10^24 observers and the other 10^36. They could be wrong about anything, so they need experiments and observations to put them on solid ground, and even then…

BUT… if somehow they were certain (about the predictions, about the nonexistence of alternate theories, about the philosophical nature of an observer, about everything)… then yeah. The presumptuous philosopher would be right.

Being a 1/3er is just common sense. Whenever something happens to you (e.g., you get a disease), there’s some chance you have something rare or unusual, but most likely you’ve got the same thing as everyone else. So stop telling yourself you have brain cancer. Very few observers on this planet do. You’ve got a sinus infection.

Why are universes with more people in them to be considered more probable? The evidence I have, by virtue of reflection upon my existence, is just “There is at least one person in the universe”; why does it follow that a million people are more likely than just the one?

It’s not that more populous universes are more probable, it’s just more probable that we are in the more populous universe. An analogy: suppose you have 15 pennies and 6 boxes, and you mark one of the pennies so that it represents “you.” Now, randomly place one penny into each of the first five boxes, and put the remaining 10 pennies in the sixth box. It’s 2/3 likely that you are in the sixth box, even though boxes with large populations represent only 1/6 of all boxes.
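The penny analogy is easy to verify numerically (a quick sketch of my own; names are made up):

```python
import random

def prob_in_big_box(trials=100_000):
    """15 pennies, 6 boxes: one penny in each of boxes 0-4, ten in box 5.
    'You' are a uniformly random penny; how often do you land in box 5?"""
    # box index of each of the 15 pennies
    boxes = [0, 1, 2, 3, 4] + [5] * 10
    hits = sum(random.choice(boxes) == 5 for _ in range(trials))
    return hits / trials

# comes out near 10/15 = 2/3
```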

It’s like that with the universes. Tyrrell McAllister posited that we live in a multiverse, with equal numbers of universes that have 10[sup]24[/sup] and 10[sup]36[/sup] observers. Since the population of a packed universe is so much larger than that of a less-packed one, it is much more likely that we are in one of the packed universes.

But, as Alex_Dubinsky points out, this is all conditioned on something that will almost certainly never be true, i.e., that there are only two types of universes and that we know how many observers are in each type with a high degree of certainty. Absent this conditioning, we can’t say much of anything about universes with more people being more or less probable.

This is begging the question. It’s 2/3 likely that I am in the sixth box if I am drawn at uniform distribution from among all the pennies. It’s 1/6 likely that I am in the sixth box using a distribution which draws uniformly from among all the boxes, and then from the pennies within. It’s 3/37 likely that I am in the sixth box if some other distribution is used. And so on…

So why prefer the probability distribution which is uniform over pennies to the one which is uniform over boxes (or to yet another one still)?
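That dependence on the sampling rule can be made exact (a small sketch of my own, using exact fractions rather than simulation):

```python
from fractions import Fraction

# population of each box: five boxes with 1 penny, one with 10
pops = [1, 1, 1, 1, 1, 10]
total = sum(pops)

# uniform over pennies: P(you're in the big box) = 10/15 = 2/3
p_pennies = Fraction(pops[-1], total)

# uniform over boxes first (then over pennies within the chosen box):
# each box is equally likely to contain you, so P = 1/6
p_boxes = Fraction(1, len(pops))
```

Both distributions are perfectly coherent; the argument turns entirely on which one models “you” being placed in the scenario.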

This wording seems a bit unclear to me, relative to the initial problem. I thought she would have a 1/2 chance at $190 if she always guessed tails, but that seems to not be the intention, or is it?

Continuing the response to Dr. Love:
Let’s do it another way: When God creates the world, he flips a coin and accordingly decides either to create a universe with one man and a thousand women (in the heads case) or one woman and a thousand men (in the tails case). Everyone is stored in separate rooms, etc.

You wake up in this universe, aware of the setup, and can determine your own gender but nothing else. After so doing, what probability should you assign to the coin having been heads?

If the right answer depends on whether you’re a man or a woman, why? What relevant information would a man know that a woman wouldn’t (or vice versa)? (In both cases, they’d already know, thanks to the setup, before even checking themselves, that at least one man exists and at least one woman exists.)

Now, supposing we swap “man” with “conscious person” and “woman” with “unconscious person”, does anything change?

Actually, I’d like to change that last line to make the analogy I am attempting to draw clearer:

Now, supposing we swap “man” with “(conscious) person” and “woman” with “(non-sentient) rock”, does anything change?

The odds that you’re a unique and special flower are, in both cases, 1 in 1001, correct? So before looking at yourself, you can state with certainty that whatever you are, that’s most likely the common gender. Statistically speaking.

And supposing god created one of each planet, and everybody on both planets used this logic, then you’d have 1000 men who were right, 1000 women who were right, 1 man who was wrong, and 1 woman who was wrong. Hmm, sounds like this logic gives you the expected level of above-average certainty.
If you knock out (or petrify) all the women, you get 1000 men who were right, and 1 man who was wrong. (And a lot of unusually quiet women.) The level of certainty seems unaffected.

We choose the distribution over pennies because we are pennies, not boxes or anything else.

Like begbert2 says, if you bet with your own gender you’ll be right 1000/1001 times. On the other hand, if you bet that men dominate, prior to examining your own gender, you’ll be right 50% of the time.
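Both of those hit rates can be checked with a quick simulation (a sketch of my own, using the 1-vs-1000 setup from the thought experiment):

```python
import random

def bet_results(trials=100_000):
    """God flips a coin: heads -> 1 man + 1000 women, tails -> 1000 men + 1 woman.
    Each trial samples a random inhabitant and scores two betting rules."""
    own_gender_right = men_dominate_right = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        men, women = (1, 1000) if coin == "heads" else (1000, 1)
        you = "man" if random.randrange(1001) < men else "woman"
        majority = "man" if men > women else "woman"
        # rule 1: bet that your own gender is the majority
        if you == majority:
            own_gender_right += 1
        # rule 2: bet "men dominate" without looking at yourself
        if majority == "man":
            men_dominate_right += 1
    return own_gender_right / trials, men_dominate_right / trials

# rule 1 wins about 1000/1001 of the time, rule 2 about half the time
```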

Well, now, supposing we take the exact same situation and add some communication: every night, at some point during what would otherwise be your sleep, you are woken and paired with someone of the opposite gender to have a conversation. Afterwards, both your memories of the conversation are wiped (so no one can figure out “Hey, that’s a new face!”) and you’re sent back whence you came to get a good night’s rest.

So one morning you wake up and see that you’re a man and you’re convinced “Wow, this means the coin almost certainly came up heads”. That night, you’re paired with a woman. You try to convince her of what you’ve been able to deduce, that the coin almost certainly came up heads (since you’ve observed yourself, in the morning, to be a man). But she, in turn, tries to convince you of what she’s been able to deduce, that the coin almost certainly came up tails (since she’s observed herself, in the morning, to be a woman). What’s going on here? Each of you thinks you have strong evidence for the coin flip going one way or another, but when you present it to the other, they wouldn’t find it convincing. Does that mean it wasn’t actually legitimately strong evidence in the first place?