A search for “anthropomorphic” or “anthropic principle” in this thread turns up nothing (sorry, didn’t actually read the whole thing).
Why??
The issue in this riddle is exactly like that of the fundamental physical constants in the universe. Their probability of being favorable to life (or anything of note) is extremely low. Yet given the fact that we’re alive, if we were to ask what their values are the answer would be, “improbably perfect.” That’s the anthropic principle, and the constants are just like the coin you’re talking about.
Mind sharing how you got that answer? I’m getting something different
[spoiler]Say SB guesses heads with probability p. Then, if the coin flip landed heads, she wins the prize with probability p; if the coin flip landed tails, she wins the prize with probability 1 - p[sup]2[/sup] (the probability that she doesn’t guess heads both times). Since the coin is fair, the total probability that she wins is (p + 1 - p[sup]2[/sup])/2, which is maximized at p = 1/2.
On the other hand, p = phi - 1 is the frequency of guessing heads for which she has equal chance of winning the prize regardless of whether the coin flip was actually heads or tails. But 1 - p[sup]2[/sup] decreases faster than p increases (for p > 1/2), so phi - 1 cannot be the solution[/spoiler]
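For anyone who wants to check the algebra, here's a quick numerical sketch in Python (the function name is my own; it just grid-searches the win probability derived above):

[code]
# Win probability when SB guesses heads with probability p each waking:
# heads -> one waking, wins with probability p;
# tails -> two wakings, wins unless she guesses heads both times: 1 - p**2.
def win_prob(p):
    return (p + 1 - p**2) / 2

# Grid search over p in [0, 1]; the maximum sits at p = 0.5, not at phi - 1.
best_value, best_p = max((win_prob(i / 1000), i / 1000) for i in range(1001))
print(best_p, best_value)  # 0.5 0.625  (phi - 1 only yields ~0.618)
[/code]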
You’re right - I did entirely the wrong thing. Even though I knew what ought to be the right answer, somehow I tricked myself into thinking that couldn’t be it, and I can’t explain why I solved it by setting them equal instead of summing the probabilities.
It didn’t. It changed the odds of something happening in the future. She changed the odds from [50% cursed immediately/50% cursed after waking] to [50% cursed immediately/50% suicide after waking]. Causality remains undamaged.
Just to please Alex_Dubinsky, the philosopher Nick Bostrom has written a book called Anthropic Bias, which includes a discussion of the Sleeping Beauty paradox. His solution is superficially different from mine. However, I have the sneaking suspicion that when I work out the implications of my view, it will turn out to be equivalent to a fragment of his general theory of anthropic reasoning.
Bostrom rejects the standard thirder position that many in this thread are advocating. (I don’t believe that his criticisms apply to the case in which I myself am a thirder; see post #1.) Indistinguishable already presented the essentials of Bostrom’s critique with his “God flips a coin” scenario.
ETA: Just to elaborate a bit: Bostrom argues that the standard thirder argument relies on the premise that hypotheses that predict more observers in your reference class are more likely to be true. Without this assumption, the thirder argument just doesn’t go through. (For SB, the “observers” are the successive versions of herself on the days of the experiment.) This premise is what the “presumptuous philosopher” story is meant to criticize.
I’m being a little flippant about causality, but I’m not sure you’re correct about what happens.
Before she stabs herself, her correct assessment is that there’s a 100% chance she was cursed yesterday (assuming that in the original question, the thirders are correct). After she stabs herself, there’s a 50% chance she was cursed yesterday. That’s pretty counterintuitive.
There’s nothing at all counterintuitive about it. There is a 50% chance that she is cursed immediately, and a 50% chance that the curse takes effect when she next wakes up, if she does nothing to interrupt it. By committing suicide, she interrupts it.
Hell, ignore the coin flip, and just look at one of the two possibilities:
She is hit by a curse with a 100% chance of taking effect when she next wakes up. When she next wakes up, she kills herself before the curse can take effect, reducing the chance to 0%. This does not violate causality, it just means you forgot that there were ways to interfere with that 100% chance of success.
       / 50% - Is cursed, goes to sleep. Does not wake up.
100% <
       \ 50% - Is not cursed, goes to sleep. Wakes up, is cursed.
              / Commits suicide.
             <
              \ Does not commit suicide, goes to sleep, does not wake up.
I don’t think anyone here is claiming that the number of observers is affecting the probabilities of the initial coin toss. Instead, we’re saying the number of observers of a given hypothesis affects the fraction of all observers who are correct.
Lemme see if I can explain the original thesis in a different way.
There are four possibilities:
It’s Monday, and the coin is heads, and she’s asked the question.
It’s Tuesday, and the coin is heads, and she’s asked the question.
It’s Monday, and the coin is tails, and she’s asked the question.
It’s Tuesday, and the coin is tails, and she’s asked the question.
Each of these has the same probability of occurring–except for the second one. On Tuesday, if the coin is heads, she will not be asked the question.
So if she’s asked the question, there are three possible events, each of which has the same likelihood of occurring.
It’s like Monty Hall from this perspective. Her recalculation of the odds IS based on additional information: she has the information that she’s being asked the question.
As counterintuitive as it is, I think I have to go with the 1/3 answer.
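If intuition still balks, a quick simulation backs up the 1/3. This is just a sketch, assuming the standard setup (asked once if heads, twice if tails):

[code]
import random

# Count, across all times SB is asked, how often the coin sits heads.
heads_askings = 0
total_askings = 0
for _ in range(100_000):
    coin_is_heads = random.random() < 0.5
    askings = 1 if coin_is_heads else 2  # heads: Monday only; tails: Mon + Tue
    total_askings += askings
    if coin_is_heads:
        heads_askings += 1
print(heads_askings / total_askings)  # ~0.333
[/code]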
The 100% chance of being cursed is dependent on the expected infinite number of wakings. If this is interrupted, the 100% simply doesn’t apply. If she knew, for instance, that Prince Harming will come and rescue her after two days then she wouldn’t care about the curse. Altering the current situation is obviously new and relevant information with respect to calculating the probability of future events.
It would be far smarter for her to choose an arbitrarily small value x, and kill herself with probability 1/x any time she wakes.
Don’t get too down on yourself. Your general reasoning is right, it just takes a bit longer to take effect.
Say SB gets woken once for heads, and n times for tails. If n is one, then SB has the same chance of getting the prize regardless of her guessing strategy. For n > 1, the general solution is that she should guess heads with probability p = 1/n[sup]1/(n-1)[/sup] in order to maximize her chances of winning. So she should indeed favor a guess of heads; it just kicks in one step later, at n = 3 and beyond (n = 2 gives exactly p = 1/2).
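In case anyone wants to verify that closed form, here's a sketch (my own notation) checking it against a brute-force grid search over the win probability (p + 1 - p[sup]n[/sup])/2:

[code]
# Win probability with one heads-waking and n tails-wakings.
def win_prob(p, n):
    return (p + 1 - p**n) / 2

for n in range(2, 6):
    closed_form = 1 / n ** (1 / (n - 1))
    grid_p = max(range(10001), key=lambda i: win_prob(i / 10000, n)) / 10000
    print(n, round(closed_form, 4), grid_p)
# n=2: 0.5; n=3: ~0.5774; n=4: ~0.63 -- the optimal p climbs above 1/2.
[/code]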
The problem with the “presumptuous philosopher” argument is that we have no basis for a prior probability on whether we are in universe T[sub]1[/sub] or T[sub]2[/sub], in the way that SB knows that the coin is a fair coin. We could claim that the odds are 50-50, but that reflects only our uncertainty about which is the correct theory, not anything about the underlying nature of the universe. Really, the 50-50 for T[sub]1,2[/sub] shouldn’t reflect anything about the truth of the universe, since if we repeated the physicists’ experiment many times, we hope we would get the same result every time–unlike repeating the coin flip many times.
IMHO, it is counterintuitive, not only because probability is counterintuitive, but also because infinity is counterintuitive. On the other hand, SB could avoid this problem entirely by being a bit more subtle about her strategy. Instead of killing herself any time she wakes up, she should only do it with some probability, say p. If she isn’t cursed, she only offs herself needlessly with probability p; and if she is cursed, she still (eventually) kills herself 100% of the time. The time it takes for her to end the curse shouldn’t matter to her, so she just needs to choose p small enough to be acceptable.
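To put a number on that: for any nonzero p, the chance the curse outlasts n wakings is (1 - p)[sup]n[/sup], which goes to zero. A minimal sketch, assuming one waking in the uncursed branch and unboundedly many in the cursed one:

[code]
# Per-waking suicide probability p: a cursed SB who would otherwise wake
# forever survives n wakings with probability (1 - p)**n -> 0, so she
# (eventually) ends the curse with certainty; an uncursed SB, who wakes
# only once, dies needlessly with probability just p.
p = 0.001
for n in (10, 1_000, 100_000):
    print(n, (1 - p) ** n)  # decays toward 0 as n grows
[/code]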
What is it with me and screwing up my math in this thread?
x is arbitrarily large, or following Dr. Love, just choose a p arbitrarily small.
This involves another weird point about infinity in probability - the most ‘rational’ value of p would be 0, but she must choose some nonzero value in order for there to be any possibility that the event will occur.
And thanks for pointing out that the general trend in my experiment was correct, as long as you add more wakings in the tails case; I realized last night I ought to have pointed it out.
Okay, suppose it’s determined that we live in a multiverse in which exactly half the universes follow theory T[sub]1[/sub] and half follow T[sub]2[/sub]. The physicists are about to run the experiment that will determine which kind of universe we live in. Let’s suppose that the experiment is very expensive. Should we forgo the experiment? Do we already know that we’re in a T[sub]2[/sub]-universe with trillion-to-1 odds?
Let it be stipulated that the result of the experiment would always be the same. (It’s supposed to be decisive, after all. Whichever kind of universe we’re in, we’re always in the same one, so the experiment will always come out the same.) Why does it make a difference whether we perform it once or a trillion times?
ETA: Not to mention that we don’t know how many times the observers in the other universes are running the same experiment.
Actually, I still think the experiment has successfully teased two events out of what seems like one. The probability of her actually being in the heads-flow versus the tails-flow is .5. The probability of her experiencing the heads-flow at any time you ask her is 1/3.
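Those two numbers can be computed side by side; here’s a minimal sketch of the arithmetic (variable names are my own):

[code]
# Per run: the coin is fair, so she's in the heads-flow half the time.
p_heads_per_run = 0.5

# Per awakening: heads yields 1 waking, tails yields 2, so of every
# 3 equally weighted wakings, only 1 is a heads-waking.
expected_heads_wakings = 0.5 * 1
expected_total_wakings = 0.5 * 1 + 0.5 * 2
p_heads_per_awakening = expected_heads_wakings / expected_total_wakings

print(p_heads_per_run, p_heads_per_awakening)  # 0.5 0.333...
[/code]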