Here’s a question for all the math wizards on the boards. I have a feeling it should be easy enough to figure out, but I can’t remember my stat classes for the life of me.
Let’s say I’m going to flip a coin (w/exactly 50% chance of landing on either face) an indefinite number of times. Projecting from now, before I start flipping, on which flip can I say that I am 90% certain that the number of heads has equalled the number of tails at least once during the progression of tosses? How do I calculate this, short of charting out every possibility?
In case that wasn’t clear, let’s say I’m flipping a coin and using the results to modify x. The initial value of x is zero. Every heads result increases x by one, every tails decreases x by one. On what flip can I be 90% certain that x has reached 0 again at least once?
This article is what prompted me to ask: http://www.rednova.com/news/display/?id=126649#121
I’m trying to write a program that works on a similar principle to the one being utilized in these tests, so I can maybe personally evaluate their validity.
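If it helps to see what I'm after, here's a rough Monte Carlo sketch in Python of the kind of thing I have in mind (the function names, the 100,000 trials, and the 200-flip cap are all just placeholders I made up, not anything from the article):

import random

def first_return_flip(max_flips=200):
    # Simulate one run of flips; return the flip number at which heads
    # and tails first even out again, or None if that never happens
    # within max_flips.
    x = 0
    for flip in range(1, max_flips + 1):
        x += 1 if random.random() < 0.5 else -1
        if x == 0:
            return flip
    return None

def flip_count_for_90_percent(trials=100_000, max_flips=200):
    # Estimate the smallest (even) flip count by which at least 90% of
    # runs have already returned to even.
    results = [first_return_flip(max_flips) for _ in range(trials)]
    for n in range(2, max_flips + 1, 2):
        frac = sum(1 for r in results if r is not None and r <= n) / trials
        if frac >= 0.9:
            return n, frac
    return None

print(flip_count_for_90_percent())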
One thing to keep in mind is that you can’t be in state 0 after an odd number of flips, so that complicates things a bit.
I’ll have to think about the problem some more–I know I’ve seen an analysis of it, but I threw away those notes a long time ago. It sounds like a variation on the gambler’s ruin problem; perhaps you could search on that.
Actually, I do know for a fact that the probability of heads and tails being even at some point converges to 1 as the number of flips increase, from having looked at random walks before. So it’s gotta be 90% eventually.
You’re begging the question; is there necessarily any such flip? Your assumption seems to be that as you flip the probability of this mounts up and will reach 90%, but I see no reason why that should be.
After two flips the probability of exactly half heads and half tails so far is 50%. After four it is 6/16 = 37.5%; so as you go on the probability actually lessens.
The question is the same as: how many digits does a binary number have to be before 90% of the binary numbers of that many digits are composed of equal numbers of 0s and 1s? I’m fairly sure there will be no such animal.
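(For what it's worth, those "exactly even at flip n" probabilities are just C(n, n/2) / 2^n, and a couple of lines of Python, purely as a sketch, show them shrinking:)

from math import comb

for n in (2, 4, 6, 8, 10):
    print(n, comb(n, n // 2) / 2**n)
# prints 0.5, 0.375, 0.3125, 0.2734375, 0.24609375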
Like Cabbage, I’ve seen the analysis that shows that a symmetric random walk starting out at the origin will, with probability 1, return to the origin.
I don’t think I explained my question clearly; I am asking for the point at which I can be 90% certain that the number of heads has equalled the number of tails at least once.
I designed the program so that the statistical operations would be as simple and algorithmic as possible; the whole process resets whenever heads and tails equal out, so I’m really only concerned with progressions that don’t even out at all before they hit a statistically meaningful threshold.
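To give a concrete idea of what I mean by resetting (this is only a sketch of the idea; the threshold of 14 and all the names are placeholders, not my actual program):

import random

def run_lengths(num_flips, threshold=14):
    # Track how long each "run" lasts before heads and tails equal out
    # again; a run that reaches the threshold without evening out is the
    # kind of progression I'm interested in.
    x = 0
    length = 0
    lengths = []
    for _ in range(num_flips):
        x += 1 if random.random() < 0.5 else -1
        length += 1
        if x == 0 or length >= threshold:
            lengths.append(length)
            x = 0       # reset the tally whenever heads and tails even out
            length = 0  # (or once the run hits the threshold)
    return lengths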
As a seasoned Las Vegas devotee, I think I can say your hypothesis there is missing one important point:
At no time during the series of flips does the size or shape of the coin (object) change. Therefore, assuming the tosses are not designed to ensure one outcome or the other (such as throwing the coin straight up without turning it over), the outcome of each throw is 50/50. This does not change on any throw.
If the probability changed at all, games such as roulette could be figured out mathematically and bet to the player’s advantage using only red and black bets. The green 0 and 00 spaces are the house’s only advantage, if you only bet red or black. If there were no 0 and 00, the house advantage would be zero, but then again, so would yours.
I derived this from the info on the random walk website provided by Cabbage. After calculating it out to 8 flips by hand, I just dropped it into Excel and followed the obvious trend on the graph. I believe about 12 or 14 flips should serve my purposes. I think I can feel it out from here based on how much data gets logged.
I’m not a math person, though, in case you couldn’t tell, so I’d love to hear any and all further thoughts on this. Thanks a ton, guys
Actually, Rico, Askance is correct; the probabilities of even numbers of heads and tails after 2 and 4 flips are 1/2 and 6/16, respectively. For example, in four flips, the possibilities are:
HHTT
HTHT
HTTH
THTH
TTHH
THHT
What Askance missed was that the OP is not interested in the probability of even heads and tails at the nth flip, but the probability of even heads and tails at some point during the course of the n flips.
For 4 flips, your answer is .6875, which is 11/16. I believe the right answer is 10/16, but I’m pretty sure that there are 16 possible outcomes and each has a mirror twin, so the numerator has to be even.
Weird: for 6 flips there are 2^6 = 64 outcomes, but .78515625 times 2^6 is 50.25, which isn’t a whole number of outcomes, so that figure can’t be right either.
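It’s easy enough to brute-force the “even at some point” probability for small numbers of flips, which settles the 10/16 question; here’s a quick Python sketch (nothing fancy, it just walks through all 2^n sequences):

from itertools import product

def prob_even_at_some_point(n):
    # Fraction of the 2^n flip sequences in which heads and tails are
    # equal at least once along the way.
    hits = 0
    for seq in product((+1, -1), repeat=n):
        x = 0
        for step in seq:
            x += step
            if x == 0:
                hits += 1
                break
    return hits / 2**n

for n in (2, 4, 6, 8):
    print(n, prob_even_at_some_point(n))
# prints 0.5, 0.625 (= 10/16), 0.6875, 0.7265625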
This is closely related to the so-called “Ballot Problem”: if you’re counting ballots in a two-party election, and Candidate A gets m votes out of a total of N, then how many ways are there to count the ballots in such a way that candidate A is always ahead in the counting? The answer (digging out my old combinatorics textbook) is
(N)C(m) * (N - 2m + 1)/(N - m + 1)
where (N)C(m) = N!/((N-m)! m!) is a binomial coefficient. So if you’re looking for the number of possible sequences of length N for which the accumulated number of heads is always ahead of the number of tails, you would just sum over m:
(total number of such walks) = 2 * (Sum from m = 0 to N/2) [(N)C(m) * (N - 2m + 1)/(N - m + 1)]
and the probability of a sequence that “never flips sides” would be this number divided by 2[sup]N[/sup]. (There’s an extra factor of two above to account for the fact that we were only looking at sequences in which “heads” is always in the lead, as opposed to “tails”.)
I get that the number of sequences which do not flip sides (plugging this into Mathematica) is:
 N   number of sequences   prob. of such a seq.
------------------------------------------------
 1             2                 1.0
 2             4                 1.0
 3             6                 0.75
 4            12                 0.75
 5            20                 0.625
 6            40                 0.625
 7            70                 0.5469
 8           140                 0.5469
10           504                 0.4922
12          1848                 0.4512
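(For anyone without Mathematica, the sum above is easy to evaluate directly; this Python sketch just restates the same formula, nothing new:)

from math import comb
from fractions import Fraction

def sequences_that_never_flip_sides(N):
    # 2 * Sum over m of (N)C(m) * (N - 2m + 1)/(N - m + 1),
    # exactly the expression given above.
    total = sum(Fraction(comb(N, m) * (N - 2*m + 1), N - m + 1)
                for m in range(N // 2 + 1))
    return 2 * total

for N in (1, 2, 3, 4, 5, 6, 7, 8, 10, 12):
    count = sequences_that_never_flip_sides(N)
    print(N, count, float(count / 2**N))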
This seems to be at odds with what other people have said above, so maybe I’m misinterpreting the question.
What you need to do is calculate the probabilities of “first even appearance” for flips 2, 4, ..., n and see whether the sum ever reaches .9.
I don’t believe they will, but I’m uncertain. Calculating first appearance quickly becomes very involved and complicated.
For 2 flips, it’s .5
For 4, it’s .125
For 6, it’s .0625
But my calculation for 8 is .0390625. This result surprised me, as I would have anticipated the figure to be .03125 based on the previous numbers, but I can’t find a mistake in my math.
If the correct figure for 8 is actually .03125, it seems reasonable to believe the probability for each two additional flips will continue to be halved, and the sum will never reach .9.
As a side note, while the ratio of heads to tails is likely to become more and more even as the number of flips increases, the absolute difference is likely to grow.
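For small n, those first-appearance figures can be double-checked by brute force (a quick Python sketch, which for what it’s worth backs up the .0390625):

from itertools import product

def prob_first_even_at(n):
    # Fraction of the 2^n flip sequences whose FIRST return to equal
    # heads and tails happens exactly on flip n.
    hits = 0
    for seq in product((+1, -1), repeat=n):
        x = 0
        first = None
        for i, step in enumerate(seq, start=1):
            x += step
            if x == 0:
                first = i
                break
        if first == n:
            hits += 1
    return hits / 2**n

for n in (2, 4, 6, 8):
    print(n, prob_first_even_at(n))
# prints 0.5, 0.125, 0.0625, 0.0390625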
Your calculation is exactly correct. The general formula for the probability that the first occurrence of equal heads and tails is on flip 2n is:
(2n)C(n) * 0.5^(2n) / (2n - 1)
The OP asks when the sum of the above probabilities reaches 0.9. The answer is when n=32, or after 64 flips.
This is indeed a variation on the “Ballot Counting Problem”, where one candidate has to lead by one vote after (2n - 1) flips, and have led from the beginning, with the 2n’th flip creating the first tie. It’s discussed on page 122 of Introduction to Probability Models (7th edition) by Sheldon Ross.
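A few lines of Python confirm the 64-flip answer, just by summing that formula until it passes 0.9 (nothing here beyond the expression above):

from math import comb

total = 0.0
n = 0
while total < 0.9:
    n += 1
    # probability that the first tie occurs on flip 2n
    total += comb(2*n, n) * 0.5**(2*n) / (2*n - 1)
print(n, 2*n, total)
# prints 32 64 0.9006...  i.e. the 90% mark is first reached at 64 flips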