I think the idea is that everything that happens after you end up a dollar down is independent of everything that happened up to that point. And I think that can be made to work, but it needs a few details filled in.
Edit: I can fill in the details just from realizing that Indistinguishable is leaning on independence, but it would probably be helpful for the rest of the thread to get it spelled out.
How does one lose $x eventually? Well, first you eventually lose a dollar; this has some probability p. Then, after you’ve done so, you eventually lose another dollar; this also has probability p, even conditioned on what’s come before (for it hardly matters what’s come before at this point; this is, as you say, the independence property). Then, right after you’ve done all that, you eventually lose another dollar; again, probability p. You do this x times, coming out to a total probability of p^x.
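For anyone who wants to poke at this numerically, here is a rough Monte Carlo sketch in Python (my own, assuming the thread's weighted coin: P(win a dollar) = 2/3, P(lose a dollar) = 1/3, with "eventually" approximated by capping each walk at 500 flips, which the upward drift makes a safe stand-in):

[code]
import random

# Rough Monte Carlo sketch: with P(win a dollar) = 2/3 and P(lose a dollar) = 1/3,
# estimate the chance of ever going $x down.  "Eventually" is approximated by
# capping each walk at MAX_STEPS flips; a walk that never reaches -x within the
# cap is counted as never losing.

MAX_STEPS = 500
TRIALS = 10_000

def eventually_loses(x):
    """True if a capped random walk ever reaches a net loss of x dollars."""
    net = 0
    for _ in range(MAX_STEPS):
        net += 1 if random.random() < 2/3 else -1
        if net == -x:
            return True
    return False

for x in range(1, 6):
    estimate = sum(eventually_loses(x) for _ in range(TRIALS)) / TRIALS
    print(f"x = {x}: simulated {estimate:.4f}   (1/2)^x = {0.5**x:.4f}")
[/code]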
That was an error on my part, breezing through the last part that I thought no one would be interested in. I should have said we know A + B = 1.
Frankly I am always amazed at the length of discussion on SD about probability problems (like the three doors) which seem to engender so many intuitively-must-be-correct answers that are wrong, yet keep getting brought up again and again even after they’ve been proven wrong. I’m glad this thread hasn’t had any of that vehemence.
It’s exactly like that: the probability of sequence A followed immediately by sequence B is the product of their individual probabilities [in a “memoryless” situation like our coin-flipping]. So the probability of X many sequences of the form such-and-such in a row is exponential, with base the probability of that sequence form.
In the P(X straight losses) cases, the sequence form of interest is “one tail”.
In the P(eventually losing X dollars) case, the sequence form of interest is “a finite sequence of heads and tails whose cumulative tally ends at one more tail than heads, without ever reaching that tally before the end”.
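Here is a small enumeration sketch of that second sequence form (assuming the weighted coin with P(heads) = 2/3 and P(tails) = 1/3; the length cutoff is arbitrary). Summing the probabilities of all such strings up to a given length should creep up toward 1/2:

[code]
from itertools import product
from fractions import Fraction

# Enumerate strings whose running tally first reaches "one more tail than heads"
# exactly at the final flip, and sum their probabilities with exact fractions.

P_HEAD, P_TAIL = Fraction(2, 3), Fraction(1, 3)

def is_dollar_loser(seq):
    """True if the running tally hits -1 for the first time at the final flip."""
    net = 0
    for i, flip in enumerate(seq):
        net += 1 if flip == 'H' else -1
        if net == -1:
            return i == len(seq) - 1
    return False

total = Fraction(0)
for length in range(1, 18, 2):          # such strings always have odd length
    for seq in product('HT', repeat=length):
        if is_dollar_loser(seq):
            total += P_HEAD ** seq.count('H') * P_TAIL ** seq.count('T')
    print(f"up to length {length}: {float(total):.6f}")
# The partial sums slowly approach 1/2, matching P(eventually losing one dollar).
[/code]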
You’re in the .999 = 1? thread. Haven’t the actual mathematicians there been telling 7777777 over and over that he can’t treat infinity as a number? You’ve done it so long he’s picked it up and parrots it back as if it somehow proves his point.
Of course infinity can be manipulated with the proper tools. When have I ever said otherwise? What I do say is that you cannot think about infinity in the same way that you think about finite numbers. Which is what every mathematician here says with more precision and notation.
In the larger picture, my observation is the opposite of yours. It is used to start a discussion of the proper procedures for infinity, and never as an answer in and of itself.
Sorry, I messed you up with my late editing, so my example doesn’t correspond to yours. Let’s stay with yours, since you did the math, as they say:
So you are saying that P(HHTTT in some order) = ½?
I don’t get it. P(HHTTT in that order) = ½[sup]5[/sup], and there are 5C2 such runs containing 2 heads. So:
P(HHTTT in some order) = ½[sup]5[/sup] x 10 = 0.3125
Well, the problem is with the implication you claim should be drawn, that therefore
That’s just the wrong conclusion to reach (as the course of this thread has demonstrated). Also, I disagree that you can’t use ‘normal procedures’ to give an answer. It’s not that there is some set of mathematical procedures that are ‘normal’, but which you can’t apply to infinity, and some other, ‘special’ class of procedures that the initiated can use to manipulate it. Rather, you generalize what you have, and see what still works, and discard what doesn’t anymore; you do precisely what you would do with any other mathematical concept (and in the right context, this includes treating infinity as a number).
This may be true in some general sense, but as in the analogy in my previous post and as in this specific problem, it is decidedly not true if a well-defined relationship exists between quantities as they tend to infinity, like the “tendency to a limit” in calculus. The point being, as Half Man Half Wit points out, that although you start out with a plausible enough generality, I think you do end up with the wrong conclusion in this instance.
Where the infinite sum is over all strings of heads and tails that have one more tail than heads, and in which every initial segment short of the whole string has at least as many heads as tails.
No, that’s not what I’m saying. I apologize for being unclear.
I am saying that P(X many blahs in a row) = P(a single blah)[sup]X[/sup], for any set of sequences considered “blahs”.
In the relevant case, let’s say a sequence is a “dollar-loser” if it meanders around doing whatever the hell it wants till eventually reaching a net one dollar loss, stopping as soon as it first does so. So, for example T is a dollar-loser, and so is HTT, and so is HHTHTTT. But H is not a dollar-loser [because it never reaches a net one dollar loss], nor is HTTHT [because it doesn’t stop as soon as it first hits a net one dollar loss].
Then we can say P(X many dollar-losers in a row) = P(dollar-loser)[sup]X[/sup].
P(dollar-loser) will be P(T) + P(HTT) + P(HHTHTTT) + all the various other ones. This happens to come out to 1/2 with our weighted coin. But even if it didn’t, we would still have P(X many dollar-losers in a row) = P(dollar-loser)[sup]X[/sup].
And this is also the probability of eventually losing a bankroll of $X, as the only way to do so is by having X many dollar-losers in a row [it would be different if one could lose more than one dollar at a time, but one cannot].
ETA: Oh, I see where the confusion was coming from [thanks to Lance Turbo]. It also turns out that P(dollar-loser) = 1/2 in this example. Lance Turbo perfectly explains what I’m saying.
Even though this isn’t what I’m saying, keep in mind:
You’ve forgotten that our coin is biased. Heads are twice as likely as tails. So P(HHTTT in some order) is actually (2/3)^2 * (1/3)^3 * 5C2 (approx. 0.1646).
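A quick sanity check of that figure (just a brute-force sketch over all 32 five-flip sequences, assuming P(heads) = 2/3 and P(tails) = 1/3):

[code]
from itertools import product
from math import comb

# Probability of exactly two heads in five flips of the biased coin,
# computed directly and by brute-force enumeration.

direct = (2/3)**2 * (1/3)**3 * comb(5, 2)

brute = sum(
    (2/3)**seq.count('H') * (1/3)**seq.count('T')
    for seq in product('HT', repeat=5)
    if seq.count('H') == 2
)

print(direct, brute)   # both come out to about 0.1646
[/code]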
Yes, of course (if the blahs are independent, which in this case they are).
Ah, so I *really* had misunderstood you before, and I had previously missed your post #8. Now I’m caught up…
Still some things I am missing. The polynomial equation yielded by the recurrence and by the relation P(X losses) = p[sup]x[/sup] (might as well call P(1) p, right?) is
p[sup]x[/sup] = 2/3 p[sup]x+1[/sup] + 1/3 p[sup]x-1[/sup]
Easy enough to verify that 1 and 1/2 are roots of that equation, but how do we know those are the only roots? For instance 0 is a root that never gets a mention.
Also, regarding p being equal to 1: we can’t dismiss that out of hand in the proof that P(100) <> 1, can we?
I just know I am being dense now. Apologies for that.
No need to apologize; no one understands everything till they figure it out.
0 is not actually a root: let’s plug in x = 1 to make this simpler, getting p = 2/3 p^2 + 1/3. Note that p = 0 is not a solution of this. As a quadratic equation, this has just the two roots you’ve noted: p = 1 and p = 1/2.
As for why we know the relevant root is 1/2 and not 1, this is on general grounds that tell us the relevant root will be the smallest one >= 0. [In fact, we know much more: the relevant solution to the recurrence relation f(x) = 2/3 f(x + 1) + 1/3 f(x - 1) with f(0) = 1 will be the pointwise minimum of all the solutions everywhere >= 0]. An argument for this is spelt out in post #23, but I might still aspire to making it clearer.
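As a quick exactness check of the x = 1 equation p = 2/3 p[sup]2[/sup] + 1/3, here is a small sketch using exact fractions for the three candidate values that have come up:

[code]
from fractions import Fraction

# Check which of the candidate values 0, 1/2, 1 actually satisfy
# p = 2/3 p^2 + 1/3, using exact rational arithmetic.

def satisfies(p):
    p = Fraction(p)
    return p == Fraction(2, 3) * p**2 + Fraction(1, 3)

for candidate in (0, Fraction(1, 2), 1):
    print(candidate, satisfies(candidate))
# 0 -> False, 1/2 -> True, 1 -> True: only 1/2 and 1 are roots of the quadratic.
[/code]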
Ah, more confusion (on my part); I had hastily remarked that:
p=0, p=1/2 and p = 1 are solutions to :
p[sup]x[/sup] = 2/3 p[sup]x+1[/sup] + 1/3 p[sup]x-1[/sup]
and supposed that they were the solutions you spoke about.
Now I see your two roots for the quadratic given when x = 1; I need only assure myself that p = 1 is not possible. I shall read the relevant posts. Thanks so far.
Just for laughs, I ran some Poisson simulations where Losses outnumber Wins by 100 (i.e. one goes bust) when the individual probability of a Loss is 1/3.
Though it’s remotely possible to go bust, you’re hugely more likely to die of old age first.
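For a sense of why, here is a minimal sketch of the walk itself (a plain coin-flip simulation with P(loss) = 1/3 per flip, rather than the Poisson setup above; the session length and count are arbitrary). It tracks the worst deficit ever reached, which, since P(ever being $k down) = (1/2)[sup]k[/sup], will essentially never come anywhere near $100:

[code]
import random

# Track the worst deficit reached over a number of long sessions of the
# biased walk with P(loss) = 1/3 per flip.

SESSIONS = 200
FLIPS_PER_SESSION = 50_000

worst = 0
for _ in range(SESSIONS):
    net = 0
    for _ in range(FLIPS_PER_SESSION):
        net += -1 if random.random() < 1/3 else 1
        worst = min(worst, net)

print("worst deficit seen across all sessions:", -worst)
# Typically on the order of ten dollars -- nowhere near a $100 bust.
[/code]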