It’s much easier to note that, if S_n represents your cumulative change in wealth at time n (so S_0 = 0 and |S_{n + 1} - S_n| = 1), then (1/2)^{S_n} is a martingale. Then you have the full range of martingale theory at your disposal, and killing this problem becomes a simple application of the optional stopping theorem with the appropriate details filled in.
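The martingale property here is just a one-step expectation identity, which can be checked directly. A minimal sketch (assuming, per the OP's game, that the walk steps +1 with probability 2/3 and -1 with probability 1/3):

```python
# Check E[(1/2)**S_{n+1} | S_n = s] = (1/2)**s for the walk that
# moves +1 with probability 2/3 and -1 with probability 1/3.
p_up, p_down = 2 / 3, 1 / 3
for s in range(-10, 11):
    one_step = p_up * 0.5 ** (s + 1) + p_down * 0.5 ** (s - 1)
    assert abs(one_step - 0.5 ** s) < 1e-9, s
print("one-step martingale identity holds at every tested state")
```

The algebra behind it: (2/3)(1/2) + (1/3)(2) = 1/3 + 2/3 = 1, so the conditional expectation of (1/2)^{S_{n+1}} given S_n is exactly (1/2)^{S_n}.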
I was with you until this.
Let’s make it simpler. Let’s say I flip a fair coin an infinite number of times. Common sense suggests I’ll get Tails many times (indeed, an infinite number of times). But there’s one infinitely-long result set that has no Tails at all: { Heads, Heads, Heads, … }.
Now, the probability of this set happening is zero. But so is the probability of any particular infinitely-long result set, and the universe has to “choose” one. This is why we can only say it almost certainly won’t happen.
In the OP, there are actually an infinite number of infinitely-long result sets where the player never goes broke.
We can easily be rigorous even without knowing any martingale theory. Let me rephrase what I was saying in post #14:
It’s clear that the probability of eventually losing x dollars is the x-th power of the probability of eventually losing one dollar. So all that has to be done is determine the probability of eventually losing a dollar.
There are two ways one can eventually lose a dollar:
[ul]
[li]One can immediately lose the dollar. Let us say this means of losing a dollar has “rank” 0.[/li]
[li]Alternatively, one can first gain a dollar, then lose a dollar, then lose a dollar again. Let’s say this means of losing a dollar has “rank” equal to 1 + the maximum of the “ranks” of the two noted dollar losses within it.[/li][/ul]
[Basically, we’ve set up a recursive specification of how to lose a dollar, and by the “rank” of a particular means of losing a dollar, we mean the depth to which we have to unfold the recursive clause in matching it against this specification]
Thus, if f(n) is the probability of losing a dollar by a means of rank less than n, we find that f(0) = 0, while f(n + 1) = 1/3 + 2/3 * f(n) * f(n), which is to say, f(n) = T[sup]n[/sup](0), where T(x) = 1/3 + 2/3 * x^2.
The probability we are interested in is the limit, as n goes to infinity, of this quantity. As T is an order-preserving continuous function from [0, 1] to itself, this must be the least fixed point of T. This is straightforwardly seen to be 1/2, so the probability of eventually losing $n is 1/2^n.
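The convergence of f(n) to that least fixed point is easy to watch numerically; a small sketch (the iteration count of 200 is an arbitrary choice of mine):

```python
def T(x):
    """One unfolding of the recursion: lose immediately (probability 1/3),
    or first win a dollar (probability 2/3) and then lose a dollar twice."""
    return 1 / 3 + 2 / 3 * x * x

f = 0.0  # f(0) = 0: no means of losing a dollar has rank less than 0
for _ in range(200):
    f = T(f)
print(f)  # converges to 0.5, the least fixed point (the other fixed point is 1)
```

Each iteration shrinks the distance to 1/2 by roughly a factor of 2/3 (since T'(1/2) = 2/3), so 200 iterations is far more than enough for floating-point precision.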
If the aforementioned least fixed point result is not obvious, here it is (the observation that launches domain theory):
Lemma: if T is an order-preserving continuous operator on a space with a least element 0 and containing limits of all increasing sequences, then
A) 0 <= T(0) <= T(T(0)) <= …
B) The limit L of this sequence is a fixed point of T
C) Every other fixed point of T (indeed, every other solution to x >= T(x)) is >= L
Proof: A) is from T being order-preserving and 0 being a least element. B) is because the limit of 0, T(0), T(T(0)), … is the same as the limit of T(0), T(T(0)), T(T(T(0))), …, and the latter sequence is T applied to the former sequence, and T is continuous. C) is by noting that 0 <= x leads to T[sup]n[/sup](0) <= T[sup]n[/sup](x) <= x, with L being the supremum of the left-hand side.
Q.E.D.
I don’t see from your post where you’re not with me now… What do we disagree on?
(Note that my next words after the excerpt you quoted were “But that’s not how going bust works at all:”)
Important clarifying words inserted in bold.
If I’ve mis-parsed, apologies. But you spoke of eventually getting a particular coin flip as having probability 1. So I thought that, in my fair-coin example, you’d say we certainly will get at least one Tails, and that contradicts what I’m saying.
I’d say the probability tends to 1 with the number of tries, which is 1 in the everyday sense (Hey, does that mean .999… and 1 are the same value? Let’s start a thread!). But in terms of thought experiments about infinity, things aren’t so simple as the probability of any given result set is zero.
I think you’re heading off on the wrong tangent here. You don’t have to consider the total number of flips as infinite. I think that’s begging the question in fact.
Obviously, the series has a beginning - you haven’t been flipping coins forever. So the question is whether the series has to have a finite ending or can extend on into infinity.
And I would say the answer is that the series does have to have a finite ending. At some point in the series, even with a biased coin, the total number of tails will equal the total number of heads plus one hundred. And when you reach that point, your money equals zero and the series ends.
Re: Mijin’s last post:
Oh. Well, I do say that the probability of getting a head eventually is 1 (on the standard interpretation of this sort of thing), but I do not say that you are certain to get a head eventually. As I said in my first post, there is a difference between “happens with probability 1” and “is guaranteed to happen”. Is that what you were worried about?
Mathematicians do standardly say the probability of eventually getting a head is 1, though, rather than 1 - some infinitesimal. Is that something you’d take issue with? I’m open to exploring different ways of formalizing this sort of thing, but it’s worth being aware of the standard language around it.
Not necessarily. You could just keep flipping heads all the time. Or other such things. So many other such things that you are very unlikely to ever hit an end, in fact. The cumulative probability of all the ways of hitting an end only amounts to 1/2^100.
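That 1/2^100 figure scales with the bankroll: starting with $b, the chance of ever going bust is (1/2)^b, which is easy to sanity-check by simulation for a small b. A sketch (the trial count, the "safe" cutoff, and the seed are arbitrary choices of mine; once the bankroll reaches the cutoff, the remaining ruin chance is below (1/2)^40 and is treated as survival):

```python
import random

def estimate_ruin(bankroll=3, trials=100_000, safe=40, seed=1):
    """Monte Carlo estimate of the probability of ever going broke.

    Each flip wins $1 with probability 2/3 and loses $1 with probability 1/3.
    Once the bankroll reaches `safe` dollars, the chance of ever coming back
    down to zero is below (1/2)**safe, so the trial is counted as survival.
    """
    rng = random.Random(seed)
    busts = 0
    for _ in range(trials):
        money = bankroll
        while 0 < money < safe:
            money += 1 if rng.random() < 2 / 3 else -1
        if money == 0:
            busts += 1
    return busts / trials

print(estimate_ruin())  # should land close to (1/2)**3 = 0.125
```

For a $3 bankroll the prediction is 1/8, and the estimate comes out close to that; for $100 the prediction is that 1/2^100 figure, far too small to see by simulation.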
Indistinguishable, I think we’re saying the same thing, so I take back my exception.
No, and indeed the problem isn’t with that notation, because as we’ve discussed at length on the Dope, 1 = .999…, therefore 1 = 1 + some infinitesimal.
The problem is with any kind of hypothetical involving infinite tries at something.
Infinity is such a weird concept that lots of things get thrown out of whack, not the least of which is that we have to make a distinction between certain and almost certain, even though both have probability 1.
Where does this meme come from, and why do people keep repeating it as if it answered anything? Certainly, there are number systems that don’t feature a quantity called ‘infinity’, but there are examples that do. Likewise, there are number systems that don’t feature a ‘2’, but some do; nevertheless, I don’t see anybody arguing that 2 isn’t a number because it doesn’t exist in Boolean arithmetic.
This whole ‘infinity is not a number’ idea somehow seems to imply that you can’t meaningfully reason about infinity, perhaps as declared by some authority. But that quite simply isn’t the case. Many questions involving infinity can be answered coherently, as this thread shows, for example. Certainly, reasoning about infinity may get counterintuitive, and, depending on one’s predilections, maybe even paradoxical, but that doesn’t imply that there are no meaningful mathematical theories involving infinity.
But every thread touching the subject will involve at least one poster trying to erect some sort of barrier to further questioning that reads ‘Halt! Infinity is not a number. Proceed no further.’, as if then everybody is expected to see the error of their ways and retreat from the path that has led them astray into the forbidden realm of the infinite.
That looks very reasonable to me. What’s the rest of this thread about?
Because infinity refuses to obey all the nice laws that define the real numbers.
Neither does the imaginary unit i, but people do find uses for that.
You’re of course correct to point out that infinity is not a real number, or a natural number, but as I said in my previous post, neither are those the only possible number systems, nor does the fact that infinity is ‘not a number’ under some definition of number actually help clear up any questions. I mean, let’s say we’d agree to consider as ‘number’ only what follows some certain set of axioms—say the Peano axioms for natural number arithmetic. Then, infinity wouldn’t be a number. But what does telling this to somebody who has a question regarding infinity actually achieve?
Certainly, infinity is often used fallaciously in arguments, and people tangle themselves up in knots about it. But if that’s the case, then it would be better to actually point out the error in someone’s logic—merely pointing out that ‘infinity is not a number’ doesn’t accomplish this, unless somebody were to insist that infinity actually was a number, or use it in a way numbers can be used, but infinity can’t—that is, in the concrete example, applying some Peano axiom to it that doesn’t hold. But even then, it seems that it would accomplish more to just point that out—if someone’s argument hinges on the fact that every natural number has a successor, and applied to infinity yields nonsense, then one can show that infinity doesn’t in fact have a successor, and that thus the argument fails.
Well, one does all that the first few times. But there comes a point when one resorts to the shorthand, “Infinity is not a number,” because typing is boring, especially when one feels one’s typing isn’t even being read.
The chance that you will lose your $100 on the first 100 tosses is non-zero, but still a calculable number. The chance that you will have more than $100 after 100 tosses is very high, though, and from then on you need to lose more than 100 in a row, which pushes your chances of going bust much closer to zero. As the game progresses toward an infinite number of tosses, the odds of going bust progress toward an infinitely small number. The odds in your favor on each toss mean that your progress toward zero risk will be faster than your progress toward infinite plays. So the longer you play, the lower the risk of loss, and after infinite plays the risk becomes zero.
After playing an infinite number of times, your winnings will be infinite, which will take an infinite number of consecutive losses to bust you. And this just generates the paradox of infinities, in which there are a large number of infinities, which are not equal to each other, and there are two different races going on, in opposite directions, toward imbalanced infinities, and you will always be pulling ahead in your race.
Even in a finite number of tosses, your risk of loss keeps diminishing, with the expanding number of consecutive losses required to bust you.
Perhaps what makes this problem counterintuitive is the fact that you’re not doing the same thing an infinite number of times. You’re doing a bunch of different things (probably an infinite number of different things) a finite number of times each. For instance, the first action you take is “Bet $1 on a coin flip out of a bankroll of $100”. The second action you take is guaranteed to not be the same as that first one: It’s either “Bet $1 on a coin flip out of a bankroll of $99” or “Bet $1 on a coin flip out of a bankroll of $101”. These are different actions, because they have different probabilities associated with them: You’re a bit more likely to go broke from $99, and a bit less likely to go broke from $101.
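That state-by-state view can be made exact with the (1/2)^b ruin probability derived earlier in the thread: each extra dollar of bankroll halves the chance of ever going broke. A minimal sketch:

```python
def ruin_probability(bankroll: int) -> float:
    """Probability of ever going broke from a given bankroll,
    using the (1/2)**b result derived earlier in the thread
    (each flip wins with probability 2/3)."""
    return 0.5 ** bankroll

# "A bit more likely from $99, a bit less likely from $101" is,
# exactly: twice as likely from $99 as from $100, and four times
# as likely from $99 as from $101.
print(ruin_probability(99) / ruin_probability(100))  # → 2.0
print(ruin_probability(99) / ruin_probability(101))  # → 4.0
```

These ratios are exact because the ruin probabilities are exact powers of two.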
The part I bolded is I think the key to a conceptual resolution. There is no paradox here. The apparent paradox is that we tend to think in terms of an infinite sequence containing all possible patterns, like the fact that an arbitrarily long expansion of π (if it’s pseudo-random to infinity) must at some point contain some preposterously improbable pattern like a string of ASCII values that spell out the entire works of Shakespeare. But the diverging infinities here say that it’s not a fixed pattern that must occur for the player to go bust, but one that becomes longer as the game continues, and eventually infinitely long.
Infinities need not be daunting; integral calculus has no trouble summing an infinite number of infinitesimal rectangles. The probability that someone playing this game indefinitely (or to infinity) will go bust is neither 0 nor 1 but precisely the probability one would compute based on the coin-toss odds and starting balance.
This isn’t a priori clear to me. It’s true given the form of the final answer, but I don’t think that you can just state it without justification.
I’m not sure where Indistinguishable was coming from either; perhaps they meant P(x straight losses) = ½[sup]x[/sup], but I didn’t want to say in case I was being stupid.