coin tossing to infinity

Another question about the nature of infinity here (sorry). As I understand it, if you keep on doing something for an infinite length of time, then any possible outcome will occur, no matter how ridiculously improbable that outcome is. The following problem occurred to me the other day and I have not yet decided whether this applies.

Imagine that you start with the number 1000. You then flip a coin, heads you add 2 to the number, tails you subtract 1. My question is this. If you flip the coin an infinite number of times will the number ever be reduced to 0? Assume the coin is perfectly balanced.

Part of me thinks that the number will never be reduced to zero. Although the coin flipping goes on to infinity, the probability of reaching zero decreases with each head flipped, and what I know about statistics tells me that the numbers of heads and tails should balance out the longer the coins are flipped, so the number should increase by an average of 1 for every two coin tosses.

However, at each flip of the coin it remains possible for the subsequent flips to take the number back to zero, so presumably if you toss an infinite number of coins this will eventually happen.

I’d personally conclude that zero would be reached, but it does seem slightly counterintuitive. Are there any mathematicians on the board who could contribute a more informed opinion?
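A quick way to get a feel for the drift (my own sketch, not anything from the thread) is to just run the walk for a while:

```python
import random

rng = random.Random(42)  # fixed seed so the run is repeatable
n = 1000
for toss in range(1, 10_001):
    n += 2 if rng.random() < 0.5 else -1  # heads +2, tails -1
    if n == 0:
        print("reached zero at toss", toss)
        break
else:
    print("after 10,000 tosses the marker sits at", n)
```

On a typical run the marker climbs by roughly +1 per two tosses, so after 10,000 tosses it tends to sit somewhere in the thousands above its start.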

IANAM but I have wrestled with this issue myself and have even tried to get some firm conclusion on the matter, without success to this point. So I share your interest in it.

I think there are two “rules” or “laws” in effect that seem to conflict with each other. One is the “Law of Large Numbers” which supports the idea that in an infinite number of tosses, as you say, any sequence is possible, including the one that takes 1000 to 0. To complete the thought, though, one would have to appeal as well to the “Laws of Probability” in which the odds for any individual toss are 50/50 for heads (or tails). That would suggest an equally likely run of tosses that could go to 2000, or even -1000.

IIRC this oddity is also called “The Gambler’s Dilemma.”

I hope a more knowledgeable Doper can help clear this up.

I don’t think the number will approach zero as a limit. The ratio of heads to tails will approach 1, but the difference between heads and tails will continually increase. The probability is 1/2 that the number of heads will be greater, and likewise for tails.

The gambler’s dilemma, as I recall it, is that even in a fair game, one gambler is certain to go broke. The gambler with the most money going in will wind up with all of it most of the time.
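For what it’s worth, the classic fair-game result is that a player holding a of the combined a+b bankroll ends up with everything with probability a/(a+b). A quick simulation sketch (the bankrolls 9 and 3 are my arbitrary choices):

```python
import random

def a_wins_all(a, b, rng):
    """Fair ±1 bets until one player is broke; True if A ends up with everything."""
    total = a + b
    while 0 < a < total:
        a += 1 if rng.random() < 0.5 else -1
    return a == total

rng = random.Random(7)
trials = 4000
share = sum(a_wins_all(9, 3, rng) for _ in range(trials)) / trials
print(share)  # should land near a/(a+b) = 9/12 = 0.75
```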

You can work it out by looking in the chapter about random walks with absorbing barriers.

You’re describing what is known as a “random walk”. (For more, you might also want to search on “Markov chain”.) Yes, there is the possibility the number will be reduced to 0. However, it is improbable.

For a Java applet demonstration, see here:
http://stp.clarku.edu/simulations/one-dimensional-walk.html

For a simple explanation of a 2D random walk, see here:
http://scifun.chem.wisc.edu/WOP/RandomWalk.html

Kramer

Your question basically amounts to asking “will it ever happen that I get tails 1000 times in a row?”

The answer is yes, it is possible. The higher the number you start with, the less likely it is you will get it down to zero, but it could happen.

Try it yourself and see if you can get five tails in a row. It can happen, but it may take a lot of trying. Now imagine a run many times that length, and things look pretty sad. But even that, out of an infinite number of attempts, is possible.
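To put a number on “a lot of trying”: the standard result is that the expected wait for a run of r tails with a fair coin is 2^(r+1) − 2 flips, which a quick sketch (my code) bears out for r = 5:

```python
import random

def flips_until_tail_run(run_len, rng):
    """Flip until we first see `run_len` tails in a row; return the flip count."""
    streak = flips = 0
    while streak < run_len:
        flips += 1
        streak = streak + 1 if rng.random() < 0.5 else 0
    return flips

rng = random.Random(3)
waits = [flips_until_tail_run(5, rng) for _ in range(2000)]
avg = sum(waits) / len(waits)
print(avg)  # theory: 2**6 - 2 = 62 flips on average
```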

As jkramer points out, what you have described is known as a “random walk.”


Since it’s kinda hard to do something for an infinite length of time, perhaps a better way to ask the question is as follows:

If I do something long enough, can I make the probability of some outcome occurring arbitrarily close to 1?

And the answer to this question is “no.”

In the case of the random walk you describe, in theory one could calculate the probability P of getting home in X coin flips or less. As X approaches infinity, P does not approach 1.
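That calculation is doable by dynamic programming over positions. A sketch (the starting value 6 and the horizons are my choices to keep it fast; the OP’s 1000 works the same way, just with far tinier probabilities):

```python
def prob_home_within(start, flips):
    """P(the walk started at `start` hits 0 within `flips` tosses); heads +2, tails -1."""
    dist = {start: 1.0}  # dist[k] = P(currently at k, never having hit 0)
    hit = 0.0
    for _ in range(flips):
        new = {}
        for k, p in dist.items():
            for step in (2, -1):
                j = k + step
                if j == 0:
                    hit += p / 2  # absorbed at home; stop tracking this mass
                else:
                    new[j] = new.get(j, 0.0) + p / 2
        dist = new
    return hit

for x in (10, 100, 400):
    print(x, prob_home_within(6, x))  # grows with x but levels off well below 1
```

The printed values increase with the horizon but plateau far short of 1, which is exactly the point being made here.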

By contrast, consider the probability P of getting at least one “head” in X coin flips or less. As X approaches infinity, P does approach 1. So it’s reasonable to say that if you flip a (fair) coin long enough, you’re bound to get heads sooner or later.
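The contrast is easy to see numerically (a trivial sketch):

```python
def p_at_least_one_head(x):
    """P(at least one head in x fair flips) = 1 - (1/2)**x."""
    return 1 - 0.5 ** x

for x in (1, 10, 50):
    print(x, p_at_least_one_head(x))  # 0.5, then 0.9990234375, then nearly 1.0
```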

I’d like to amend my answer slightly. The reason for this amendment is simple. I think my original answer was wrong.

The marker starts out at 1000 and you move two spaces right, or larger, for heads and one space left, or smaller, for tails. You are just as likely to go left as right, but when you go right you go twice as far. In an infinite number of tosses, runs of any length of all heads or all tails are equally likely. So I think that the number can become zero, and the probability that it will do so is 1/3.

There is a non-zero probability that you’ll hit 0 eventually, but it’s not 1.

The way to approach this problem is through a recurrence relation. p[sub]k[/sub], the probability that you’re at k at some point, is equal to p[sub]k+1[/sub]/2 + p[sub]k-1[/sub]/2. p[sub]1000[/sub] = 1, and we want to know p[sub]0[/sub].

The characteristic polynomial for this recurrence is x[sup]3[/sup] - 2x[sup]2[/sup] + 1, which has roots 1, φ, and -1/φ.

So, p[sub]k[/sub] = α + βφ[sup]k[/sup] + γ(-1/φ)[sup]k[/sup] for some constants α, β, and γ.

I’ll let the other dopers check my work while I try to figure out what those constants might be. I wouldn’t be surprised if it works out that p[sub]0[/sub] is 1/3.
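Those roots are easy to check numerically, since x[sup]3[/sup] - 2x[sup]2[/sup] + 1 factors as (x - 1)(x[sup]2[/sup] - x - 1). A quick sketch:

```python
def p(x):
    """The characteristic polynomial from the post above."""
    return x**3 - 2 * x**2 + 1

phi = (1 + 5 ** 0.5) / 2  # the golden ratio, about 1.618
for root in (1.0, phi, -1 / phi):
    print(root, p(root))  # each value is zero up to floating-point noise
```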

I’m sorry David, but I’m pretty sure that can’t be correct. Think about it this way: how would your analysis have changed if the OP specified that the starting number was 10? According to your analysis, the answer is still 1/3, but I think we would all agree that you would have a much better chance of getting to zero if you started at 10 than if you started at 1000.

Mort is almost correct, but missed an important point. It’s not quite true that you are looking for the probability of getting 1000 tails in a row. You’re looking for the probability that in N tosses of the coin you have gotten H heads and 1000+2*H tails, for the first time. All you need is to get to zero…it doesn’t matter whether you get there directly from the starting place.

lucwarm has pretty much got it. I started actually calculating probabilities, but it got ugly real fast and I don’t have the time to finish…sorry.

Other people have also said that this problem is very similar to the “Gambler’s Dilemma” or “Gambler’s Ruin” problem. The classic result here is that even if you are playing a fair game (prob of winning = 1/2) with an opponent that has an infinite bank, you will go broke with probability 1. See, for example, example 1.3.3 in the book “Markov Chains” by J.R. Norris. It is interesting that the problem is very similar in 2 dimensions (random walks on a plane), but it changes dramatically in 3-D. Try looking up “recurrence” as a property of Markov chains (e.g. in section 1.6 of the book mentioned above) if you are interested.

The fact that we’re allowed infinite time makes me suspect that it really is 1/3. Remember, in a truly random infinite binary sequence, you expect to see runs of arbitrary length. A run of tails long enough to get you back to 0 is to be expected, though there’s no bound on how long you might have to wait for it.

Besides, you’re trying to appeal to intuition in the solution of a mathematical problem. We all know how far that goes.

The trouble is that as time goes by you tend to get more and more hopelessly far from the goal. So by the time you get a decent run, it probably won’t help you.


Sometimes it helps, sometimes it doesn’t. But as somebody pointed out, the odds of ever making it home from 1000 are probably a lot less than if you start out at 10. I doubt it’s anywhere near 1/3.

Ultrafilter, you’ve got me thinking. I’m not sure I follow your recurrence relation - can you elaborate a little? This is how I tried to set it up at first, but I couldn’t figure out some of the details. For instance, for any k<=0, p_k=0 (since k=0 is an absorbing state). Also, I understand that p_k = 1/2 (p_(k-2) + p_(k+1)), but I don’t follow what you wrote with the normalizing factor of p_1000 only multiplying one term. Can you explain?

Also, how are you doing subscripts? I tried the <sub> tags, but they just appeared as the tags themselves when I previewed.

Without working it out, I think the probability of ever getting to 0 is tiny. Not zero, but tiny. Think about it. After a certain number of tosses the number of tails has to be 1000 plus twice the number of heads. This cannot happen until there have been at least 1000 tosses. But as the number of tosses goes to infinity, then with probability 1, the number of tails less the number of heads is less than the square root of the number of tosses (IIRC), and their ratio approaches 1.

The probability of doing it in exactly 1000 tosses is 2^{-1000}. It can’t happen in 1001 or 1002 tosses, while the chance of its happening in 1003 is 1003*2^{-1003}. After 1006, it is C(1006,2)*2^{-1006}, and so on. I think I might be able to estimate the resultant sum.

I won’t go into the details, but I compute that the sum of that series is less than 2^{-999}, which is roughly 10^{-300}.

In an infinite number of trials there is no difference between 10 and 1000.

Well, the whole concept that if you keep doing something for an infinite amount of time, improbable things will happen is probably a bit flawed in the first place, but especially in this example.

In a different thought experiment from the OP, let’s suppose you do the same thing, but you only throw your coin 1000 times. Then you see if you have reached 0 or not. Then you start over, with the marker at 1000, and throw another 1000 coins, etc. Now you have an infinite number of trials, with an infinitesimal chance of success in each trial. To be specific, the chance of success in a trial is (0.5)^1000, or 9.3*10^-302. So the chance of failure in one trial is 1-0.5^1000, and the chance of failure after N trials is (1-[0.5^1000])^N. Now, that number does tend to zero as N tends to infinity, so yes, if we did this forever, chances are that on one of the trials our marker would get to zero. I’ll leave it to someone else to figure out how many trials would be needed to approach, say, 50/50 odds of at least one success :)
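The trial count left open here works out from solving (1 − p)^N = 1/2 for N, i.e. N = ln(1/2)/ln(1 − p). A sketch with a toy version of my own (restarting from 10 rather than 1000, so p = 0.5^10 per trial; the real p above makes N astronomically large):

```python
import math

def trials_for_even_odds(p):
    """Smallest N with (1 - p)**N <= 1/2, i.e. a >= 50% chance of one success."""
    return math.ceil(math.log(0.5) / math.log(1 - p))

# Toy version: restart from 10, so a trial succeeds only on 10 straight tails.
p = 0.5 ** 10
print(trials_for_even_odds(p))  # 710 trials for even odds
```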

Now, back to the OP: here we only have one trial, but it goes on forever. And the chance of success decreases with time. Suppose the first 1000 tosses yield a perfectly believable 563 heads and 437 tails. Our marker is now at 1000 + (2*563) - 437, or 1689. So to get to zero in one run, we now have to throw 1689 tails in a row, which is even less likely than the initial 1000-tail run. I’m not sure how to calculate this one, but it’s possible the decreasing chance of success outweighs the total number of tosses thrown, and the probability of not reaching 0 never starts trending towards 0.

Interesting analysis. And I’m not at all hung up on mine since there have been rare occasions when I was wrong, wrong, wrong.

However, in the OP as stated we do not reset the marker to 1000 after each 1000 tries. We start from wherever it is.

For example, in the case of starting from 10 it is just as likely that we will wind up at 30 after the first 10 throws as that we will wind up at 0, and vice versa.

I am so very much not a math guru but I would like to ask something here.

Are we talking about ONE infinite series of coin tosses, or an infinite number of coin toss sets?

If you look at the latter, it seems obvious that eventually, one of the coin tossing sets will reach zero.

The more interesting question is the former: if you are only allowed one single set of tosses–even if you can toss an infinite amount of times–can you reach zero? The problem being that the higher the number goes, the less chance there will be to return to zero.

I like your analysis the most so far. I think you’re close, but slightly off. The probability of it happening in 1003 tosses is 1003 × 2[sup]-1003[/sup], because there are 1003 chances to get the single head. However, if that head happens on toss #1001 or 1002 or 1003, you’ve already counted this combination, because you’ve already reached 0 once. Therefore, I think the second term in the sum should actually be 1000 × 2[sup]-1003[/sup]. Similarly the third term should be C(1003, 2) × 2[sup]-1006[/sup]. And so on. The total, actual probability sum would be then:

2[sup]-1000[/sup] × (1 + 1000 / 2[sup]3[/sup] + C(1003, 2) / 2[sup]6[/sup] + C(1006, 3) / 2[sup]9[/sup] + … )
= 2[sup]-1000[/sup] SUM from x = 0 to infinity (C(997 + 3x, x) / 2[sup]3x[/sup])

This is a bit much for my calculator, but you can generalize it for any initial value R:

2[sup]-R[/sup] SUM from x = 0 to infinity (C(R - 3 + 3x, x) / 2[sup]3x[/sup])

(I checked this with the ratio test, and it converges for all R.) For R = 6, it seemed to be converging to a value around 0.0691.
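The R = 6 case is easy to check by machine instead of calculator (my sketch; 100 terms is plenty, since the term ratio settles near 27/32):

```python
from math import comb

def partial_sum(R, terms=100):
    """Partial sum of 2**-R * sum over x of C(R - 3 + 3x, x) / 2**(3x)."""
    return 2.0 ** -R * sum(comb(R - 3 + 3 * x, x) / 8.0 ** x for x in range(terms))

print(partial_sum(6))  # settles around 0.069, matching the estimate above
```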

Disagree. Let P(x,y) be the probability of ever making it to x if you start at y. You are claiming that P(0,10) = P(0,1000).

But it seems to me that P(0,1000) = P(0,10) * P (10,1000).

So P(10,1000) = 1 if what you say is true.

By shifting coordinates 10 spaces, this means that P(0,990) = 1.

And since P(0,10) >= P(0,990) >= P(0,1000) (because it can’t hurt to start closer to the goal),

you are, in effect, claiming that a return to home is certain from any starting point.
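A capped simulation (my sketch; the toss cap stands in for “ever”, and small starting values keep it fast) agrees that starting closer helps:

```python
import random

def hit_rate(start, trials=1500, cap=1500, seed=11):
    """Fraction of capped runs of the walk (heads +2, tails -1) that reach 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        n = start
        for _ in range(cap):
            n += 2 if rng.random() < 0.5 else -1
            if n == 0:
                hits += 1
                break
    return hits / trials

print(hit_rate(2), hit_rate(10))  # the closer start gets home far more often
```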