Challenging Probability Question

Cabbage,

You say that as the number of plays increases, the ratio of winnings of Game 1 and Game 2 will not converge. You also say that we can say nothing about the comparative rates of convergence to infinity of the average winnings of each game. Does this mean that after, say, a large finite number of plays of the games, we can say nothing about which game’s total winnings will likely be greater (because the average winnings of each will be bouncing around unpredictably toward infinity)?


If so (and here I move from probability to decision theory), is there a reason to prefer one play of Game 2 to one play of Game 1? [If not, is there a reason to prefer one play of Game 2 to Game 3, where Game 3 pays the actual outcome of Game 1 plus a dollar? (Game 3 obviously must be preferred to Game 1.)]

Yeah, I would say there is a reason to play Game 2 over Game 1. Over the course of many games, the probability that Game 2 will give a larger overall payout is greater than the probability that Game 1 will (simply because it does have double the payout, after all). Over this course, the actual results won't follow that probability exactly, and there still certainly exists the possibility, however unlikely it becomes in the long run, that Game 1's payout will be larger than Game 2's.

Overall, I guess what it amounts to is this: The ratios won't converge. However, the ratios will probably have a tendency to stay within a certain range (for example, the ratio may tend to stay between 1/4 and 3/4, but not actually converge to anything). It would be difficult to actually give a possible range for the successive values of the ratio, and the corresponding probability that the ratio will remain in that interval for a given length of time, but I think it could probably be done. I'm not going to try to do it, though. :)
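A minimal simulation sketch of that bouncing ratio (my own illustration, not anything proved above): assume a single play flips a fair coin until the first tail, pays 3^n for Game 1 when that takes n flips, and pays twice that for Game 2, with the two games played independently. The payoff rule is my reading of the SUM(3^n) vs. SUM(2*3^n) discussion later in this thread.

#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <iostream>
using namespace std;

// One play: flip a fair coin until the first tail. Reaching n flips has
// probability 1/2^n, and the payoff is then 3^n (an assumed rule).
double play_game1() {
    int n = 1;
    while (rand() % 2 == 0)   // heads: keep flipping
        n++;
    return pow(3.0, n);
}

int main() {
    srand((unsigned)time(0));   // rand() is crude; see the caveats later in the thread
    double total1 = 0.0, total2 = 0.0;
    for (long i = 1; i <= 1000000; i++) {
        total1 += play_game1();
        total2 += 2.0 * play_game1();   // Game 2: independent play, double payoff
        if (i % 100000 == 0)
            cout << i << " plays, ratio = " << total1 / total2 << endl;
    }
    return 0;
}

On a typical run the printed ratio sits near some value for long stretches and then lurches when one game hits a long streak of heads, which is exactly the non-convergence being described.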



At last I have something modestly useful to say…

The programs posted above for empirical testing of this problem will work well enough for small numbers of flips, usually about 30,000 depending on your compiler and the initial seed you use. The trouble is that the random number generator built into most programming tools is pretty sad. We can't get truly random numbers from a computer, so the best we can hope for is a pseudo-random sequence. The easiest way to get such a sequence is a linear congruential generator (see Knuth, Seminumerical Algorithms, for the gory details), and all such generators produce a sequence which eventually repeats. Most compilers use a signed 16-bit int for results, hence the 30,000 limit. We can do a little better without undue exertion by using an unsigned long (32 bits in Visual C++; your compiler may vary) and constants cribbed from Numerical Recipes in C. The period is still only about 1.7 million, but if we're doing runs of a few hundred thousand flips that's plenty.

The sample code below has a better_rand() function and a test of the period length for better_rand() and rand(). If you compile this with Visual C++ you'll notice that if you seed rand() with 55 you'll get a period just over 3,000…

Of course a better simulation still misses the point -- if you're free to play either game forever, your winnings per game will keep increasing as you play. Still, I can't pass up the opportunity to spread the word about bad random number generators.

#include <stdlib.h>
#include <iostream>
using namespace std;

// Constants for a linear congruential generator, cribbed from
// Numerical Recipes in C; the period is RAND_M = 1,771,875.
const unsigned long RAND_M = 1771875;
const unsigned long RAND_A = 2416;
const unsigned long RAND_C = 374441;

unsigned long better_rand_current;

void better_seed(unsigned long seed) {
    better_rand_current = seed;
}

// Linear congruential generator: x' = (A*x + C) mod M.
unsigned long better_rand() {
    better_rand_current = (RAND_A * better_rand_current + RAND_C) % RAND_M;
    return better_rand_current;
}

int main() {
    unsigned int ct;
    int base;

    // Count calls to rand() until its first output value recurs.
    srand(55);
    ct = 1;
    base = rand();
    while (base != rand())
        ct++;
    cout << ct << endl;
    cout << RAND_MAX << endl;

    // Same test for better_rand().
    unsigned long better_base;
    better_seed(1);
    better_base = better_rand();
    ct = 1;
    while (better_base != better_rand())
        ct++;
    cout << ct << endl;
    return 0;
}

Splat,

Very interesting. That explains, I take it, why it is that when I run these coin-flipping experiments a million times on my computer, the outcomes tend to be more consistent than probability theory would predict.

If you can’t trust your computer to act unpredictably, who can you trust?

Tony

You can trust your computer to act unpredictably. It does it whenever you don’t want it to!

Yes, I agree (and apologize if my wording was off) that, given two divergent infinite series, one cannot conclude one way or the other whether the ratio of their sums is defined. I wanted to make the point that merely "cancelling" the factor (3^n) is NOT the way to determine the ratio.

Cabbage, thanks for the insights. You’ve explained very well why, in simulating a small finite number of tries (like a million), you would get results that are fairly consistent … you probably haven’t run enough tries to get a string of 100 heads, for instance.

On random number generation, that’s also a good point, that computer simulation may give limited insight.

Computer simulation can give a good indication, but it doesn’t PROVE anything.

Yes, it is the way. It's called the limit test: you take the limit, as n goes to infinity, of one infinite series divided by another, and you can get a well-defined limit even if neither series converges on its own. If the limit exists, then the ratio of the two series converges. This is elementary calculus.

Although it is true that the limit test will not work for ANY two series, it will work for these two.

No matter how you rewrite it you won't get anything but 2:1. If you rewrite 2 as 1 + 1 you have just doubled the number of terms. You're making the false assumption that taking the first n terms of each series and dividing one by the other will give you the same result as using the sums of the series.

Splat: The reason the computer simulations are getting different answers is that the error increases as you increase n. If you want to get the same answers with a finite number of terms then you'd have to calculate the sum of the probability times the payoff. That's just (1/2^n) times the series.

Konrad: Would you tell me the limit of SUM[(3/2)^n], summed from n = 1, as the number of terms tends to infinity?

jcg sez:

Need I say more?

Konrad said:

Since you’re urging me to take the limit of an infinite series, yeah, you need to say quite a bit more about what that limit is and what this ‘limit test’ of yours comprises.

I’m assuming you’re thinking of L’Hospital’s rule, where

lim f(x)/g(x) = lim f’(x)/g’(x).

My source, Mathematics from the Birth of Numbers by Jan Gullberg (which I recommend highly, by the way, even though it makes no mention of the 'limit test'), says:
'After differentiation the quotient sometimes still appears in indeterminate form. The rule is then applied -- in a repetitive manner -- to the quotients of the derivatives.'

So lim (3^n)/[2(3^n)] =
lim (3^n ln 3)/[2(3^n ln 3)].

This is the same form as the original we wanted to reduce, so it's still indeterminate; and on the assumption that the limit of the ratio exists, it can't be found by this method. I say the ratio of the payoffs for the two games is indeterminate. If you say otherwise, I'm sure you'll post again. Next time I'd appreciate content over attitude.

Konrad:

I’m not sure what it is you’re referring to as the “limit test”, but I do not believe it means what you think it means.



jcg sez:

First of all, the limit test applies to the quotient of two series, not to the limit of one series, which is what you first asked about, and that is why I had to quote my own post.

Second of all, the limit test applies to the quotients of two series and not to the quotient of two functions like L’Hôpital’s rule does. You can’t apply L’Hôpital’s rule to series in the way you are.

Cabbage sez:

Let's take a random elementary calculus book here… Calculus of a Single Variable by Larson.

Page 577 under the heading of “Limit Comparison Test” in chapter 8 “Infinite Series”.

“Suppose that An > 0, Bn > 0 and
lim(An/Bn) = L as n -> infinity
where L is finite and positive.
Then the two series SUM(An) and SUM(Bn) either both converge or both diverge.”

Conversely, it is obvious that it must be possible for the quotient of two divergent series to have a finite limit.

Technical note: To be perfectly accurate, this proof works under the assumption that the limit of SUM(An)/SUM(Bn), both taken from 0 to n, as n -> infinity equals lim(An/Bn). In this case it is obvious that it is so, but if you don't believe me, here is the proof:

CKDextHavn's objection was that 2 could be rewritten as 1 + 1, thus changing the rate of divergence of the series. This is not so, because when dealing with series we always use limits. The limit of
SUM(2) from 0 to n
= lim SUM(1 + 1) from 0 to n
= lim SUM(1) from 0 to 2n+1 (<= this 2n+1 is the critical part)

As n goes to infinity, SUM(1) from 0 to 2n+1 will diverge just as quickly as SUM(2) from 0 to n.

You can't just plug infinity into n and assume that SUM(3^n) = SUM(2*(3^n)) as CK did.

Therefore once the factor of 2 is removed each series diverges at an equal rate no matter how you rewrite them. Therefore the ratio of their sums is just the ratio of their terms.

<< You can't just plug infinity into n and assume that SUM(3^n) = SUM(2*(3^n)) as CK did. >>

Sorry, Konrad, but “just plugging infinity into n” is EXACTLY what I was saying NOT (repeat, NOT) to do.

The point about rewriting 1 + 1 + 1 … is NOT that it diverges any faster or slower by being rewritten, but that it DIVERGES. And that the "ratio" of (1 + 1 + 1 + …)/(2 + 2 + 2 + …) is undefined. That ratio is NOT 1/2; that ratio is NOT defined, just as 0/0 is undefined; and you cannot simply "cancel" the infinite divergent sums.

Now, the Limit Theorem says that if LIM(An/Bn) = L as n --> infinity, then SUM(An) and SUM(Bn) either both converge or both diverge.

It is certainly possible for SUM(An) and SUM(Bn) both to diverge and yet for the quotient to converge… but that does not mean that ALL such quotients do. ((And that's not the converse of the statement; the converse is that if one of SUM(An) or SUM(Bn) diverges, then lim(An/Bn) must diverge.))

An example where the quotient of two divergent sequences produces a convergent series would be An = a^n (say, for a > 1) and Bn = n!. The series SUM(a^n/n!) is convergent, but both a^n and n! are divergent sequences.
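A quick numerical check of that a^n/n! example (a sketch; the closed form SUM from n = 1 of a^n/n! = e^a - 1 is the standard exponential series):

#include <math.h>
#include <iostream>
using namespace std;

int main() {
    // Partial sums of SUM(a^n / n!) for a = 2: both a^n and n! diverge,
    // but their quotient shrinks fast enough that the series converges.
    const double a = 2.0;
    double term = 1.0;   // holds a^n / n! after the update below
    double sum = 0.0;
    for (int n = 1; n <= 30; n++) {
        term *= a / n;   // a^n/n! = (a^(n-1)/(n-1)!) * (a/n)
        sum += term;
        if (n % 5 == 0)
            cout << "n = " << n << "  partial sum = " << sum << endl;
    }
    cout << "e^a - 1 = " << exp(a) - 1.0 << endl;   // ~6.389 for a = 2
    return 0;
}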

Also please note, and this is the key: there is a difference between SUM(An/Bn) and SUM(An)/SUM(Bn). We are dealing in this example with SUM(An)/SUM(Bn), which is the quotient of the sums. We are NOT here dealing with SUM(An/Bn), which is the sum of the quotients.
The two are not equivalent, except in trivial cases.

Konrad: Thanks for elaborating.

You say in describing the limit comparison test:

I don’t get your point. Let’s say your application of the test is correct. So the ‘series SUM(An) and SUM(Bn) either both converge or both diverge’.
But that’s not in question: we agree they’re divergent.

You say

I'm not using series. The function for the value of the nth partial sum of the Game 1 payoff is f(n) = 3[(3/2)^n - 1]. For Game 2, g(n) is twice that. The limit of their ratio as n approaches infinity is indeterminate, and repeated application of L'Hospital's rule yields repeatedly indeterminate results. So the ratio of the payoffs is indeterminate.
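For reference, that partial-sum formula is just the finite geometric series (taking the nth term of the Game 1 expectation to be (3/2)^n, as in the posts above):

$$
f(n) = \sum_{i=1}^{n}\left(\frac{3}{2}\right)^{i}
     = \frac{3}{2}\cdot\frac{(3/2)^{n}-1}{(3/2)-1}
     = 3\left[\left(\frac{3}{2}\right)^{n}-1\right],
$$

which grows without bound as n goes to infinity, with g(n) = 2f(n).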

No, it's not undefined. The ratio of SUM(1)/SUM(2) from 0 to n is 1/2 no matter how you rewrite it. You can rewrite SUM(1)/SUM(2) from 0 to n as SUM(1/2) from 0 to n and then take the limit as n goes to infinity. This is perfectly "legal" arithmetic.

You said:

I said:

Notice the word possible in there. I was just saying that, since it's possible, that contradicts your statement that the ratio of the two doesn't have a limit just because they are divergent. What you said was:

According to my book, infinity over infinity can be defined. In fact, there are plenty of ways that it can be defined. For example, the ratio convergence test for infinite series relies on the fact that infinity over infinity can be greater or less than 1.

Again, the limit of 0/0 can be defined. L’Hôpital’s rule is just one way of doing this.

Yes, that's why I added my little technical note. Like I said, it's perfectly legal to rewrite [SUM(A) from 0 to n]/[SUM(B) from 0 to n] as
SUM(A/B) from 0 to n.
Your trick of rewriting 2 as 1 + 1 would just change the limits of the sum.

Again, if you don't believe me, look up the ratio test. You take A(n+1)/A(n) as n goes to infinity. In 90% of cases you'll end up with something like 3n/5n. By your logic this test would never work, because both 3n and 5n are infinity and therefore the ratio would always be undefined.
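For reference, here is what the ratio test computes for the series under discussion; this is standard calculus, nothing new to the thread, and the limit of the term ratio exists even though numerator and denominator separately grow without bound:

$$
\lim_{n\to\infty}\frac{a_{n+1}}{a_{n}}
= \lim_{n\to\infty}\frac{(3/2)^{n+1}}{(3/2)^{n}}
= \frac{3}{2} > 1,
$$

so SUM[(3/2)^n] diverges by the ratio test.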

Konrad:

You're using the limit comparison test wrong; it says nothing about the ratio of the sums of two infinite series.

The technical note is wrong. Take these series:

  1. SUM (1/n^2) = (pi^2)/6
  2. SUM (1/n^4) = (pi^4)/90

Take the ratio of the second over the first. The technical note claims the ratio of these infinite series is (1/n^4)/(1/n^2) = 0 as n goes to infinity (the ratio of corresponding terms). I think it's obvious that the ratio of the infinite series is actually [(pi^4)/90]/[(pi^2)/6].
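A quick numerical check of this counterexample (a sketch; pi^2/6 and pi^4/90 are the standard values of these sums, and their ratio simplifies to pi^2/15):

#include <math.h>
#include <iostream>
using namespace std;

int main() {
    // Ratio of partial sums of SUM(1/n^4) over SUM(1/n^2), versus the
    // ratio of corresponding terms, (1/n^4)/(1/n^2) = 1/n^2 -> 0.
    const double pi = 3.14159265358979;
    double s2 = 0.0, s4 = 0.0;
    for (int n = 1; n <= 1000000; n++) {
        s2 += 1.0 / ((double)n * n);
        s4 += 1.0 / ((double)n * n * n * n);
    }
    cout << "ratio of partial sums: " << s4 / s2 << endl;       // ~0.658
    cout << "closed form (pi^2/15): " << pi * pi / 15.0 << endl;
    return 0;
}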



Ignore that part. SUM(A)/SUM(B) doesn’t equal SUM(A/B). Don’t know why I wrote that.

Here’s an even easier proof that it converges to 1/2:

SUM(2*(3^n)) = 2*SUM(3^n)

SUM(3^n)/[2*SUM(3^n)] = [SUM(3^n)/SUM(3^n)]/2
= 1/2

There. I don’t think it’s possible to get it any simpler than that.

And the technical note is right. I didn’t say it applies to any series just the series we’re working on.

But the sum is infinity; infinity is not a real number, and you can’t just say that inf/inf = 1. And, specifically, the two sums are “independent” of each other, one corresponds to game 1, the other to game 2.

In other words, the fact that the expected payout is infinite means that, if you play either game over and over, the average winnings will increase without bound. But we don't know how fast it will increase. (I suppose you could also find an expected rate, but, still, that wouldn't be concrete, just a tendency over many repetitions.) The point being that one game's average winnings may increase faster than another's, and not at a constant rate, either. As I said before, if you look at the ratio of

(SUM i=1 to n) (3/2)^i, and
(SUM j=1 to n+1) 2*(3/2)^j,

take the limit as n goes to infinity, and try to calculate the ratio the way we did previously, we don't get 1/2. But both are still the exact same sums we had before; it all comes down to how they're represented. (Sure, for each finite n, there's an extra term in the second sum; that simply corresponds to the fact that, on a given series of trials, the average winnings of Game 2 may increase quicker than those of Game 1, and I don't mean just double.) Different representations give different limits; therefore there's no limit for the ratio of these infinite series.
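A small numeric illustration of that reindexing point (a sketch, using the closed-form partial sum f(n) = 3[(3/2)^n - 1] from earlier in the thread): the matched-index ratio is exactly 1/2 at every n, while the shifted version tends to 1/3.

#include <math.h>
#include <iostream>
using namespace std;

// Closed form of the partial sum SUM_{i=1..n} (3/2)^i.
double f(int n) {
    return 3.0 * (pow(1.5, n) - 1.0);
}

int main() {
    for (int n = 10; n <= 60; n += 10) {
        double matched = f(n) / (2.0 * f(n));       // same upper index: 1/2
        double shifted = f(n) / (2.0 * f(n + 1));   // one extra term below
        cout << "n = " << n << "  matched = " << matched
             << "  shifted = " << shifted << endl;  // shifted -> 1/3
    }
    return 0;
}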

A more intuitive idea: I feel pretty confident that if you modelled this on any computer, regardless of memory or speed, eventually it would “seem” that the ratios are converging (I don’t think it would necessarily be to 1/2, though).

I think there will probably be arbitrarily long periods when the ratio becomes “relatively constant”, so to speak. After each of those long periods, though, there will be another game which screws up the “convergence” and kicks the ratio out to some other value. It may seem to settle down at .5 for a while; then along comes a huge payoff, changing the ratio to .6, where it may seem to settle down for another long while.

I think this will happen because after several plays, the winnings of each game will be so huge that the normal 2-toss, 3-toss, 10-toss, whatever, will do virtually nothing to change the ratio. The winnings for an individual game are boundless, though. Every time the ratio seems to settle down, the total winnings are still finite. So one game with a really huge payoff is all it takes to change the ratio and eliminate convergence. And we're playing the games over and over without end, so this will happen again and again.



I already said that it doesn’t matter if the games are independent of each other. We’re not calculating the results of a specific game but the probability of payout for each game.

We have SUM(probability of a result)(payoff for the result)

Since the probability of getting to n flips for each game is the same, the ratio just reduces to SUM(payoff for Game 1 at n flips)/SUM(payoff for Game 2 at n flips), from 0 to infinity.
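Written out explicitly (again assuming n flips has probability 1/2^n and Game 1 then pays 3^n, with Game 2 paying double):

$$
E_1 = \sum_{n=1}^{\infty}\frac{1}{2^{n}}\cdot 3^{n}
    = \sum_{n=1}^{\infty}\left(\frac{3}{2}\right)^{n} = \infty,
\qquad
E_2 = \sum_{n=1}^{\infty}\frac{1}{2^{n}}\cdot 2\cdot 3^{n} = 2E_1,
$$

so both expectations diverge, and the dispute is over whether their ratio can then meaningfully be called 1/2.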

And will you stop saying infinity over infinity is undefined? I already gave examples (like the ratio convergence test) where it isn't, so unless you can prove that calculus is wrong I don't see how you can say that.