Challenging Probability Question

All of Cabbage’s posts are 594, you say? Well, now that you mention it . . . (I’m new to these parts myself).

What I was referring to was this:

Now, I entirely grasp the point that the ratio will not converge (even if I don’t entirely grasp the math; I’m just starting to read up on it). But even if the ratios don’t converge, there is no reason that something could not be said about the limit of the average ratio, or the likely upper bound of the ratio as the number of plays goes to infinity. I don’t think that you would deny this, Zut, but your earlier statement that there is no advantage to playing Game 2 seemed to imply you did.

Now, CDK has just stated,

“Tony, in terms of your latest question, whether we can’t even tell whether Game(1) or Game(2) is better… the answer is in my last post. In a strict mathematical sense, no, we can’t. The expected values are unbounded/infinite for both games.”

Again, this seems to imply that probability theory (that branch of math intended to model our intuitions about expected outcomes in the presence of uncertainty) can’t say anything in favor of Game 2. And again, I don’t think CDK intends to say this much.

The intended interest of my original question was that even where the expected utilities of two lotteries are equivalent (the same order of infinity), we could meaningfully enquire about the relationship of their actual payoffs (in the long run), and perhaps say something about the ratio of their payoffs. (At the beginning, I was of course not sure that there would be no convergence.) Even if these ratios don’t converge, I assume that the preference we intuitively have for Game 2 will predictably be manifested over repeated plays of the game.

Tony


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant

I apologize unreservedly. The above comment was totally uncalled for. I’d edit it out, but I believe that moderators should not edit their words, even the very stupid ones.

It is not your fault that our software is inadequate, and it is not your fault that the NASDAQ is cratering.

I will get to each and every multipost this evening, and return you to your thread.

Heh. The comment didn’t go through. Well, suffice it to say that it was wholly inappropriate and fully deserved the above apology.

I will try to post this only once, relying on Manhatt to eliminate the duplicate posts. I tried to do some of that, but the response time was too slow (at five minutes per deletion… well…)

You can create a probability distribution, as follows:
GAME(1):
Probability of payoff of $3 = 50%
Probability of payoff of $9 = 25%
Probability of payoff of $27 = 12.5%
Probability of payoff of $81 = 6.25%
Probability of payoff of $243 = 3.125%
Probability of payoff of $729 = 1.5625%
Probability of payoff of $2187 = 0.78125%
Etc etc

Thus, you can see that there is a better than 98% likelihood that the payoff is $2187 or less.

Similarly for Game 2, with a better than 98% likelihood that the payoff is $4374 or less.
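These figures are easy to check with a couple of lines of Python (a quick sketch; the payoffs and probabilities are just the ones tabulated above):

```python
# Game 1 pays 3**n with probability (1/2)**n (first heads on flip n).
probs = [(3 ** n, 0.5 ** n) for n in range(1, 8)]   # up through $2187

cumulative = sum(p for _, p in probs)
print(cumulative)  # 0.9921875, i.e. P(payoff <= $2187) is better than 98%
```

Game 2 just doubles each payoff, so the same 0.9921875 applies to $4374 or less.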

So, what would be a “fair” charge for you to play? Well, if you played 100 times (assuming that your flips came out exactly according to the theoretical distribution), you would probably have won around $4826. So, a “fair” price would be around $48 per play. A good deal. However (and this is the point of the paradox) … you could have had that wonderful string of 10 tails that would have won you $59,000 or so.
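If I’m reading the arithmetic right, the $4826 figure comes from truncating the expected value at 7 flips (the better-than-98% case from the table); a sketch, where the truncation point is my guess:

```python
# Expected value per play, truncated at n = 7 flips:
# each term is (payoff)*(probability) = (3**n)*(0.5**n) = 1.5**n
per_play = sum(1.5 ** n for n in range(1, 8))
print(round(per_play, 2))     # 48.26 -- the "fair" price of about $48
print(round(100 * per_play))  # 4826 over 100 plays
```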

Well… maybe I would deny it. Depends on what you mean. I don’t think you can say anything about the “average ratio” because, just like the game itself, the average ratio is skewed by individual low-probability, high-payoff events. (Now, I’ll admit that this is about the edge of my knowledge/intuition/experience, so I’m willing to be wrong here.) However, I think you could say something about the median ratio, which I’d bet is about 1:2.

I would say that if you ran a large number of trials, Game 2 will outperform Game 1 most of the time. You can (I think) even calculate the probabilities of how often this would happen. However, you cannot say by how much. You cannot even say that the total amount won in Game 2 is more than in Game 1.
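This is the kind of claim a quick Monte Carlo can illustrate (a rough sketch only; the seed, number of trials, and plays per trial are my own choices):

```python
import random

def flips(rng):
    """Flips until the first head (geometric with p = 1/2)."""
    n = 1
    while rng.random() < 0.5:  # tails; flip again
        n += 1
    return n

rng = random.Random(0)
trials, plays, wins2 = 500, 200, 0
for _ in range(trials):
    total1 = sum(3 ** flips(rng) for _ in range(plays))       # Game 1
    total2 = sum(2 * 3 ** flips(rng) for _ in range(plays))   # Game 2
    if total2 > total1:
        wins2 += 1
print(wins2 / trials)  # well over half the trials go to Game 2
```

The totals themselves bounce around wildly (one lucky run of tails swamps everything), which is exactly why nothing nice can be said about by how much Game 2 wins.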

OK, let me explicitly state what I’m calculating. We have two players; they each go out and play one game to completion. One player plays the first game and the other plays the second.

What is the ratio of the expected winnings of player 1 to player 2, for one complete game?

That, I believe, is what we were all talking about from the beginning.

I’m still waiting for a response to my post pointing out the inconsistencies when you flip a coin twice per turn.

Also, I think I’ve established that you can have two things that tend to infinity yet have a defined ratio. (Just making sure we all agree here, right? Limit test, remember?) So if it is possible, you have to prove why it isn’t so in this case. The burden of proof is on you to show why I can’t use regular algebra. It’s one thing to say 3*SUM(x) doesn’t equal SUM(3x), but if you say that then you have to give a reason why.

Again, if you respond to nothing else in this post tell me why regular algebra doesn’t work in this particular case. And don’t just say because they’re divergent, I’ve already shown that something which goes to infinity/infinity can work with regular algebra. You can say 2(1 + 1) = 2*2 and I can object that you can’t do that. But that’s not a proof. I can’t just say “You can’t do that” and have a proof.

[QUOTE]
Originally posted by zut:
The ratio of the payout of these two games is, as was correctly pointed out before, undefined, specifically because the payout is an infinite, unbounded sum. I think everyone is in agreement that the expected payoffs for the two games are:

Payoff[1] = SUM[n=1 to infinity][(3/2)^n]
Payoff[2] = SUM[n=1 to infinity][2*(3/2)^n]

which, of course, assumes that there are NO time or money limits; i.e., it is technically possible that a game could last for one flip, or ten flips, or a googol flips.

Now, consider the following:
P[2]/P[1] = S[1-inf][2*(3/2)^n]/S[1-inf][(3/2)^n]
where I’ve used some notational abbreviation that is hopefully obvious. I think everyone agrees with this statement. The problem comes in when you do this:

P[2]/P[1] = S[1-inf][2*(3/2)^n]/S[1-inf][(3/2)^n] = 2*S[1-inf][(3/2)^n]/S[1-inf][(3/2)^n] = 2

However, I can also say (pay attention here!) that

P[2] = S[1-inf][2*(3/2)^n] = S[1-inf][(3^n/(2^(n-1))] = S[1-inf][3*(3/2)^(n-1)] = S[0-inf][3*(3/2)^n] = 3 + S[1-inf][3*(3/2)^n]

so now
P[2]/P[1] = (3+S[1-inf][3*(3/2)^n])/S[1-inf][(3/2)^n] = 3/S[1-inf][(3/2)^n] + 3*S[1-inf][(3/2)^n]/S[1-inf][(3/2)^n] = 0+3

So, since 2=P[2]/P[1]=3, then 2=3. But 2 does not equal 3, so there must be something wrong here. The fallacy is the assumption that ratios of infinite, diverging sums are defined. QED.
[/QUOTE]
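To see the two manipulations pulling apart numerically, compare the partial sums paired the two ways described above (just a sketch of the same algebra with finite cutoffs):

```python
def G(N):
    """Partial sum of (3/2)**n for n = 1..N (the Game 1 payoff sum)."""
    return sum(1.5 ** n for n in range(1, N + 1))

for N in (10, 20, 40):
    direct = sum(2 * 1.5 ** n for n in range(1, N + 1)) / G(N)           # first pairing
    reindexed = (3 + sum(3 * 1.5 ** n for n in range(1, N + 1))) / G(N)  # second pairing
    print(N, direct, reindexed)
# 'direct' is exactly 2 at every cutoff, while 'reindexed' heads toward 3:
# two perfectly reasonable pairings, two different answers.
```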

Zut,

I’m not convinced by this reasoning (even though I am strongly inclined to the conclusion that the ratio of average winnings will not converge). The problem with your reasoning is that P[2]/P[1] is the ratio of the expected winnings on a single play. I’m willing to agree that’s undefined. The issue, however, is not what is the ratio of the expected winnings; it’s what is the expected ratio of the actual winnings. Perhaps these are the same, but I don’t think you have proved it.

By the way, CDK, the real point of the StP paradox is not merely that the value of the lottery is greater than any finite number (because you can get an unlimited string of tails); it’s that, although the value of the lottery is greater than any finite value, the actual return will always be less than expected, i.e., a finite amount. Computing expected utility in the usual way (the sum of the possible returns discounted by their probabilities) will always result, paradoxically, in defeated expectations.

Tony


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant

Konrad:

In the variation you mentioned of flipping the coin twice per turn, the expected payout is still infinite: (SUM over n) (3^n)*[3^(n-1)/4^n]. We’ll still have the same problem.

I realize you can easily rig up a game where the expected payout will be finite, if that is what you were trying to do, so, for the sake of argument, I’ll assume that you did.

That would be a BIG difference though. One way to interpret it is this way:

  1. If the expected payout is finite, the average payoffs will tend to converge as more games are played.

  2. If the expected payout is infinite (what we’ve been looking at), the average payoffs will tend to increase without bound as more games are played.

Earlier, December noted:

Konrad countered:

…but that’s precisely an example of where it’s not true!

lim (x/2x) = 1/2, but

(lim x)/(lim 2x) is undefined, (inf/inf).

An important thing to note here is that the former (the limit of a quotient) is easy to deal with, while the latter (quotient of limits) is not. The mistake you seem to be making is that you take the latter, think (mistakenly) that it’s an indeterminate form, and proceed to rewrite it as the former, use algebra, and find a limit which never existed to begin with.

An indeterminate form, for example, would be lim [f(x)/g(x)], where both f and g go to infinity. This is not the same as [lim f(x)]/[lim g(x)]. The top is infinite; the bottom is infinite. In this case, we’re not comparing “rates of increase” or anything, just attempting to divide infinity by infinity, period. It’s undefined.

In the problem we’ve been dealing with, we’re trying to make sense of the ratio of the two expected values, both of which diverge to infinity. In other words, we’re trying to take the quotient of the limits, which, just as in the above example, is undefined.

Read a calculus text. You can break up a quotient of limits into the limit of the quotients only if all limits exist. (“Exist” implying they must be finite, as well).

To top it all off, even if we try to make sense of it by rewriting it as a limit of quotients (which, really, we can’t do to begin with, but, for the sake of argument, I’m willing to go along with here), we can get different values (2, 3, …) as pointed out before, by algebraic manipulation of the sums.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Konrad:

I’ve reread some of your posts and think that I finally see the approach you’re taking to the problem:

I take your last sentence to mean: After playing one game of each, calculate the expected ratio of the game 1 payout to the game 2 payout. Is that a fair interpretation?

OK, let’s try to calculate the expected ratio. To find the expected ratio, we need to take a possible ratio, multiply it by the probability that it actually occurs, and then sum over ALL possible ratios, correct?
Say we get m flips for Game 1, n flips for Game 2. The ratio for this possible payout is:

(3^m)/[2*3^n].

The probability this will happen is:

(1/2^m)*(1/2^n).

The expected ratio is therefore:

(SUM over m)(SUM over n) {(1/2^m)(1/2^n)} * {(3^m)/[2*3^n]}

However, this diverges to infinity. To see this, fix n to be 1, and sum over m. It should be clear that this is smaller than the expected ratio. But this is a geometric series with common ratio 3/2, so it diverges.

Therefore, the expected ratio diverges. It doesn’t exist.
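Truncating both indices at the same cutoff M shows the divergence concretely (a sketch; the M values are arbitrary):

```python
def expected_ratio(M):
    """Truncated double sum: weight (1/2**m)(1/2**n), ratio (3**m)/(2*3**n)."""
    return sum(
        (0.5 ** m) * (0.5 ** n) * (3 ** m) / (2 * 3 ** n)
        for m in range(1, M + 1)
        for n in range(1, M + 1)
    )

for M in (10, 20, 30):
    print(M, expected_ratio(M))  # grows without bound as M increases
```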


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

OK I think I see the problem. You’re still thinking of independent games.

But we’re not talking about lim(x)/lim(2x). We are talking about lim(x/2x).

We are not comparing the total amount of money you would win in one game or the other if you played to every possible value of n. We are comparing the expected payoff ratio for one game.

Don’t know why you’re doing that… The obvious formula for calculating it would be:

SUM[(probability of a result)*(payoff for that result in Game 1)]
/
SUM[(probability of a result)*(payoff for that result in Game 2)]

where both sums go from 0 to n as n goes to infinity.

Since the probability of getting to n flips is the same for each game, we can just ignore that part. Basically it comes down to SUM(payoff for Game 1)/SUM(payoff for Game 2), from 0 to n as n goes to infinity.

Now, the reason I can say that we’re summing both from 0 to n, instead of summing them independently, is that I’m not just talking about any two random games here. I’m talking about the probable ratio, which is found by that first formula. It’s because those two series are both summed over the same n that I can say they have a ratio.

It’s like one person travelling at 1 meter per second and another at 2 meters per second. An infinite number of seconds later, the ratio of their distances is

SUM(1)/SUM(2) from 0 to t as t goes to infinity. Sure, both sums are divergent, but since both sums are linked to the same t, they have a ratio. One guy obviously went twice as far as the other.
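Here’s a sketch of exactly what I mean: truncate both payoff sums at the same n, and the ratio is the same at every cutoff:

```python
def coupled_ratio(N):
    """Ratio of the two payoff sums when both are truncated at the same N."""
    game1 = sum((0.5 ** n) * (3 ** n) for n in range(1, N + 1))
    game2 = sum((0.5 ** n) * (2 * 3 ** n) for n in range(1, N + 1))
    return game1 / game2

print([coupled_ratio(N) for N in (1, 5, 50)])  # 0.5 at every cutoff
```

Whether tying the two sums to the same n is legitimate is, of course, exactly what’s in dispute.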

Oh yeah and the flipping the coin twice per turn thing:

The expected payoff for one game would be:
SUM(1/4^n)(3^n)= SUM((3/4)^n) = 1/(1-3/4) = 4

Likewise for the second game the expected payoff would be 8.

Here the 2:1 ratio is obvious. You may say that I’ve changed the game so that the payoff converges now. Yes, but I haven’t changed the ratio of the payoffs. I’ve made the game harder, but each player’s winnings relative to the other guy’s haven’t changed.

First, for the game of flipping the coin twice per turn: The game was you flip the coin twice each time, and don’t win until two heads come up, correct?

Then, for a given n, the probability of winning 3^n is:

(1/4)*(3/4)^(n-1).

You have to lose n-1 times (3/4 chance each), then win the nth time (1/4 chance). The expected value of the payout is as I said it was earlier: a divergent sum. However, as I also said before, this is a minor point. I do realize you could come up with a game with a finite expected payout, but it’s irrelevant; the whole point here is that the expected payout is divergent.
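Just to make the divergence concrete (a quick sketch using the probability above):

```python
def term(n):
    """nth term of the expected payout: payoff 3**n times prob (1/4)*(3/4)**(n-1)."""
    return (3 ** n) * 0.25 * 0.75 ** (n - 1)

print([term(n) for n in (1, 2, 3, 10)])
# each term is 9/4 times the previous one, so the terms themselves blow up
# and the sum can't possibly converge
```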

I said:

Konrad said:

I’m treating the ratio as a random variable. Do you know what the definition of the expected value of a random variable is? It’s

(SUM over ALL possible values) (Value)*(Probability of that value)

It may seem intuitively obvious to you to use only the ratios that are 1/2 (or whatever it is you’re doing), but how can you possibly justify doing that? I’m merely using the definition; do you have any justification that I shouldn’t be?


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Two comments on a thread that has gone on much longer than I anticipated. (Longest thread of the month, at least, I think.)

  1. Konrad writes, “It’s like one person travelling at 1 meter per second and another at 2 meters per second.”

Excuse me, but my original problem is so obviously unlike this proffered analogy that it barely seems useful to say why. But: in Konrad’s example, at every point in time, it is certain what the ratio of the distances will be; in the “real” problem, the ratio of winnings will be jumping all over the place. Some convincing arguments have been advanced for why the ratio won’t settle down. (Look back at some of Cabbage’s posts, around post 40 in the thread.)

Second (addressed primarily to Zut), I greatly doubt that the ratio of the expected returns of each game (where “expected return” is defined as the sum of all the possible returns discounted by their associated probability) has anything to do with the answer to the problem. Zut (and perhaps others) seems to think that because this ratio is the ratio of infinity/infinity and so is undefined, the expected ratio of total winnings, as the number of plays increase, is also undefined. But consider this example:

Game 1: Payoff 3^n, if n flips of fair coin.
Game 2: Payoff (3^100n!)!, if n flips of fair coin.

Here too, the ratio of expected returns is infinity/infinity, but I strongly suspect that (the total winnings of Game 1 after p plays)/(the total winnings of Game 2 after p plays) will, as p increases, converge to zero (and not be undefined).

Tony


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant

Konrad: << Again, if you respond to nothing else in this post tell me why regular algebra doesn’t work in this particular case. >>

Regular algebra does work, but you’ve misstated it.

Let’s start with the definition of the infinite sum: SUM(An) as n -> infinity = lim(n -> inf)[PARTIAL SUM(j = 1 to n)(Aj)]

Let’s use limn to mean the limit as n -> inf, and let’s use PSn(Aj) to mean the partial sum of the terms Aj, with j = 1 to n. So the infinite sum SUM(An) = lim(n -> inf)[PSn(Aj)] … damn the inability to write normal math notation!

OK, Konrad, when you assert SUM(2An) = 2SUM(An), you are asserting that lim(2PSn(Aj)) = 2lim(PSn(Aj))… and this is only true (epsilon-delta proof) IF THE LIMITS EXIST.

OOPS! That’s what I get for telling my mother how to work the TV/cable remote while I’m typing math stuff.

Please adjust the last paragraph of my comment above. It’s not the SUM(2An) = 2SUM(An) that’s the problem. The problem is that the limit of the quotient (or product) is not necessarily the quotient (or product) of the limits.

Easy example: Take n–>infinity, then look at:
lim(n) = infinity, unbounded, undefined;
lim(1/n) = 0;
But lim(n*1/n) = 1.

Normal algebra doesn’t always work. It doesn’t work when you are dividing by zero (for instance); and it doesn’t work when you are dealing with infinite quantities and treating them as if they were real numbers.

Well, I don’t have much time to respond now ’cause it’s exam time. I’ll get back to this after I’m done with my semester, but my main point is that you guys are still talking about two unrelated games here. When we’re talking about the expected ratio, we take into account every possible game, so the series are not separate in the first place. It’s exactly like that problem with two people walking at different speeds.

For when you get back:

No, it’s not, because their speeds are constant. That’s not at all analogous to what we have here.

To set up the runners’ problem the same as the original problem, we would need it this way:

We have two runners, A and B.

A flips a coin until it comes up heads (for the first time, on the nth flip), then advances 3^n yards.

B flips a different coin until it comes up heads (for the first time, on the mth flip), then advances 2*3^m yards.
There’s a big difference between this and having the runners run at constant rates.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

If the crux of the matter is the question of whether the games are independent or not, then I guess we’re in agreement.

IF the games are dependent (for example, one coin is tossed until it comes up heads, and then a payout based on that coin is given to the Player of Game 1 and the Player of Game 2), then the ratio is clearly 1/2.

IF the games are independent (one coin is tossed for the Player of Game 1 and ANOTHER coin is tossed for the Player of Game 2), then there is no defined ratio. I think the OP has made clear that this is the situation he meant.

And, yes, Tony, thanks for clarifying what I meant to say. The point of the paradox is NOT that either game has an unbounded expected value; it’s not hard to drum up a game with infinite expected value. The paradox is that, despite this infinite expected value, “common sense” says that you wouldn’t pay $3000 per round to play this game.

And, just in case there’s confusion, I am not saying that lim(An)/lim(Bn) is NEVER equal to lim(An/Bn), I am saying that you can’t take that for granted.

My example of multiplication was incorrect. SUM(2An) = 2SUM(An) isn’t the problem; it’s dividing SUM(An)/SUM(An) that’s the problem. Like dividing by zero, dividing infinity by infinity is undefined and can lead to paradox.

And now this thread is finally done, and I’ll expect Konrad’s apology for saying that I was pre-Riemannian. I did have a class under Zygmund, back in the 60s, but it was the 1960s.

Konrad, you say:

OK, granted. Your contention that there is a 1:2 ratio of expected winnings is certainly true if you’re talking about payoffs based on the same coin flips, which is what (I think) you’re talking about. However, the OP clarified,

(The emphasis is tony’s.) So most of the rest of us are talking about independent games, which, in my opinion, is more interesting to discuss anyway.

And, speaking of tony, he also says (to me),

Ummm… I think I see the disconnect. You’re saying, “the expected winnings from game one is infinity; the expected winnings from game two is infinity; the ratio of infinity to infinity is undefined, and thus ratio of the expected winnings is undefined, and I can accept that. However, since any two specific games, once played, clearly have a winnings ratio, it is not clear that the expected ratio does not converge.”

Is this a fair paraphrase of your argument? If so, I’d argue that the ratio does not converge, because any one individual game has the potential to generate an astronomical payoff (which is why the expected winnings are infinite). This astronomical payoff would result in a huge ratio, which would then severely skew the average ratio.

However, although I think you can’t say anything about the average ratio, you could say something about the distribution of ratios. For example, by symmetry, the median expected ratio (meaning, over time, an equal number of the ratios are above and below this number) is 2:1. With some work, you could find the range within which, say, 99% or 99.9999% of the ratios fall. It’s just the outliers that screw up the average.
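That median claim is cheap to probe by simulation (a sketch; the seed and sample size are arbitrary, and I’m writing the ratio as Game 1’s payoff over Game 2’s):

```python
import random
from statistics import median

def flips(rng):
    """Flips until the first head (geometric with p = 1/2)."""
    n = 1
    while rng.random() < 0.5:
        n += 1
    return n

rng = random.Random(1)
# Independent games: a separate flip sequence for each player.
ratios = [3 ** flips(rng) / (2 * 3 ** flips(rng)) for _ in range(100001)]
print(median(ratios))  # 0.5 -- the 1:2 median ratio
```

The mean of those same ratios, by contrast, bounces around from run to run; that’s the outlier problem.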

You also want me to…

Nope; undefined. You’re confusing a really big number (in this case, (3^100n!)!) with infinity. In your original problem, the ratio is undefined because infinity over infinity is undefined; having a payoff of (3^100n!)! doesn’t make it a “bigger” infinity. However, the median ratio will certainly be much smaller in this second case (not zero, but something pretty darn small), as will the 99.999th percentile ratio and so forth.

Okay, so I’m new to this board, but I’ve read almost every post so far, and the discussion seems to have gotten caught up in series, sums, etc., and moved a bit off the probability angle. So here’s an explanation without using sums (except in the background of the theory).

(In this, a game represents a specific method of play, i.e. flip until heads comes up. A match will be a series of games. )
It should be clear that, from a probability standpoint, the two games are exactly the same (we’re not talking about payoff yet): you flip until you get heads.

The expectation (defined, at any rate) is the mean value of the probability distribution (1); see below for more. It represents essentially what happens in an infinite match of games, and is useful for studying, among other things, the likely payoff. In this case, the probability distribution is well known and is called the geometric distribution, defined as:

Waiting time T until first success of an event in Bernoulli trials (2) with probability p.

The expected value or mean of this distribution is 1/p. In other words, for this particular game, you’ll have an expectation of 2 flips until you win. This isn’t particularly important, but I thought you might like to know that this much of the game is a known distribution. This is just to explain that much, and to make it clear that the two games, if played infinitely without a particular payoff, will have to end up exactly the same.
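The 1/p = 2 figure is easy to verify straight from the definition of expectation (a sketch; truncating the sum at 200 terms, which is plenty):

```python
# E[T] = sum over n of n * P(T = n), with P(T = n) = (1/2)**n for p = 1/2
expected_flips = sum(n * 0.5 ** n for n in range(1, 200))
print(round(expected_flips, 10))  # 2.0 flips on average
```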

Now to go to the payoff. It makes absolutely no difference to the final payoff whether you pay off one game at a time or just wait until the match is over and figure the payoff. So for both of these, if you play a match of finite length, you’ll get some set of numbers at the end representing winning n, and pay off. If you used the same match numbers, you’d of course get twice the payoff using Game 2. But if the numbers are different (different matches), you can’t expect to get the same payoff, since you’ll have a different set of numbers.

Imagine a giant board (in fact it’s infinite) that has every single integer on it. Let’s say you put a marker on the board every time you play a game and reach that number. So after 10,000 games, you’d have a lot of markers near the bottom, and a few out over the other numbers. And if you have Board 1 and I have Board 2, they won’t necessarily look the same.
Now let’s keep playing (forever, in fact). It should be clear that as this happens, my board and your board will start to look pretty similar; in fact, at infinite time, they must be equal (3). All the spaces will be covered by markers, and the ones that are more covered will be the same. (If you aren’t seeing this, then review your concepts of infinity and probability distributions.)

So if we pay off at infinite time (essentially what the OP was asking, since paying off after time and paying off as time passes are the same), we see that Board 2 will have twice the payoff of Board 1. Please follow up with any questions, but I have to admit I’m not that great at explaining this further.

panama jack

(1) A probability distribution is the way some random variable X behaves over some range. It can be discrete or continuous, and has a value equal to the probability that X = some particular x in the range given. Ranges of infinite extent are often considered. The normal distribution, or bell curve, is the most well-known probability distribution (for good reason, too, since, by the central limit theorem, sums of many independent random variables approach it).

(2) This is mostly trivial, but Bernoulli trials are independent, and each must satisfy p + q = 1, where q is the probability that the event does not occur.

(3) If not, we can throw probability theory mostly out the window. The meaning of that statement, though, is that a probability distribution should work any time you use it, and represents essentially an infinite number of events. There are in fact some who argue that we have to be careful about this, but it’s more philosophical. As an example of the problem, consider a weatherman’s prediction that there’s a 20% chance of rain tomorrow. Does this really have a concrete distribution? After all, there’s only one day that will ever be June 4, 2048, so it’s difficult to say that this 20% represents a long-term or infinite number of trials of June 4, 2048.

A few more notes :

Let’s look at the boards again. Imagine that instead of putting a marker on the number, we put a payoff on the number (like a casino game). It’s clear that the larger numbers will have huge sums stacked on them even after a single win, while the low numbers will rack it up slowly. Still, one win at a large number will skew the running total by quite a lot. This is why computer simulation likely won’t work, and partly why some people think it doesn’t converge. The thing is, it does, but only at infinity. (It’s not really like a lot of things that do become bounded as time goes on.)