Challenging Probability Question

We’re talking about expected values and how, on average, the payoffs will be “close” to the expected values. Here, however, “close” is not good enough–we need exactness for this limit to exist, not just a likelihood of closeness, because of the huge payoffs.

No, I will continue to say infinity over infinity is undefined, as is 0/0, or any other “indeterminate form”. Calculus does not define infinity over infinity; calculus develops the notion that the ratio of two functions can have a limiting value as x goes to infinity. At no time is infinity divided by infinity.

For example, lim (x^2 + 1)/x^2 = 1 as x goes to infinity. Does that mean (inf^2 + 1)/inf^2 = 1? Of course not. It merely means that as x gets arbitrarily large, (x^2 + 1)/x^2 gets arbitrarily close to 1. No arithmetic using infinity is involved; it is still left undefined.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Cabbage and Chronos,

I've been following the discussion and finding it very edifying. I don't have a lot to add particularly, although I am glad that it seems to generally be returning to the probability issues initially raised by my question, rather than remaining stuck on calculus issues in the abstract. I hope your positions on these matters eventually "converge."

Cabbage, a few posts ago, offered a narrative argument for why there would be no convergence. I realize that you are not offering it as a proof, but to my mind it is no more than suggestive. Yes, eventually really huge sums will come up, but each one will only increase the respective total pots enormously, so that next time an even huger payoff is needed to knock the ratio off. I don't know whether you can expect these really huge sums to come up frequently enough to do this forever. Intuitively, it's not obvious to me.

The expected payoff of each game for a single play is infinite. Can you prove from this fact alone that the long-term ratios are undefined? It seems a tempting inference, because the ratio of the expected payoffs is undefined and the long run is just the manifestation of the expectations of the short run (if you know what I mean). Is it really so straightforward? Stated conversely, can you prove that there are no games with infinite expected utilities for a single play whose average winning ratio converges?

tony1234

Wow, tony, I didn't even realize I'd posted in this thread yet! :slight_smile: I do have something to add, though, that will hopefully throw a monkey wrench into a lot of arguments (including, perhaps, my own)-- Hey, we've got to keep this interesting, after all. Now, some hypotheticals: First off, suppose there are two tables in the casino, and at both of them, game 1 is being played (payoff is 3^n dollars). Which table is preferable? Clearly, since it's the exact same game at both tables, it doesn't matter.
Second hypothetical: Suppose we have one table, but with two players, player 1 and player 2. The same coin is flipped for both; payoff is 3^n for player 1, and 2*3^n for player 2. In other words, no matter what the payoff for a given play, player 2 ALWAYS wins twice as much as player 1. In this case, if you had the choice of being player 1 or player 2, which would you choose? I think that it's pretty obvious that player 2 would be the better choice, but I also rather suspect that this is the weak point in my argument.
OK, hypothetical number three: two tables; at each table there's one coin and two players, player 1 and 2. Same setup at each table as in the previous example: player 2 wins 2*3^n, player 1 only wins 3^n. Now, suppose you walk into the casino, and there's already a person registered as player 1 at the first table, and someone registered as player 2 at the second table. Again, you have a choice of playing as player 1 or player 2. If my reasoning is correct (no guarantees there), then you should choose to play at the table where you get to be player 2. But this is just like the original question, choosing from two independent possible games. Can anyone show me if/where the error in my reasoning is?

On a completely different note, as to how much a person should be willing to pay to play this game: I would not be willing to pay any more than $3, the minimum possible payout, for the simple reason that if someone’s offering me a game with a theoretically infinite payout, it’s got to be rigged :slight_smile:


“There are only two things that are infinite: The Universe, and human stupidity-- and I’m not sure about the Universe”
–A. Einstein

Whoops, Chronos, I’m getting our threads tangled! I meant to write “Cabbage and Konrad” in my earlier post. Sorry.

Now let me think about your hypos.

Tony

p.s. In the finite-universe thread I meant “isotropic,” not “isomorphic” obviously. Also, given your handle, you might want to take a look at my post on the metric-time thread.


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant

OK, I guess I'll try one more time. First, I claim that, over repeated playings of either game, there will be no limit to the max payoff on a given game. There will be a game that pays off 3^n (or 2*3^n, for Game 2) regardless of whether n is 100, 1000, a googolplex, whatever. The probability of this happening is 100%. It can be proven, but I don't want to make this overly long; basically, remember: we are playing an endless streak of games here, we ain't gonna stop playing. Ever.
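If anyone wants a quick feel for why the probability is 100%, here's a throwaway sketch (an illustration, not the proof): a single game pays 3^n or more with probability (1/2)^(n-1), so the chance of seeing at least one such payoff somewhere in the first k games is 1 - (1 - (1/2)^(n-1))^k, which climbs to 1 as k grows, no matter how large n is.

```python
# Rough numeric check (not a proof): chance of at least one payoff of
# 3^n dollars or more somewhere in the first k games.  A single game
# pays 3^n or more with probability (1/2)^(n-1) -- e.g. 1/4 for $27+.
n = 10                      # hunting for a payoff of 3^10 or bigger
p = 0.5 ** (n - 1)          # chance of that in any one game
for k in (10**3, 10**5, 10**7):
    print(k, 1 - (1 - p) ** k)   # climbs toward 1 as k grows
```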

We would expect that, out of 2^n flips, there will be exactly one payoff of 3^n dollars in Game 1 (2*3^n in Game 2, of course). It's likely that this will happen in one game before the other–I don't think this even requires proof. I'm claiming that, whichever game gets that payoff first, it's going to screw up the ratio so that it will never be able to settle down to 1/2. That's going to happen for n=1,2,3,4,…; in other words, it's always going to happen, it will never stop screwing up the ratio.

Here’s why it will screw up the ratio:

Consider the first 2^n flips. As I said, we would expect exactly one of those to have a 3^n dollar winner. The later that win happens, the less likely it is to screw up the ratio. (If it happens early, the ratio will really be lopsided in favor of one or the other. Later on, the total winnings of each game will be large, making a change in one of the total winnings not as significant to the ratio).

Now that that’s established, let’s assume that the 3^n win happens later–say exactly on the (2^n) + 1 flip for one of the games (but not both, since, as I’ve said, it’s unlikely the two games would match up that well). If it screws the ratio up here, it certainly would have screwed it up earlier.

Through the 2^n flips (with no 3^n winner or greater winner yet), what is the total expected winning for each game? Half the games are expected to win on the first flip, a quarter on the second, and so on. The expected winnings are:

(SUM i=1 to n-1) [2^(n-i)]*(3^i)

= 2*(3^n) - (2^n)*3

Of course, the corresponding expected winnings for game 2 will be exactly twice that.
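For anyone who wants to double-check that closed form, here's a tiny sanity check (just a throwaway sketch in Python):

```python
# Check the identity behind the expected winnings through 2^n games
# with no 3^n (or bigger) winner yet:
#   sum over i=1..n-1 of 2^(n-i) * 3^i  ==  2*3^n - 3*2^n
for n in range(2, 12):
    lhs = sum(2 ** (n - i) * 3 ** i for i in range(1, n))
    rhs = 2 * 3 ** n - 3 * 2 ** n
    print(n, lhs, rhs, lhs == rhs)
```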

OK, now for the big question: Somewhere along those 2^n flips, we will expect a 3^n (or greater) winner. How much will that knock this ratio off of 1/2?

If game 1 wins the big one first, the ratio will be:

(1/4) * [3^n - 2^n]/[3^(n-1) - 2^(n-1)]

which has limit 3/4 as n goes to infinity. (NOT 1/2).

If game 2 wins it first, the ratio will be:

[2*3^(n-1) - 2^n]/[6*3^(n-1) - 2^(n+1)]

which has limit 1/3 as n goes to infinity. (NOT 1/2)

So if Game 1 is the first to win 3^n (win the game on the nth flip), it screws up the ratio. If Game 2 is the first to win 2 * 3^n (win the game on the nth flip), it screws up the ratio. This will happen over and over again, for each value of n. It won’t converge. Each time a new big win comes (and it will come, 'cause we never stop playing the game), it screws up the ratio–screws up the convergence.
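For what it's worth, here's a rough simulation sketch (purely illustrative, not a proof) that plays the two games side by side on independent coins and prints the running ratio of the totals; the occasional monster payoff keeps yanking the ratio around instead of letting it settle at 1/2:

```python
import random

def play():
    """One game: flip until the first heads; payoff is 3^n, n = number of flips."""
    n = 1
    while random.random() < 0.5:       # tails, so flip again
        n += 1
    return 3 ** n

random.seed(1)                          # any seed; just makes the run repeatable
total1 = total2 = 0
for game in range(1, 1_000_001):
    total1 += play()                    # Game 1 pays 3^n
    total2 += 2 * play()                # Game 2, on its own coin, pays 2*3^n
    if game % 100_000 == 0:
        print(game, total1 / total2)    # typically lurches around rather than settling near 0.5
```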

I don’t think I’m capable of making it any clearer than that.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Chronos:

I don’t see any error in what you said. Yeah, I think you should play game 2–in a finite series of games, you would “expect” to win more in Game 2 than in Game 1. (I’m using “expect” somewhat loosely here–not the strict probability definition).

All I’m claiming is that the ratio of winnings of Game 1 to Game 2 will not converge to 1/2. The ratio will jump around.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Ok, Chronos, I've thought about your hypos. Your presentation of the hypos seems admirably clear. Nevertheless, I don't see quite what you are driving at–that is, why there's something problematic. Regarding Hypo 1, I agree that you should be indifferent between the tables. Regarding Hypo 2, I feel pretty firmly that you should prefer the game with the 2*3^n payoff (where there is a dependence between the games). Regarding Hypo 3, you should prefer to be at the table where you are Player 2. One way of arriving at this conclusion is that you are indifferent between being Player 1 at either table, and you prefer being Player 2 at a given table to being Player 1 at that table. By transitivity of preferences, . . . . So I think that all of these results are intuitively consistent. Perhaps you can clarify: where's the rub?

But this is the interesting point. If all of this is correct, it seems you have a very easy proof that Game 2 in my original question must be preferred to Game 1, even if the ratio of their average winnings does not converge. The point of this thread for me, in a sense, has been to try to get a proof via probability theory (in quantitative terms) of a result–Game 2 is preferable–that seems obvious via the basic utility theory argument you have advanced.

Finally, if anybody is interested, I am currently trying to write a paper (hopefully for publication) on a series of hypos not terribly different from the ones Chronos mentions. My hypos are slightly more complicated and present very paradoxical results. I'd be delighted to discuss my specific hypos and theories with anyone, but I think it would only be appropriate to do so off-list. At a minimum you would have to read a 10-page draft paper of mine.

But it's nice to know others are thinking about things similarly.

Tony


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant

A couple of corrections I just noticed I need to make on my long post above. In many places I accidentally used the word “flips” when I should have used the word “games”.

Also, when I was figuring up the total expected winnings through the first 2^n games (not flips), assuming there had been no 3^n winner or greater, I undercounted the total winnings by a little (basically, only by a couple of games). That doesn't make enough of a difference to change my original argument, though.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Konrad, you are simply WRONG on many counts.

You say: << The ratio of SUM(1)/SUM(2) from 0 to n is 1/2 no matter how you rewrite it. >>

This is just plain false (assuming the notation SUM is used the way I've been using it, to mean the infinite sum). Would you say that SUM(1)/SUM(4) = 1/4? But note that SUM(4) = SUM(2), by simply re-parenthesizing: SUM(2) = 2 + 2 + 2 + … = (2+2) + (2+2) + (2+2) + … = 4 + 4 + 4 + … = SUM(4).

IF the series under consideration were convergent, then you could rearrange, factor, etc., and you wouldn't change the values. But when an infinite series is DIVERGENT, simple arithmetic (like SUM(2) = SUM(2*1) = 2*SUM(1)) just doesn't work.

You need to step back and rethink. You’ve got some misconceptions about how limits and infinite series work.

You are also wrong to say that 0/0 is defined. 0/0 is NOT defined. Period.

NOTE: the limit of f(x)/g(x) can be defined even as f(x) and g(x) both go to zero; but that's a VERY different animal from 0/0. A function approaching a limit is NOT the same as a function being equal to the limit point. Therein lies the secret of much of calculus, which I taught lo these many years ago.
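A standard illustration of that distinction: sin(x)/x tends to 1 as x goes to 0, even though plugging in x = 0 gives the undefined form 0/0. A quick numeric peek (just a sketch):

```python
import math
# sin(x)/x gets as close to 1 as you like for small nonzero x,
# yet sin(0)/0 itself is undefined -- the limit is about nearby x, not 0/0.
for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, math.sin(x) / x)
```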

And actually, even if an infinite series does converge, in some cases you still have to be careful with rearrangements. For example, the series

1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 +…

can be made to converge to any real number by an appropriate rearrangement of the terms.
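Here's a rough sketch of how such a rearrangement works in practice (an illustration, not a proof): greedily add positive terms 1, 1/3, 1/5, … until the running total passes a chosen target, then negative terms -1/2, -1/4, … until it drops back below, and repeat; the rearranged partial sums close in on whatever target you picked.

```python
# Sketch of rearranging 1 - 1/2 + 1/3 - 1/4 + ... to home in on a target.
target = 3.14159
total = 0.0
p, q = 1, 2                    # next odd and next even denominators to use
for _ in range(200_000):
    if total <= target:
        total += 1.0 / p       # add the next positive term, 1/p
        p += 2
    else:
        total -= 1.0 / q       # add the next negative term, -1/q
        q += 2
print(total)                   # lands close to the chosen target
```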


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

Cabbage,

I'm working through your reasoning. I certainly appreciate your efforts to explain this, but I need a bit of clarification.

You write:

"Through the 2^n flips (with no 3^n winner or greater winner yet), what is the total expected winning for each game? Half the games are expected to win on the first flip, a quarter on the second, and so on. The expected winnings are:

(SUM i=1 to n-1) [2^(n-i)]*(3^i)

= 2*(3^n) - (2^n)*3"

This cannot be literally correct. The expected winnings for every series of throws is not a number, but a probability distribution. (Well, if it were a number, the number would be "infinity," in the sense that we say "the expected winnings from a single throw should be valued at infinity, i.e., an amount higher than any finite number.") That is, there would be a probability associated with every possible total of your winnings, which are infinite in number. Do you mean to say that 2*(3^n) - (2^n)*3 represents a sum that you would have .5 confidence your total winnings would not exceed (so those winnings or less are "expected"), or that 2*(3^n) - (2^n)*3 is the most likely total winnings? Neither seems to work. If n=3, then we are talking about the expected winnings over 8 games. When n=3, the expression equals 54-24, or 30. But that is much too low on either interpretation.

I certainly follow the general approach of your argument; I'm just having some trouble with the details.

Tony


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant

It's the same kind of thing as before, when I was talking about the expected maximum number of flips. I was doing the expected value of total winnings after 2^n games, assuming that no game had won 3^n up to that point. The assumption allows us to work with finite values instead of infinite, just like I did before. And the argument that we can actually make this assumption was that, if a game had won 3^n before that point, it would have REALLY screwed up the ratios–the ratios get harder to change the more total winnings you have. If you can show that a win of 3^n will significantly change the ratio after 2^n games have occurred and the winnings have built up, certainly it would have changed the ratios if it happened earlier. (In other words, if you compare (1+x)/2 to (10000+x)/20000, if x is large enough to change the latter "significantly" from 1/2, it will definitely change the former ratio a lot from 1/2.)

So, for example, take the case when n=3 (we've played 8 games), and none of the games have won 3^3 = 27 dollars or more. We would expect, then, that half of the games have won on the first flip: 4*3 = 12 dollars. Half of what's left would win on the second flip: 2*9 = 18. Here's where I mentioned I forgot to add in a couple of games; let's go ahead and assume that they won 9 each, as well, for a total winning of 12+18+18=48 (making the winnings as large as possible, within a certain range of probability, so that the ratio will be hard to change). Then the corresponding expected total winnings of the other game would be twice that, or 96, keeping the ratio at 1/2. (The actual ratio won't necessarily be 1/2, but since one of the main questions seems to be whether or not the ratio converges to 1/2, let's try and keep it as close as we can and see what happens.)

But it won't stay at 1/2. For any given game, there's a 1/4 chance you'll win $27 or more. Through playing 8 games, that would be expected to happen at least once (twice, actually). But if the ratio is around 48/96, whichever game wins $27 (or more) is going to significantly screw up the ratio. Suppose on the 9th game that one game wins 27, the other 9 (or 18, depending on which game). (48+27)/(96+18) = .66, (48+9)/(96+27) = .46, both far from .5. This isn't unique to n=3; each time one game or the other has a big payout of 3^n or 2*3^n for the first time, that win will generally be big enough to change the ratio significantly. Enough to keep it from converging.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

CK sez:

En Garde! CKDextHavn, you are WRONG on many counts! There, I countered your wild assertion with one of my own.

Haven't I already disproved this twice in this thread? Wasn't disproving this silly idea a question on the Descartes math competition? (A question which I got right.) And, no, sum from 0 to n does not mean infinite sum. It means sum from 0 to n. (How can you say an infinite sum from 0 to n???) If you want an infinite sum you have to take the limit as n goes to infinity. Either way, as I said many times already, rewriting 4 as 2 + 2 would simply give you the limit as n goes to infinity of SUM(2) from 0 to 2n. That diverges just as quickly as SUM(4) from 0 to n.

And you found this information where exactly? You’re telling me that 2(SUM(1)) from 0 to n doesn’t equal SUM(2) from 0 to n for any n?

Let’s try not to be patronizing here. Especially when one of us obviously doesn’t know what they’re talking about.

Here is what I said (with some added asterisks):

Would you at least do me the honour, wise and supremely correct being that you are, of reading my posts before you respond to them? Is it really so much to ask that you at least make incorrect assertions about what I have written, rather than about what I have not?

Can you just tell me one thing? Do you agree or disagree with the statement that SUM(A)/SUM(A), both from 0 to n, = 1? If you agree with this statement, then tell me if you agree or disagree with the statement that the limit of that equals 1 as n goes to infinity.

Cabbage: I think the series you are talking about (1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 +…) can be made to sum to any number by rearranging the terms FOR A FINITE n, but it will always diverge in the limit as n goes to infinity. That's a big difference.

Woops, I missed a clever yet easy jab.

CK sez:

Yes, you obviously taught it before these 2 chaps named Newton and Leibniz made many changes to it. You should brush up on the latest publications by Euler and Riemann, there’s been a lot of changes since you first learned it.

Gotta go tutor my friend for his multivariable calculus exam now…

Wrong. Review “alternating series” and “conditional convergence” (as opposed to “absolute convergence”).

Gotta run right now though, I’ll try and post more later.


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

konrad says:

cabbage responds:

cabbage says later:

Aren't you committing the same error you nail Konrad for? Isn't the 2nd ratio the same indeterminate form as the 1st? Then how do you evaluate the limit?

Re the infinite series S = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + …: using methods of integration, you can show S = ln 2 (the natural log of 2).
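As a quick numerical check (just a throwaway sketch), the partial sums do creep toward ln 2, roughly 0.6931:

```python
import math
# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ... drift toward ln 2.
s = 0.0
for k in range(1, 100_001):
    s += (-1) ** (k + 1) / k
print(s, math.log(2))          # both print roughly 0.69315
```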

<< Do you agree or disagree with the statement that SUM(A)/SUM(A) both from 0 to n = 1. >>

I disagree with the statement if SUM(A) is divergent, but I agree with the statement if SUM(A) is convergent.

<< If you agree with this statement then tell me if you agree or disagree with the statement that the limit of that equals 1 as n goes to infinity.>>

Again, if the SUM(A) is convergent, then I agree that the limit SUM(A)/SUM(A) is 1. If SUM(A) is divergent, then I disagree that the limit is 1. That’s the whole point of this argument, that SUM(1)/SUM(2) is not 1/2.

My point about rearranging the terms is that SUM(1) and SUM(2) and SUM(3) and SUM(4) all diverge. Yes, they all diverge to a countable infinity (and the same countable infinity, if you like), but that’s not the point. When dealing with the quotient of such sums, there is no reason to take SUM(1)/SUM(2) to be any different from SUM(1)/SUM(4) or SUM(1)/SUM(1). The ratio of countable infinity to countable infinity is not defined.

But SUM(A) from 0 to n has to be convergent since we’re not talking about limits here. n must be finite. So basically you’re saying you agree.

Now, if SUM(A)/SUM(B) = 1/2 for all finite n, do you agree with the statement that if we take the limit as n goes to infinity it should also go to 1/2?

If not, why not? If it is true for any n it must be true when you take a limit. If you plot the ratio versus n it’s a straight line.

jcgmoi:

No, you can find the limit with a little algebra. Try dividing both top and bottom by 3^n, it makes it a little easier to see the limit.

And, if you rearrange the terms of the series, you can make it converge to any other number you like: pi, e, -4365.45, whatever. Order is important.

Konrad:

I would agree that

lim [(SUM from i=1 to n) (3/2)^i]/[(SUM from i=1 to n) 2*(3/2)^i]

as n goes to infinity is 1/2. We’re taking the ratio of the nth partial sum each time (which is 1/2 always), so the limit is 1/2, of course.

This is not the same, however, as (and I’m hoping this comes out readable):

(lim as m goes to infinity) [(SUM from i=1 to m) (3/2)^i]

(lim as n goes to infinity) [(SUM from i=1 to n) 2*(3/2)^i]

which doesn't exist. For example, as I've said before, take the ratio of the jth partial sum in the top and the (j+1)st partial sum in the bottom. This isn't 1/2, and will not converge to 1/2. m and n are independent of each other; the "limit", if you want to call it that, will be different depending on the rates at which m and n increase, so there's actually no limit at all.
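To make that concrete, here's a small sketch (mine, just illustrative): pair the jth partial sum on top with the (j+1)st on the bottom and the ratio heads for 1/3, not 1/2, so the value really does depend on how m and n run off to infinity together.

```python
# Ratio of the j-th partial sum of (3/2)^i to twice the (j+1)-st partial
# sum: it settles near 1/3 rather than 1/2, because the extra term in the
# denominator never stops mattering.
def partial(j):
    return sum((3 / 2) ** i for i in range(1, j + 1))

for j in (5, 10, 20, 40):
    print(j, partial(j) / (2 * partial(j + 1)))
```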


…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)

If I follow this, I think the point is that while:

Lim a f(a)/Lim a f’(a) = Lim a [f(a)/f’(a)]

is true,

Lim a f(a)/Lim b f(b) = Lim a [f(a)/f(b)]

is false. Indeed, the left side of the equality is a number, the right a function.

Hope my notation is clear. I haven't done any calculus for years . . . .

Tony


Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant