Actually, what I’m trying to say is that this is not true (and the derivatives have nothing to do with what I’m trying to say; I haven’t used any derivatives. You can replace the f′'s with f’s and it still won’t be true).
On the left hand side, you may as well replace the a’s on the bottom with b’s; it will mean the same thing. The a is just a “dummy” variable in each limit anyway, and the a on top has nothing to do with the a on the bottom. Writing it this way makes it clearer that there’s no kind of “dependence” going on between the two.
…ebius sig. This is a moebius sig. This is a mo…
(sig line courtesy of WallyM7)
CKD appears a bit lonesome in his staunch defence of truth, justice and the American way, so let me try to provide support.
Consider two infinite sequences A(n) and B(n). If they both converge to finite, non-zero limits, then we all agree that
limit A(n)/limit B(n) = limit{A(n)/B(n)}
(By the way, “Limit A(n)” is defined as a number that A(n) gets closer to and stays closer to than any preset small distance for all n larger than some value.)
If both sequences diverge to infinity, or both converge to zero, then there is simply no meaning to the expression
limit A(n)/limit B(n)
We can write the symbol, but it does not represent anything. It’s undefined.
Take the example
A(n) = 1,2,4,8,16,…
B(n) = 2,4,8,16,…
B(n)/A(n) = 2 for all n. So, you might think that
limit B(n)/limit A(n) = 2
But, B(n) and A(n) are the same sequence, except for the first term. So you could also argue that
limit B(n) = limit A(n), or
limit B(n)/limit A(n) = 1
This appears to be a paradox, which helps illustrate why there is no natural way to define Infinity/Infinity.
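Here is a quick numerical sketch of that ambiguity (just an illustration of the example above, nothing more):

```python
# The example: A(n) = 1, 2, 4, 8, ...  and  B(n) = 2, 4, 8, 16, ...
# Both sequences diverge, so "limit B(n) / limit A(n)" has no defined value;
# which number the term-by-term comparison suggests depends on how you pair terms.

A = [2 ** n for n in range(10)]        # 1, 2, 4, 8, ...
B = [2 ** (n + 1) for n in range(10)]  # 2, 4, 8, 16, ...

same_index = [B[n] / A[n] for n in range(9)]      # B(n)/A(n) = 2 for every n
shifted    = [B[n] / A[n + 1] for n in range(9)]  # B(n)/A(n+1) = 1, since B is just A shifted

print(same_index)  # all 2.0
print(shifted)     # all 1.0
```

Both pairings are equally legitimate ways of comparing the two sequences, which is exactly the trouble.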
The actual problem is even more interesting in that it involves a series of random numbers. I appreciate being introduced to it.
Perhaps I should change my UserName to “Chopped Liver”.
I didn’t think I was alone, when I had Truth on my side, but thanks, December, for the support. Actuaries know these things.
I have reread some of the posts, and I think December has hit the nail on the head: the problem is indeed that the quotient of the limits is not necessarily the limit of the quotients.
In this case, we are dealing with the Expected Values of the games, which are given by the infinite sums SUM(An) and SUM(Bn), where n goes to infinity.
Konrad, you have apparently been dealing with finite sums, and I have no problem with what you say regarding finite sums… or even absolutely convergent sums.
But in this case, the expected winnings from each game are an INFINITE (and divergent) sum. OK, “converges to infinity” if you want, but the point is that the sum is unbounded… that is, the partial sums can be made as large as one would like, by choosing enough terms.
Now, it is true that the infinite sum is the limit of the partial sums from 1 to n as n --> infinity. OK, let’s call SAn the partial sum A1 + A2 + … + An. Then the infinite sum SUM(An) is lim(SAn) as n --> infinity.
Similarly for SBn, the partial sum from 1 to n: B1 + B2 + … + Bn.
In the situation we are dealing with, the quotients of the partial sums, SAn/SBn, are all 1/2 (the question of whether we are dealing with SAn/SBm, I will let slide).
But, as December rightly notes, it is NOT the case that lim(SAn)/lim(SBn) = lim(SAn/SBn) as n --> infinity. It can be true, but it is not necessarily true, if the limit is unbounded (or if the limit of the denominator is zero, for instance). (Damn, I wish I could do math notation here.) Hence, stuff like the limit test, ratio test, L’Hopital’s Rule, etc.
I stand where I’ve stood: the expected outcomes are infinite (unbounded) and the ratio of the expected outcomes is also (in this case) indeterminate/infinite/undefined.
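If it helps, here is a rough sketch of what I mean, taking as an assumption the terms written out later in the thread (An = (3/2)^n for Game 1 and Bn = 2*(3/2)^n for Game 2):

```python
# Partial sums of the two expected-value series, with the terms assumed to be
# An = (3/2)**n (Game 1) and Bn = 2*(3/2)**n (Game 2).  Both partial sums grow
# without bound, yet SAn/SBn is a perfectly good 1/2 at every step -- which is
# NOT the same object as lim(SAn)/lim(SBn), since the latter is
# "infinity over infinity" and undefined.

SA = SB = 0.0
for n in range(1, 41):
    SA += (3 / 2) ** n        # partial sum SAn
    SB += 2 * (3 / 2) ** n    # partial sum SBn
    if n % 10 == 0:
        print(n, SA, SB, SA / SB)   # the ratio prints 0.5 each time; the sums keep growing
```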
One last comment: Konrad, I see no reason for the gratuitous insults. I said you were wrong, and wrong you have been, in several matters (including thinking that SUM(An)/SUM(Bn) = SUM(An/Bn).) I did not insult you personally, and I see no reason for you to insult me. You want to do that, I’ll meet you in the Pit.
Cabbage sez:
OK, now you have to think about what you’re doing here for a moment. You can’t just blindly divide 2 sums for no reason. You gotta ask yourself: what is this ratio that I’m finding? I can say 5/7 = 10/14, but that has nothing to do with the problem.
Now what we are trying to find out is the ratio of expected payoffs for the games. Yes, the games ARE independent of each other when you play them. BUT, what is the ratio of their payoffs? The expected payoff for a game is SUM(probability of getting to n flips)(payoff for n flips). If we were playing 2 games and trying to find out the ratio that way you would be right to say that we couldn’t do it because one series could go faster than the other. What you are saying by writing that limit the way you have is that you have two independent games. That’s not what we’re trying to find here. We’re trying to find the ratios of average payoff per play. When you try to find that you have to compare each payoff for a game of n flips.
This way the two series are not independent because it’s not the ratios for two games but the ratio of the payout for each game if it goes to n flips.
If I’m saying I’m summing the ratios of the expected payoffs then right away I can write the sum as SUM(2^n/2^n)(2*(3^n)/(3^n)). I don’t even have to start out with two series because it’s just a simple formula for finding the expected payoff for one play of any game.
So basically what I’m trying to say is you have to ask yourself why we’re trying to find the ratio SUM(A)/SUM(B). Since we are just finding the ratios of expected payoff for n flips we KNOW they are not independent. No matter how you rewrite it will be the same.
You want some more evidence? What if the game was played with a 4-sided die (where you have to keep landing on the same side to keep playing) instead of a coin? It shouldn’t change the ratio of the payoffs, should it? But there I could prove easily (using SUM(x^n) = 1/(1-x)) that you’d get a 2:1 ratio; I’ll sketch the arithmetic below. The ratio 2(3^n)/3^n still has a divergent top and bottom, yet we get an answer that isn’t divergent.

Or, to look at it another way, you could say the player has to flip the coin twice each turn and have it land heads both times. (That is, n is now half the number of flips.) How would something like this affect the game? Logically, it shouldn’t affect the ratios. Yet (1/4^n)(3^n) is convergent, so the ratio will give 2:1.

If you assume that the original game has an undefined expected payoff, then you’re saying that by making the player flip the coin twice instead of once per turn, suddenly the game changes and there is a defined ratio. But we really haven’t changed the game, so that doesn’t make sense. It gets harder to get to a big payoff, but the ratio of the payoffs for the two games hasn’t changed. This is a logical inconsistency that proves the original problem must have a defined ratio. It’s like saying that in the original problem it wouldn’t matter which table you’d go to in a casino, but if they suddenly made both games harder then it would matter which table you went to.
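To spell out the die arithmetic (just a sketch of the numbers I’m claiming, assuming the payoffs 3^n and 2*3^n and survival probability (1/4)^n per the series used elsewhere in this thread):

```python
# Die variant: probability of getting to n rolls is (1/4)**n, payoffs 3**n and 2*3**n,
# so the expected values are SUM[(3/4)**n] and SUM[2*(3/4)**n] for n >= 1.
# These converge, so their ratio is well defined.  Closed form for |x| < 1:
# the sum over n >= 1 of x**n equals x / (1 - x).

x = 3 / 4
ev_game1 = x / (1 - x)        # = 3
ev_game2 = 2 * x / (1 - x)    # = 6
print(ev_game1, ev_game2, ev_game2 / ev_game1)   # 3.0 6.0 2.0
```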
CK sez:
But it is true in certain cases. For example, the limit of y=x over y=2x as x goes to infinity. These are two straight lines that go to infinity just as our sums do, yet their limit is 1/2. This is the exact same thing as the two series, except it’s continuous.
I’ve shown that in the limit test two divergent series can have a ratio. Logically, if it’s possible and these 2 series don’t have a ratio then there must be some reason why. You still haven’t shown me why they don’t. It’s not enough to say they can’t because they’re divergent! According to the limit test you CAN have divergent series with a ratio. If nothing else I would like you to answer this question for me.
You were being patronizing, and that’s insulting.
Apart from the SUM(A)/SUM(B) = SUM(A/B), which I corrected myself and which wasn’t even necessary to my proof, what have I said that was wrong? Where are these “several matters”? Why do you see it as so important to latch on to something so trivial as that little mistake which had nothing to do with my proof? Is it because you need to prove to yourself that you are correct? Because you can’t find any holes in my reasoning apart from saying “No, you can’t do that because it’s divergent.” Do you have any proof that you can’t use normal algebra in those cases? You’re just saying “No, you can’t”. You can’t write 2SUM(A) = SUM(2A)??? Says who?
Ha! You thin-skinned weenie! You call those gratuitous insults? Come to the Pit and I’ll give you such a mocking that the very earth will tremble! When I’m through with you, statisticians will scare their children by telling them that if they don’t behave, CKDextHavn will teach them mathematics.
Aha! Enlightenment comes.
Konrad persists: << The expected payoff for a game is SUM(probability of getting to n flips)(payoff for n flips). >>
Well… sort of. The expected payoff of Game (1) is the INFINITE SUM (probability of getting to n flips) x (payoff for n flips), as n goes from 1 to infinity. And that value is unbounded.
Ditto for Game (2).
<<Since we are just finding the ratios of expected payoff for n flips we KNOW they are not independent. >>
Here’s where the enlightenment falls. Konrad and I are talking about different games. So, I say: NO. We are NOT just finding the ratio of expected payoff for n flips. That’s a different question: If you flip the coin n times, what’s the ratio of the likely payoff from Game(1) to the likely payoff from Game(2)?
That’s the easy question, because if there are only n flips (n being finite), there is a finite sum in the numerator, and a finite sum in the denominator. That’s a different game.
The game being asked about is, you flip a coin repeatedly UNTIL IT COMES UP HEADS, however many flips that may take. The Game Konrad is talking about is, you flip a coin n times where n is fixed.
Before you go throwing your insults around or taking offense because of the way you read my tone, Konrad, you might ask whether there is some basic difference in our understanding of what’s going on.
I stand where I stood: in the game as stated, the expected value of numerator and denominator are unbounded and the ratio is not convergent.
Can one have infinite sums where the ratio is convergent? Sure. But this example ain’t one of them.
I am talking about what you originally thought I was. SUM(probability of getting to n flips)(payoff for n flips) from 0 to n as n goes to infinity is the expected payoff per game.
I’m starting to wonder if we’re talking about the same thing, too:
Is this the source of the confusion? I can think of two scenarios:
1. Two people are at the same table, flipping the same coin, for game after game. One player gets the Game 1 payoffs, the other gets the Game 2 payoffs; however, winnings for both are determined by the same flips of the coin.
2. Two people are at different tables, flipping different coins, for game after game. One player is playing Game 1, the other, Game 2. Each plays an equal number of games, but with different coins, resulting in a different number of flips for corresponding games.
Konrad, I know we both agree on 1. At the end of each game, player 2 gets exactly twice as much as player 1. The ratio of winnings will always be 1/2, because their games have the same outcome (same number of flips for both players for each game) each time. The ratio is constant, so it will converge to 1/2, of course.
Do we agree on 2? Corresponding games will be different for each player. The first game, one might win $3, the other might win $54, and so on.
I agree that in 1. the ratio converges to 1/2. I’ve been arguing that in 2., the ratios will not be expected to converge; they will jump around, as the different players get successively larger payoffs from time to time.
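Here’s a rough simulation sketch of scenario 2 (just an illustration; I’m assuming a game of n flips pays 3^n in Game 1 and 2*3^n in Game 2, matching the expected-value series quoted elsewhere in the thread):

```python
import random

def flips_until_heads():
    """Number of flips in one game: keep flipping until the first head."""
    n = 1
    while random.random() < 0.5:   # tails, keep going
        n += 1
    return n

# Scenario 2: two players at different tables with independent coins.
# A game lasting n flips pays 3**n in Game 1 and 2*3**n in Game 2 (assumed payoff rule).
random.seed(1)
total1 = total2 = 0
for game in range(1, 100_001):
    total1 += 3 ** flips_until_heads()        # table 1
    total2 += 2 * 3 ** flips_until_heads()    # table 2, different flips
    if game % 20_000 == 0:
        print(game, total1 / total2)   # running ratio wanders; one long game can swamp it
```

The running ratio doesn’t settle near 1/2 the way it does in scenario 1; a single long game at either table can yank it around no matter how many games have already been played.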
Just for the record, at about post 25 in this thread, I stated:
Tony
Two things fill my mind with ever-increasing wonder and awe: the starry skies above me and the moral law within me. – Kant
Thanks, Cabbage, I think you were more succinct than I in describing the two different games. And yes, Tony, I’ve been responding to what you were asking.
There is no question about the first interpretation. On any play of the game, the player of Game 1 gets half the amount of the player of Game 2, since the same toss governs both games.
So, now we await Konrad’s response.
The ratio of the payout of these two games is, as was correctly pointed out before, undefined, specifically because the payout is an infinite, unbounded sum. I think everyone is in agreement that the expected payoffs for the two games are:
Payoff[1] = SUM[n=1 to infinity][(3/2)^n]
Payoff[2] = SUM[n=1 to infinity][2*(3/2)^n]
which, of course, assumes that there are NO time or money limits; i.e., it is technically possible that a game could last for one flip, or ten flips, or a googol flips.
Now, consider the following:
P[2]/P[1] = S[1-inf][2*(3/2)^n]/S[1-inf][(3/2)^n]
where I’ve used some notational abbreviation that is hopefully obvious. I think everyone agrees with this statement. The problem comes in when you do this:
P[2]/P[1] = S[1-inf][2*(3/2)^n]/S[1-inf][(3/2)^n] = 2*S[1-inf][(3/2)^n]/S[1-inf][(3/2)^n] = 2
However, I can also say (pay attention here!) that
P[2] = S[1-inf][2*(3/2)^n] = S[1-inf][3^n/2^(n-1)] = S[1-inf][3*(3/2)^(n-1)] = S[0-inf][3*(3/2)^n] = 3 + S[1-inf][3*(3/2)^n]
so now
P[2]/P[1] = (3+S[1-inf][3*(3/2)^n])/S[1-inf][(3/2)^n] = 3/S[1-inf][(3/2)^n] + 3*S[1-inf][(3/2)^n]/S[1-inf][(3/2)^n] = 0 + 3 = 3
So, since 2=P[2]/P[1]=3, then 2=3. But 2 does not equal 3, so there must be something wrong here. The fallacy is the assumption that ratios of infinite, diverging sums are defined. QED.
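If it helps to see this numerically, here’s a small sketch (my own illustration) of the two groupings above evaluated with partial sums out to N terms:

```python
# Evaluate the two groupings of P[2]/P[1] above with partial sums out to N terms.
#   "straight":  S[1..N][2*(3/2)^n] / S[1..N][(3/2)^n]
#   "reindexed": (3 + S[1..N][3*(3/2)^n]) / S[1..N][(3/2)^n]
# Both are rewritings of the same ratio of divergent sums, yet one truncation
# heads to 2 and the other to 3 -- the ratio simply isn't defined.

for N in (5, 20, 80):
    S = sum((3 / 2) ** n for n in range(1, N + 1))
    straight = sum(2 * (3 / 2) ** n for n in range(1, N + 1)) / S
    reindexed = (3 + sum(3 * (3 / 2) ** n for n in range(1, N + 1))) / S
    print(N, straight, reindexed)   # straight stays at 2.0; reindexed approaches 3.0
```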
If you disagree with this, I can only think of two reasons why:
- You don’t believe that the payoff for the game is really an infinite sum. ANS: If there is a real limitation on time or money, of any sort (like, you have to complete the game before the universe collapses at the end of time), then we’re talking about a finite sum, and the ratio is defined, and the argument above is false. But that is not the game I’m talking about, and I don’t think it’s the game the OP was talking about, either.
- You think there is some trickery in the mathematical manipulation, or a “loss” of terms. ANS: Nope, everything is legit for an infinite sum. If you don’t believe me (and I suppose there is no reason why you should, other than the stellar clarity of my arguments), look up the following: Bryan Bunch, 1982 “Mathematical Fallacies and Paradoxes”, Dover, ISBN 0-486-29664-4. This contains a pretty complete discussion of the fallacy of comparing infinite sums, and also a short discussion of the Petersburg paradox (which is what this thread is about).
Zut,
Welcome to the conversation.
Quick question from a person who took only two courses in college math, 22 years ago.
We all agree that if Game 1 and Game 2 were dependent, i.e., their payoffs depended upon the results of the same coin-flipping event, the ratio of their payoffs would be 1/2. Is your proof inconsistent with that fact? In other words, where in your proof is the assumption that the games are not dependent?
In any case, I look forward to studying your proof more carefully when I have some more time, as well as to reviewing the reference you recommend.
Finally, do you know how to quantify (mathematically express) the long-run advantage in aggregate expected payoffs of Game 2 over Game 1? If there will be no fixed ratio approached, what can be said with precision about their expected comparative payoffs? (I realize that Cabbage earlier opined on this question in general terms.) For example, is there an upper bound of their expected ratios as the number of plays increases?
Tony
Tony: Zut’s contradiction starts with:
P[2]/P[1] = S[1-inf][2*(3/2)^n]/S[1-inf][(3/2)^n]
In the notation I was using, that’s = SUM(An)/SUM(Bn)
This assumes that P[2] and P[1] are independent. If they are not independent (both are based on the same flips), then:
P[2]/P[1] = SUM[1-inf][2*(3/2)^n/((3/2)^n)]
That’s SUM(An/Bn)… a different animal.
The whole paradox in this game is that the mathematical “expected value” is counter to common sense. Common sense says that you will never get a string of a hundred tails in a row before you get the first heads.
Mathematics says, that’s not quite right. If you flip the coin once a second for long enough (far longer than a few billion years, in fact), even waiting a second between tries after you get the first heads, you’ll eventually get that string of 100 tails.
Common sense says, that kind of time span is pretty much “never.”
There are techniques to map the likelihood of outcomes, and thus to come up with a probability range for distributions. In computer simulations, if you’re going to play the game 250,000 times for instance, your longest string of tails will likely be around 16 or 17.
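For what it’s worth, here’s a quick way to check that kind of figure yourself (a rough sketch, not the simulation I actually ran):

```python
import random

def tails_before_heads():
    """Length of the run of tails before the first head in one game."""
    run = 0
    while random.random() < 0.5:
        run += 1
    return run

# Longest run of tails seen over 250,000 simulated games -- typically somewhere
# in the mid-to-high teens (on the order of log2 of the number of games played).
random.seed(0)
longest = max(tails_before_heads() for _ in range(250_000))
print(longest)
```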
tony,
The proof I posed is strictly for independent games; for dependent games, it breaks down because the initial suppositions are no longer valid. For a dependent game, you are saying “Flip the coin, pay player one according to a certain set of rules, and then pay player two twice as much,” which equates to:
Payoff[1] = SUM[n=1 to infinity][(3/2)^n]
Payoff[2] = 2*Payoff[1]
which, of course, is a different problem than the one that (I think) you originally posed.
A rather poor analogy (sorry, the best I can do) is this: You and I both roll one die each (independently). If the die comes up a 6, the rolling player receives one dollar. Now, if we play n times, what is the probability we each walk away with the same amount of money? Intuitively, this probability decreases with increasing n, and is always less than 100%. OK, what about if money is paid to both of us based on the same die roll? Then the probability we each walk away with the same amount of money is exactly 100%, always, irrespective of n. Let me emphasize that this example is not meant to reflect on the actual probabilities in the Petersburg paradox (or even ratios of probabilities), but rather is intended to illustrate how problems change according to initial assumptions about the dependence or independence of events.
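Just to put a number or two on that die analogy (a sketch with made-up parameters, only meant to show the dependence/independence distinction):

```python
from math import comb

# Each player rolls a die n times and gets $1 per six.  With INDEPENDENT rolls,
# P(both finish with the same amount) = sum over k of P(exactly k sixes)**2,
# which is below 100% and shrinks as n grows.  With SHARED rolls it is 100% always.

def p_equal_independent(n, p=1/6):
    return sum((comb(n, k) * p**k * (1 - p)**(n - k)) ** 2 for k in range(n + 1))

for n in (1, 5, 20, 100):
    print(n, p_equal_independent(n))   # decreases with n, always less than 1
```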
Also, the long-run advantage in aggregate expected payoffs of Game 2 over Game 1 is… undefined. I realize that this is intuitively unsatisfying, but there it is. The problem is that the size of the payoffs keeps increasing faster than the probability of achieving them decreases; more to the point, this increase continues to infinity. If there were any finite limit, you would expect a payoff ratio of 1:2. However, since there is no limit, there is the tendency for single games in a large series to upset the applecart by introducing extremely large payoffs.
Put another way, the more often you play, the more likely high payoffs (with low individual event probability) are to occur. Thus, no matter how often you play, your total payoff is highly influenced by the presence or absence of individual, low-probability events.
Put a third way, the method I used in my previous post can be (fallaciously) used to show that the ratio of payouts from games one and two can be anything: 1:2 or 1:3 or 3:1 or 1:100000000. So you can’t even say that game 2 has an advantage over game 1. The only thing you can say is that you can’t say anything.
Like I said, this is intuitively unsatisfying, but unfortunately it’s the nature of infinity to not live up to our finite expectations.
Zut,
“So you can’t even say that game 2 has an advantage over game 1.” ??? What is your reaction to Cabbage’s post 594?
Tony
tony,
Ummm…sounds to me like Cabbage and I are saying the same thing. Unfortunately, this is confused because I’m not sure which post you are referring to (for some reason, all of his/her [sorry, Cabbage, I’m new] posts are #594). However, it seems to me that Cabbage is saying that, if you were to actually play these two games independently, you would find that the payoff ratio is inconsistent. Moreover, it will jump around in response to individual large payoffs in one or the other of the games, and you can’t predict where it will end up after x number of games, no matter how large x is. That’s pretty much what I’m saying.
The “post number” changes each time, to reflect the total number of posts by that poster. I know, I know, but complain about it in a different forum. That’s the way it works. So you can’t identify someone’s post by the number.
I think at this point everyone is agreed, except for Konrad, from whom we are waiting to hear now that we think we figured out the difference.
Tony, in terms of your latest question, whether we can’t even tell whether Game(1) or Game(2) is better… the answer is in my last post. In a strict mathematical sense, no, we can’t. The expected values are unbounded/infinite for both games.
On a finite game level, however, where we can only play the game a dozen times or so, we know that the distribution of likely outcomes favours the game that pays double. Especially if you’re using a computer simulation random-number generator that always follows the same sequence (which puts us in the case of really using the same coin toss for both games, rather than independent tosses!)
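As a tiny sketch of that last point (hypothetical code, just to show the effect of a repeated random-number sequence):

```python
import random

def flips_until_heads(rng):
    n = 1
    while rng.random() < 0.5:
        n += 1
    return n

# Seed two generators identically and the "independent" games see the same flips,
# which is really the same-coin case: the per-game payoff ratio is then 1:2 every time.
rng1, rng2 = random.Random(42), random.Random(42)
for _ in range(5):
    n1, n2 = flips_until_heads(rng1), flips_until_heads(rng2)
    print(n1, n2, (3 ** n1) / (2 * 3 ** n2))   # n1 == n2, so the ratio is always 0.5
```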
CKDextHavn, I might disagree with you, depending on what you meant. You said,
1. If you meant, “We restrict the number of tosses in a game to be less than some finite number” then I agree with your conclusion.
2. If you meant, “If we play both game(1) and game(2) a dozen times each, it is probable that the payoff from game(2) will be more than the payoff from game(1) more often than not” then I agree.
3. If you meant, “If we play both game(1) and game(2) a dozen times each, the total expected payoff from game(2) is more than the expected payoff from game(1)” then I disagree.
Upon rereading what you said, I suspect you meant option 2 above. However, there’s a difference between “How much more will you win with game(2)?” (which has no answer) and “How much more often will you win with game(2)?” (which, I think, does have an answer). Which, upon rereading yet again, looks like your point. However, the difference between these two, although not subtle at all in mathematics, is rather subtle in an intuitive, verbal exchange.
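Here’s a rough sketch of what I mean by the second question having an answer (my own illustration, assuming single-game payoffs of 3^n and 2*3^n):

```python
import random

def flips_until_heads():
    n = 1
    while random.random() < 0.5:
        n += 1
    return n

# Play single, independent games of each and count how often Game 2's payoff (2*3^n)
# beats Game 1's (3^n).  Unlike the ratio of total winnings, this frequency settles
# down to a definite proportion as the number of trials grows.
random.seed(3)
trials = 200_000
wins2 = sum(
    1 for _ in range(trials)
    if 2 * 3 ** flips_until_heads() > 3 ** flips_until_heads()
)
print(wins2 / trials)   # a stable proportion, in contrast to the undefined payoff ratio
```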
<< However, there’s a difference between “How much more will you win with game(2)?” (which has no answer) and “How much more often will you win with game(2)?” (which, I think, does have an answer). Which, upon rereading yet again, looks like your point. However, the difference between these two, although not subtle at all in mathematics, is rather subtle in an intuitive, verbal exchange. >>
Yes, fair enough. Common sense says that a game with a higher payout will probably pay more IF YOU PLAY OFTEN ENOUGH… but still a small (finite) number of games. Sorry for being sloppy in the wording.
Also, zut, I have removed your multiple postings. Once you hit SUBMIT REPLY, the reply has been sent… even though you may have to wait a bit to see it. Hitting SUBMIT REPLY multiple times results in multiple posts. [grin] It is certainly a legitimate debating technique or political ploy to repeat yourself over and over and over until your opponents give up in exasperation… but it’s not recommended here [/grin]