This is not for homework. This is not for work work either. This is related to some stuff at work that I’ve already solved numerically, but I’m wondering if there’s an elegant algebraic solution, and hoping that a more mathematically astute Doper can lend a hand.
Let me introduce Gary the Gambler. Gary gambles, but he’s not addicted to gambling. To prove to himself that he’s not addicted, whenever he goes out for a night of gambling, he sets a limit on the number of games he’ll let himself lose. Once he’s lost that many games, regardless of how many he’s won, he’ll quit for the night and head home to play Farmville. But some nights he won’t hit his limit. Maybe he gets on a winning streak until it gets to be closing time at the gambling den, or maybe the DVR is busted and Fringe is on, or he gets kicked out for excessive punning or whatever.
What we’d like to know is, assuming we know the maximum number of games he’d be able to play on a given night, and assuming he has a constant probability of winning a given game, how many games do we expect him to lose?
Let’s call the probability that he’d win a given game a. For convenience, we’ll call the probability that he’ll lose a game b even though by definition it’s merely 1 - a. The maximum number of games he can play that night we’ll call m and his self-imposed loss limit we’ll call L.
A while back, Omphaloskeptic (if I recall correctly) mentioned in this forum that if you consider the generating polynomial (bx + a)[sup]m[/sup], the probability that there will be n successes (in our case, losses) in m trials will be equal to the coefficient for x[sup]n[/sup] when you multiply out the terms. For example, if Gary will only have a chance to play 5 games that night, then the probability that he’ll lose exactly 3 games is (5 choose 3) * b[sup]3[/sup] * a[sup]2[/sup]. If a = 75%, then there’s a 10 * 0.25[sup]3[/sup] * 0.75[sup]2[/sup] =~ 8.8% chance that he’ll lose 3 games.
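Just to sanity-check that example, here's a quick Python sketch of the coefficient formula (the function name and numbers are mine, purely illustrative):

```python
from math import comb

a = 0.75   # probability Gary wins a given game
b = 1 - a  # probability he loses one
m = 5      # games he can play that night

# Coefficient of x^n in (b*x + a)^m is C(m, n) * b^n * a^(m-n),
# i.e. the probability of exactly n losses in m games.
def p_losses(n, m, b):
    return comb(m, n) * b**n * (1 - b)**(m - n)

print(p_losses(3, m, b))  # 0.087890625, i.e. about 8.8%
```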
To calculate the expected number of losses, then, we need merely multiply each coefficient by the number of losses associated with that term and sum up those values. Since the number of losses for a term is just the power of x it carries, when m <= L this amounts to taking the derivative of our generating polynomial with respect to x and evaluating it at x = 1.
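For what it's worth, with no loss limit in the way, that derivative at x = 1 just gives m times b, the familiar binomial mean. A quick check (example numbers are mine):

```python
from math import comb

a, b, m = 0.75, 0.25, 5

# Sum each coefficient times its power of x: E[losses] with no limit.
expected = sum(n * comb(m, n) * b**n * a**(m - n) for n in range(m + 1))

# The derivative of (b*x + a)^m is m*b*(b*x + a)^(m-1), which at x = 1
# collapses to m*b since a + b = 1.
print(expected)  # 1.25
print(m * b)     # 1.25
```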
This breaks down, however, when m > L. I've been trying to think of some manipulation of Omphaloskeptic's generating polynomial that would remove the terms with n > L and put each of their coefficients back in multiplied by x[sup]L[/sup] instead, so that the derivative-at-1 trick would still produce the right expectation. But I've had no luck, having neither a strong enough mathematical background nor powerful enough Google-fu.
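For reference, my numerical solution amounts to something like the following: cap the loss count at L, since any outcome with L or more losses means Gary quit at exactly L. A sketch with made-up numbers:

```python
from math import comb

a, b = 0.75, 0.25  # win/loss probabilities
m, L = 10, 2       # max games that night, self-imposed loss limit

def p_losses(n, m, b):
    # probability of exactly n losses in m games
    return comb(m, n) * b**n * (1 - b)**(m - n)

# Terms with n < L contribute n losses as usual; every outcome
# with n >= L losses contributes exactly L, since Gary stops there.
tail = sum(p_losses(n, m, b) for n in range(L, m + 1))
expected = sum(n * p_losses(n, m, b) for n in range(L)) + L * tail
print(expected)
```

This is exactly the "replace x[sup]n[/sup] with x[sup]L[/sup] for n > L" idea done term by term; what I'm after is a closed-form way to do the same thing to the polynomial itself.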
Any ideas?