Gambling odds question.

I had a strange dream last night: I was on an Amazing Race-type show, and one of the challenges was gambling.

In the dream you were given 1000 dollars, and the goal was to make it to 2000. If you reached 0 you had to do the punishment task of wrestling a lion (hey, it was a dream).

Anyway, in dream world I easily calculated all of the odds and found that the best solution was to take the 1000 and bet it all on one hand of blackjack. Which, fortunately, I hit. Now the odds I calculated were total dreamland BS, but I’m curious how the odds work out in real life, on one trial vs. many.

So, for example, let’s say blackjack is a strict 48% win for the player per trial. In the bet-it-all strategy you have a pretty obvious 48% chance to win and 52% chance to lose.

Now what are your chances to win if you bet half your starting total each trial (not half your current stake, although that would be interesting too), i.e. just 500 bucks each bet? Or 10% (a 100 bet each trial)?

I’m doing something wrong in my infinite series and getting stupid numbers. I’m sure someone must have figured it out before, but I can’t find it with a Google search. What are the odds of turning 1000 into 2000? It would seem that increasing trials against unfavorable odds should be a bad strategy, but I can’t make my numbers prove it.

Oh, and by the way, I wrote a program to run a bunch of trials, which confirms what I expected; I’m just curious about the right way to show it mathematically.
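For reference, a simulation like that might look something like this (a minimal Python sketch, not the original program; it assumes a flat 48% win rate, a fixed bet size, and stopping at $2000 or $0 as described above):

```python
# Minimal Monte Carlo sketch of the problem described above (not the
# original program): start with $1000, make a fixed-size bet each hand
# at a flat 48% win probability, and stop on reaching $2000 or $0.
import random

def win_prob(bet, trials=50_000, start=1000, goal=2000, p=0.48):
    wins = 0
    for _ in range(trials):
        bank = start
        while 0 < bank < goal:
            bank += bet if random.random() < p else -bet
        wins += bank >= goal
    return wins / trials

for bet in (1000, 500, 100):
    print(f"bet ${bet}: estimated P(double) = {win_prob(bet):.3f}")
```

Smaller bets should come out worse, since every extra hand is played at a disadvantage.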

Isn’t this just a Martingale progression?

No, it’s not Martingale. Every bet is the same amount once it’s chosen. What I want to find is a formula for this totally contrived situation.

So S is the money you start with, W is the money to win, and P is the probability that you will win any trial (for simplicity’s sake I’m making this a 2:1-payoff-bet-only game that wins with probability P). I want to find the formula that gives me R, where R is the constant bet I should start making to maximize my chances of winning, given S, W, and P.

I would guess that you maximise your chance of walking away with $2,000 by putting all the money on one bet. If you divide the money in two, and bet each $500 on one game, after those two games you have:
a 0.2304 chance that you have $2,000
a 0.4992 chance that you have $1,000
a 0.2704 chance that you have $0
That’s an expected value of $960, i.e. you lose 4% of your stake. If you then move on from the middle outcome of $1,000 and play 2 more games, the expected value from that point will be $960 again; each round you lose that 4%. So I believe that you’re better off losing the 4% only once, by betting just once.
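The arithmetic behind those three numbers is easy to check, e.g. in Python (p = 0.48 per hand assumed, as in the thread):

```python
# The three outcomes of two independent $500 hands at p = 0.48:
p, q = 0.48, 0.52
print(round(p * p, 4))      # both won  -> $2,000: 0.2304
print(round(2 * p * q, 4))  # one each  -> $1,000: 0.4992
print(round(q * q, 4))      # both lost -> $0:     0.2704

ev = 2000 * p * p + 1000 * 2 * p * q    # expected bankroll after two hands
print(round(ev, 2))                     # -> 960.0
```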

(On the other hand, if the expected return were greater than 100%, you should split the money into amounts as small as possible. That is, of course, what casinos do, since the odds favour them.)

I haven’t worked out all the details, but for the simple $500 bet size case, the odds to double your money are:

P[sup]2[/sup] + n[sub]3,1[/sub]P[sup]3[/sup]Q + n[sub]4,2[/sub]P[sup]4[/sup]Q[sup]2[/sup] + n[sub]5,3[/sub]P[sup]5[/sup]Q[sup]3[/sup] + … (where Q = 1 - P is the per-hand loss probability)

where e.g. n[sub]3,1[/sub] is the number of different ways one can have three wins and one loss, i.e. three. (LWWW, WLWW, WWLW. We throw out the WWWL case as you’d have stopped by then.) The more general formula for n would be the choose function minus one, i.e. n[sub]4,2[/sub] would be 6 choose 2 minus 1 or 6!/(2!4!) - 1 = 14. n[sub]5,3[/sub] would be 8 choose 3 minus 1 or 55, etc.

Decreasing your bet size would increase the number of steps needed to get to your goal, so the formula would become:

P[sup]x[/sup] + n[sub]x+1,1[/sub]P[sup]x+1[/sup]Q + n[sub]x+2,2[/sub]P[sup]x+2[/sup]Q[sup]2[/sup] + n[sub]x+3,3[/sub]P[sup]x+3[/sup]Q[sup]3[/sup] + …

where x is the number of wins it would take to reach your goal.

This is about where my formula started to fall apart: you have to subtract more than one. So I agree this is the point where you add in the 6-draw wins. Any 6-draw win has to be four wins and two losses; otherwise it would have been a win or loss earlier.
But looking at the 15 cases, most don’t actually add in:
wwwwll: throw out, won early
wwwlwl: throw out, won early
wwwllw: throw out, won early
wwlwlw: throw out, won early
wwlwwl: throw out, won early
wwllww: throw out, won early
wlwwwl: throw out, won at 4 draws
wlwwlw: throw out, won at 4 draws
wlwlww: good, count it
wllwww: good, count it
lwwwwl: throw out, won at 4 draws
lwwwlw: throw out, won at 4 draws
lwwlww: good, count it
lwlwww: good, count it
llwwww: throw out, lost early

I only count the 4 good ones in there to add, not 14.
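That elimination is easy to verify by brute force; here is a quick Python sketch of the same counting rule (stop as soon as you’re up two bets or down two bets):

```python
# Count the 4-win/2-loss sequences that actually finish on hand six,
# i.e. first reach net +2 on the last hand without hitting -2 earlier.
from itertools import product

good = []
for seq in product("wl", repeat=6):
    if seq.count("w") != 4:
        continue                   # need exactly four wins, two losses
    net = 0
    for i, hand in enumerate(seq):
        net += 1 if hand == "w" else -1
        if abs(net) == 2:          # up $1000 or busted: game over
            break
    if net == 2 and i == 5:        # finished exactly on hand six
        good.append("".join(seq))

print(len(good), good)
# -> 4 ['wlwlww', 'wllwww', 'lwwlww', 'lwlwww']
```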

It is very possible I’m missing something.

No, you’re right. There are definitely more combinations that need to be excluded; I had a feeling that my formula was a bit too simplistic, given the constraints on the problem. I’m not sure how to write a formula for “choose all combinations of e.g. five wins, three losses where the cumulative sum only reaches two at the last value”. It may be that you can show that only combinations with all wins in the last x places are unique, and that all other winning combinations would have been hit in the lower-order terms, which would simplify the formula quite a lot…

Thanks, that’s a very mathematical way of putting what my issue ended up being. With each term in the series I was having to add more ‘fix-it factors’ in an unprogrammatic way.

Your intuition is right. A simple way to say it is that your mean losses are the same whether you bet $1000 in one hand or $100 in ten hands, but the variance in your losses is ten times smaller in the second case. This means that the probability that you’re up by $1000 is much smaller in the second case.
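A quick check of that mean/variance claim (a Python sketch, assuming p = 0.48 and independent hands):

```python
# Net-outcome mean and variance: one $1000 hand vs. ten $100 hands.
p, q = 0.48, 0.52

def net_stats(bet, hands):
    mean_one = bet * (p - q)           # per-hand expected net (-4% of the bet)
    var_one = bet**2 - mean_one**2     # per-hand variance (outcome is +/- bet)
    return hands * mean_one, hands * var_one   # sums over independent hands

m1, v1 = net_stats(1000, 1)
m10, v10 = net_stats(100, 10)
print(m1, v1)      # same mean loss of $40...
print(m10, v10)    # ...but one tenth the variance
print(v1 / v10)
```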

For the cases where the individual-hand bets are small compared to your total bankroll, you can use the Central Limit Theorem to argue that a Gaussian approximation to your bankroll after k hands is good. Integrate the tails of the Gaussian to estimate the probability that you’ve won or busted.
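For instance, here is a sketch of that Gaussian tail estimate in Python (it ignores the stop-at-barrier rule, so it only roughly indicates where the bankroll distribution sits; the 50-hand, $100-bet numbers are illustrative):

```python
# Normal approximation: after k hands of bet b at win probability p, the
# net gain is b*(2X - k) with X ~ Binomial(k, p).  This estimates the
# upper tail P(net gain >= B), ignoring the absorbing win/bust barriers.
from math import sqrt, erfc

def tail_up(k, b=100, B=1000, p=0.48):
    q = 1 - p
    mean = b * k * (2 * p - 1)        # expected net gain (negative here)
    sd = 2 * b * sqrt(k * p * q)      # standard deviation of net gain
    z = (B - mean) / sd
    return 0.5 * erfc(z / sqrt(2))    # Gaussian upper-tail probability

# Rough chance of being up $1000 after 50 hands of $100:
print(tail_up(50))
```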

If you want to compute exact probabilities (e.g., for the cases where your bet is a significant fraction of your bankroll), you can do something like this: Let your initial bankroll be B and your individual-hand bets be b. Then your bankroll must always be B+nb dollars, where n is an integer (the number of wins minus the number of losses). If N=B/b (I will assume N is an integer), then you’re only interested in -N<=n<=N (smaller and you’re broke; larger and you’ve won, assuming you’re just trying to double your money). So you can set up a Markov transition table of 2N+1 equations relating the P(k,n) (the probability that you have B+nb dollars after k hands) to the corresponding values P(k-1,n) after k-1 hands:
P(k,n) = p P(k-1,n-1) + q P(k-1,n+1)
except that
P(k,N) = p P(k-1,N-1) + P(k-1,N)
P(k,N-1) = p P(k-1,N-2)
P(k,1-N) = q P(k-1,2-N)
P(k,-N) = q P(k-1,1-N) + P(k-1,-N)
where p = 1 - q = 0.48 is the one-hand win probability. That is, in each hand you either gain $b, with probability p, or lose $b, with probability q. Once you win or bust, you quit, so there is a deterministic transition from P(k-1,N) to P(k,N) (and likewise at -N).

This can be expressed as a transition matrix equation, of the form P(k)=M P(k-1); for example, if N=2 (B=1000, b=500),



    [ 1 0.48  0    0    0 ]
    [ 0 0     0.48 0    0 ]
M = [ 0 0.52  0    0.48 0 ]
    [ 0 0     0.52 0    0 ]
    [ 0 0     0    0.52 1 ]

Now P(k)=M[sup]k[/sup]P(0), and P(0) is just a 1 at n=0 and 0 everywhere else, so you can calculate the win and loss probabilities for any number k of hands.
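Here is that iteration done in plain Python for the N=2 case above (a sketch equivalent to computing M[sup]k[/sup]P(0), with the states ordered n = +2, +1, 0, -1, -2 to match the matrix):

```python
# Iterate P(k) = M P(k-1) for N = 2, p = 0.48; states n = +2,+1,0,-1,-2.
p, q = 0.48, 0.52
M = [[1, p, 0, 0, 0],
     [0, 0, p, 0, 0],
     [0, q, 0, p, 0],
     [0, 0, q, 0, 0],
     [0, 0, 0, q, 1]]
P = [0, 0, 1, 0, 0]        # start with the whole bankroll at n = 0

for k in range(1, 11):
    P = [sum(M[i][j] * P[j] for j in range(5)) for i in range(5)]
    if k % 2 == 0:
        print(f"k={k:2d}  P(win)={P[0]:.4f}  P(loss)={P[4]:.4f}")
```

This reproduces the hand-by-hand win/loss probabilities tabulated below.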

If you compute the eigenvalues and eigenvectors of M, you can also meaningfully compute M[sup]infinity[/sup] and figure out your long-term winning percentage. (Because M is a stochastic matrix, all of its eigenvalues are between -1 and 1, and it has at least one eigenvalue 1. For large k, M[sup]k[/sup] is dominated by the contributions for its 1 eigenvectors.)

For N of reasonable size, these calculations are trivial to do in something like Matlab or Mathematica. For the case N=2, for example, with p=0.48, you have



# of hands    Prob{win}    Prob{loss}
-------------------------------------
     2          0.2304       0.2704 [the case Giles mentioned]
     4          0.3454       0.4054
     6          0.4028       0.4728
     8          0.4315       0.5064
    10          0.4458       0.5232
infinite        0.4601       0.5399

:smack: Besides being confusingly written, this part is wrong. It only works if M is Hermitian (which, in this problem, is never the case). Instead, use something like a Jordan decomposition of M to efficiently compute M[sup]infinity[/sup] (or just find it numerically).