Mathletes: Two card-odds questions (blackjack and side bet)

borschevsky wins. Well done.

Yes, it’s counterintuitive as is often the case with conditional probability.

Check out a Monty Hall, a “two children, one is a boy,” or a two-envelope paradox thread (there’s a current one).

That checks out with what I have. The house’s chance of winning the black/red bet is equal to its chance of winning 24.068 individual single-number bets in a row.

Chance of house winning 24 spins: 0.52727
Chance of house winning 1 b/r spin: 0.52631

Now, does the player have enough money to survive 24 spins? Yes. He’ll be left with 0.0338X, where X is his original bankroll. So let’s say he bets that.

Odds of house taking everything: (37/38)^25 = 0.513396. But then the player would only have 1.2169X, not the 2X we declared as the victory condition.
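Checking that arithmetic in Mathematica (bankroll normalized so X = 1; betting (2 - bank)/35 each spin works out, if I’ve set it up right, to bank[n] = 2 - (36/35)^n after n straight losses):

bank[n_] := 2 - (36/35)^n;
N[bank[24]]     (* 0.0338: what's left after 24 losing spins *)
N[36 bank[24]]  (* 1.2169: an all-in winning 25th spin returns 36x the stake *)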

I get the feeling that these odds only come out the way they do because we’re using integer steps. If there were a way to play 0.068 of a spin, we’d see the numbers line up neatly; it’s the same bet either way. What we can definitely conclude is that playing a B/R bet is the equivalent of playing a number 24.068 times in a row. So of course the player that only plays 24 times walks away with more money in the end…he didn’t play as much!

Thank you. In the earlier thread, L.N. did all this arithmetic (using a hand calculator!? :eek: ) but then forgot that he could bet everything on the 25th spin! :smack:

There’s nothing special about doubling your money exactly. To answer other questions in the thread, the related approach is also the best way to multiply your money by 1.9, by 1.1, by 1000, whatever. Simply put, an N:1 bet is simply better for you than an M:1 bet whenever N > M and the “vigorishes” (as usually defined) are equal.

A question which arises is: Why is this counterintuitive? Well, it isn’t counterintuitive to me ( :smiley: ) and perhaps the easiest way to understand it, as I mentioned earlier, is that on average less of your money suffers the vigorish if you play correctly.

Here’s a much simpler example. You want to quadruple your money. Should you take a single 3:1 shot, or let it ride on back-to-back 1:1 shots? Say the 3:1 shot wins with probability p; for the vigorishes to be equal, the chance of winning a 1:1 shot must be 2p, twice the chance of the 3:1 shot. (Presumably, those still answering (a) to the puzzle believe it doesn’t matter which approach you adopt.)

The 3:1 shot succeeds with probability p. Your $1000 is exposed to the vigorish exactly once.

The back-to-back 1:1 approach succeeds with probability (2p)(2p) = 4p^2, and 4p^2 < p whenever p < 1/4, which here is exactly the condition that the casino has a positive vigorish. Meanwhile you’ve exposed $(1000 + 4000p) to the house vigorish: $1000 on the first bet, plus $2000 more in the 2p of cases where you win it and bet again.
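Putting illustrative numbers on it (Mathematica; p = 23/100 is just a sample value, giving a 1 - 4p = 8% vigorish):

p = 23/100;
p*3000 - (1 - p)*1000 // N               (* one 3:1 shot: -80. = -8% of $1000 *)
(2 p)^2*3000 - (1 - (2 p)^2)*1000 // N   (* 1:1 twice: -153.6 = -8% of $1920 *)

The expected loss is the same 8% vigorish applied to the amount exposed: $1000 in the first case, $1000 + 4000p = $1920 in the second.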

Before someone plays what seems to be a favorite game around here (Let’s pretend septimus is wrong because he misplaced a comma), I should have written the (more confused-looking) rule

a 1 : (1/N) bet is simply better for you than a 1 : (1/M) bet whenever N > M and the “vigorishes” (as usually defined) are equal

I just scripted and ran a simulation of the roulette game. The first 10,000 trials showed that by betting the 35:1 games you get a 1.2% advantage; that is, the house’s edge is reduced by 1.20%. That sounded high to me, so I ran it again with 100,000 trials, and 47,648 people won. That’s only 280 more than the 47,368 black-betting winners you’d expect.

So that betting strategy should get you to your double-your-money victory condition about 0.364% more often. If you and 99,999 of your friends all go to the casino, roughly 364 more of them will win if they bet on numbers instead of red/black and stop when they’ve doubled.
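(My script isn’t shown above; one trial of the same idea in Mathematica, betting just enough to reach 2X and going all-in when the bankroll runs short, might look like this:)

septimusTrial[] := Module[{bank = 1},
  While[0 < bank < 2,
    With[{bet = Min[bank, (2 - bank)/35]},
      bank += If[RandomInteger[{1, 38}] == 1, 35 bet, -bet]]];
  bank >= 2]

Count[Table[septimusTrial[], {100000}], True]  (* winners out of 100,000 *)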

We’ve calculated the actual probabilities here (or at least a close lower bound for one) so I don’t know why you’re using a simulation instead.

100,000 people using the septimus strategy would expect to generate over 48,000 winners.

100,000 people using the red/black strategy would expect to generate about 47,368 winners.

The difference is 632 winners: doubling happens 0.632% more often.

Rounding errors were creeping into my previous calculations, so I had Mathematica compute a better lower bound without rounding at any point until I needed to express the answer as a decimal.

The lower bound for up to three desperation spins is:

(1 - (37/38)^24) + (37/38)^24 (1/38) (1 - (37/38)^33) + (37/38)^57 (1/38)^2 (1 - (37/38)^12) + (37/38)^69 (1/38)^3 (1 - (37/38)^1) = 0.48088966

The lower bound for up to four desperation spins is:

(1 - (37/38)^24) + (37/38)^24 (1/38) (1 - (37/38)^33) + (37/38)^57 (1/38)^2 (1 - (37/38)^12) + (37/38)^69 (1/38)^3 (1 - (37/38)^1) + (37/38)^70 (1/38)^4 (1 - (37/38)^91) = 0.48088973
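For anyone who wants to reproduce these in Mathematica, keeping everything exact until the end:

terms = {1 - (37/38)^24,
  (37/38)^24 (1/38) (1 - (37/38)^33),
  (37/38)^57 (1/38)^2 (1 - (37/38)^12),
  (37/38)^69 (1/38)^3 (1 - (37/38)^1),
  (37/38)^70 (1/38)^4 (1 - (37/38)^91)};
N[Accumulate[terms][[{4, 5}]], 8]  (* the three- and four-desperation-spin bounds *)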

The (1/38)^n factor is shrinking the terms so quickly that everything that follows is vanishingly small, so I’m pretty sure this is very close to the actual answer.

As a result I’m going to revise my previous post and say:

100,000 people using the septimus strategy would expect to generate over 48,088 winners.

100,000 people using the red/black strategy would expect to generate about 47,368 winners.

The difference is 720 winners: doubling happens 0.720% more often.

Three reasons:

  1. An empirical demonstration,
  2. To get an exact number,

and the most important reason,

  3. I have fun writing little programs.

I believe you.

An exact solution is both simpler than a Monte Carlo simulation and … well, more exact.
When I posed the question elsewhere, one response linked to a very simple program which solves the problem exactly for any three parameters (goal, payoff, and vigorish).
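I won’t reproduce that program here, but a minimal Mathematica sketch of the same idea looks like the following. It prunes branches whose probability mass falls below eps, so strictly speaking it returns a lower bound rather than an exact value:

winProb[bank_, goal_, payoff_, p_, mass_, eps_] :=
  Which[
    bank <= 0, 0,        (* busted *)
    bank >= goal, mass,  (* goal reached: credit this path's probability *)
    mass < eps, 0,       (* prune vanishingly unlikely branches *)
    True,
    With[{bet = Min[bank, (goal - bank)/payoff]},
      winProb[bank + payoff bet, goal, payoff, p, mass p, eps] +
        winProb[bank - bet, goal, payoff, p, mass (1 - p), eps]]]

winProb[1, 2, 35, 1/38, 1, 10^-10] // N  (* doubling at 35:1 on a double-zero wheel *)

The last line should land right around the 0.48089 lower bound computed above.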

Guys, we have to distinguish between two questions here.

One question is: Does the betting pattern (I personally dislike the grand term “strategy” because it sounds so elaborate) described by some posters here increase the probability of achieving a given result, such as doubling your money?

The second question: Does the betting pattern decrease the house edge?

The two questions are related, but not identical. You might be able to change the chances of doubling your money, but you can’t change the house edge. The calculations presented so far have been concerned solely with the first question.

Consider this:

The house’s chance of winning the color bet is 20/38. Its chance of winning N consecutive single-number spins is (37/38)^N. If you set the two equal to each other and solve, you get N = 24.068, meaning that the house is just as likely to win 24.068 single-number spins in a row as it is to win your color bet.
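One line to check that in Mathematica:

Log[20/38]/Log[37/38] // N  (* 24.0683 *)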

Question: What if we had a game where N’s decimal part were .97? What if it were .0001? How does that affect the ‘benefit’ derived from playing the multiple-spins strategy, henceforth known as the Septimus, for brevity?

Question 2: Is it really the decimal that’s relevant? Or is it the whole number? That is, does the benefit of the Septimus come from the fact that the decimal is low (.068), or the fact that the whole (24) is high?

A casino offers two games:

Game A pays 1:1 and you have a 47.368% chance to win.

Game B pays 1:1 and you have a 48% chance to win.

Is your claim that the house edge is identical for both games?

The concept of optimal stopping seems to be closely related to what we’re talking about here.

The relevant quote is, “In mathematics, the theory of optimal stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost.”

What we’re doing is choosing a stopping rule that minimizes our expected cost.

I’ve thought of a simpler example that should illustrate my point. The expected number of heads per flip of a fair coin is 0.5. If I flip it ten times, I expect 5 heads in ten flips, which is 0.5 heads per flip. If I flip it 2n times, I expect n heads in 2n flips, again 0.5 heads per flip.

Now let’s play a game where I flip a coin until I get a head and then stop and calculate the number of heads per flip. Clearly, I am going to get exactly one head so this is equivalent to 1/flips.

The probability of getting the first head on the nth flip is (1/2)^n, in which case we get 1/n heads per flip.

So our expected heads per flip is the sum of (1/2)^n (1/n) over 1 <= n < infinity. The power series for -Ln(1-x) is the sum of x^n (1/n) over 1 <= n < infinity. Thus our sum is -Ln(1-(1/2)) = Ln(2) = 0.693147 (approx).
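Mathematica will even confirm the sum symbolically:

Sum[(1/2)^n (1/n), {n, 1, Infinity}]  (* Log[2] *)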

So we have increased the expected number of heads per flip of a fair coin by using a stopping rule that optimized our expected value.

Now maybe you don’t trust my calculations above. This is a fairly easy thing to simulate and the answers are far enough apart that a simulation should be convincing.

In fact, I’ve gone to the trouble of writing a simulation for you (Mathematica):


(* One trial: flip a fair coin until the first head, then return heads per flip = 1/flips *)
OneTrial[] := Module[{flips = 1},
  While[RandomInteger[{0, 1}] == 0, flips++];  (* 0 = tails: keep flipping *)
  1/flips]

(* Average heads-per-flip over one million trials, as a decimal *)
Mean[Table[OneTrial[], {i, 1, 1000000}]] // N

This program runs and averages one million trials. Its output was 0.693579 the first time I ran it.

I would say so, but it’s a matter of semantics. The only reason the Septimus works is that some of those people don’t have to bet as much. Some of them get to walk away having made only one bet. But either way, the house takes a certain percentage of the money that hits the table. So in that sense, yes, the edge is identical.

But like I said, it’s semantics, not math.

But anyway, what can you tell me about the questions I’ve posted above?

(I’ll mention, without getting rigorous, that you’re positing a “benefit function” B(), here B(24.068). The way you’ve set it up may be flawed – I think it’s the net payoff that matters, not the probability – but we can gloss over that. :cool: )

It’s the whole number. (I think you’re approaching it in an over-complicated way, but each of us has his own intuition.)

The “decimal” residue is almost irrelevant. (I say “almost” irrelevant because that residue, as you know, introduces an “endgame complication” which means the function describing the benefit won’t be perfectly smooth; that is, B(24.5) or even B(24.0068) will be slightly below the analytic curve connecting B(n) for the integers n. However, that bumpiness is much too small to have much relevance in the general argument.)

This is our optimal stopping rule in action, working to minimize our expected loss.

On review, I see I’ve misunderstood your question and/or misgiven the response. :cool: When I wrote “whole number” I meant “entire number.”

Suppose the payoff or goal is such that you win if any of 24 bets come in, otherwise your capital is completely exhausted. Let B(24) denote a measure of efficacy for that betting situation. Similarly, B(25) would be the measure if the payoff/goal allowed you exactly 25 chances.

I think you’re concerned about the case where 24 bets doesn’t completely exhaust your capital, but leaves you inadequate funds for a finally-successful 25th bet. (I may be quite wrong that this is your concern; if so, ignore these two responses! :wink: ) If applying a simple goal/payoff analytic function yields, e.g. 24.5 as “number of chances” then your benefit B(24.5) will be about halfway between B(24) and B(25), though slightly below the smooth curve due to the complication I alluded to earlier.

(As I write this, I have increasing concern that I misunderstood your question in the first place … but I must correct my misleading/incorrect prior response anyway!)

From the Wikipedia page on roulette, the house edge for betting a single number is calculated as follows:

−1×37/38 + 35×1/38 = −0.0526 (5.26% house edge)

This is (amount we lose)(probability we lose) + (amount we win)(probability we win).

Now let’s consider the septimus betting pattern, with the twist that if we don’t win in the first 24 spins, we quit.

Probability we win: 1 - (37/38)^24
Probability we lose: (37/38)^24

Note: these probabilities obviously sum to 1.

Amount we win: 1
Amount we lose: 1 - (36/35)^24

Note: the amount we lose is the (negative) sum of our 24 bets, which I computed with the geometric sum formula. The result is greater than -1 because we have a little money left over.

So house edge is:

(1 - (36/35)^24)((37/38)^24) + (1)(1 - (37/38)^24) = -0.0367 (3.67% house edge)

The difference in house edge is due to optimal stopping which in this case is equivalent to betting zero on every spin after we hit our number should that event occur.
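A quick check of that arithmetic:

pLose = (37/38)^24;
(1 - (36/35)^24) pLose + (1 - pLose) // N  (* -0.0367 *)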

No, you basically nailed my concern. It was that the casino is just as likely to win all your color money as it is to beat your number bets 24.068 times in a row. And that .068 sticks out at me. At first grok, it looks like the player “deserves” 24.068 spins but the casino is forced to give him 25 full spins before he’s bankrupted. So it looked to me like that .068 is really advantageous, because it’s just a tad more than 24 but he gets a full extra spin. That’s a lot of rounding up, in other words.

So I thought that if the decimal were even lower, it might boost our “improvement” (Is that what a benefit function is?) even higher. That is, instead of the extra 400-something winners that my 10,000 friends get by playing the Septimus on roulette, if there were a different game where the number of spins came out to, say, 24.0000000001, we could get maybe 500-something extra winners (compared to the even-money bet).

But I see, upon further thinking, that it’s actually worse. The higher the number, the better: if the house has to beat me more times in a row, I have a better chance of winning (and quitting) early. So B(25) is better than B(24.068), even though there’s no “rounding up”, which I previously thought was the source of the player’s benefit.

Now, please explain this “benefit function” you keep talking about. Is there a graph somewhere on the internets?