Mathletes: Two card-odds questions (blackjack and side bet)

I can’t find one, though my Google-fu is poor. I did stumble across this discussion which links to a C program you can run iteratively to generate a graph if you wish.
(I’m afraid that if my comments erred on the side of intuition without math details, the website linked to may err the other way. :smiley: )

Here’s a question:

American roulette table (38 numbers). You can only place outside bets (these all pay 1:1 or 2:1). Same setup as presented above: you have $1000 and wish to make it $2000, and you can bet any amount. What’s your best strategy?

I’ve found a way to improve the septimus method.

Bet 1/35 of the difference between your goal and your stack on a single number if you can afford it. If you can’t afford it, place your entire stack on a color.
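A minimal sketch of that rule in Ruby (my own naming; the [amount, bet-type] return pair is just for illustration):

def next_bet(bank, goal)
  stake = (goal - bank) / 35.0
  if bank >= stake
    [stake, :single_number]  # a 35:1 win takes you exactly to the goal
  else
    [bank, :color]           # desperation: everything on an even-money shot
  end
end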

A lower bound for the probability of doubling your stack is:

(1 - (37/38)^24) + (1/38) (18/38) (37/38)^24 + (1/38) (18/38)^3 (37/38)^25 + (1/38) (18/38)^4 (37/38)^26 + (1/38) (18/38)^5 (37/38)^27 + (1/38) (18/38)^7 (37/38)^28 = 0.481771 (approx).
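Evaluating that expression term by term in Ruby reproduces the quoted figure:

terms = [1 - (37.0/38)**24,
         (1.0/38) * (18.0/38)    * (37.0/38)**24,
         (1.0/38) * (18.0/38)**3 * (37.0/38)**25,
         (1.0/38) * (18.0/38)**4 * (37.0/38)**26,
         (1.0/38) * (18.0/38)**5 * (37.0/38)**27,
         (1.0/38) * (18.0/38)**7 * (37.0/38)**28]
puts terms.sum   # => 0.48177 (approx)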

This was in response to:

Game A pays 1:1 and you have a 47.368% chance to win.

Game B pays 1:1 and you have a 48% chance to win.

I was going to let this go, but since I’m posting anyway I might as well address this. Would you say that game A and game B had the same house edge if they didn’t take place on the roulette table?

Based on the above my guess is that you should bet 1/2 the difference between your goal and your stack on a 2:1 bet when you can afford it and bet your entire stack on a 1:1 bet when you can’t.

And my guess is incorrect. The probability of succeeding with the above strategy is 456/976 ≈ 0.467.
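Where that number comes from (a sketch, assuming the strategy exactly as stated above): with $1000 you bet $500 on a 2:1 shot (12/38 to win); a win reaches the goal, while a loss leaves $500, which can’t cover the next $750 stake, so it all goes on a color (18/38) to restore $1000. That gives the recursion P = 12/38 + (26/38)(18/38)P, which a one-liner confirms:

p_win = (12.0 / 38) / (1 - (26.0 / 38) * (18.0 / 38))
puts p_win   # => 0.467213..., i.e. 456/976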

With equal vigorishes, the bet with highest payout is always best. (I’m beginning to think no one reads my posts or links :D) Your first moves in the example are to bet $500 on a 2:1 shot, another $500 if it loses, $250 at your 3rd turn if the 2nd bet wins.

With equal vigorishes, the bet with highest payout is always best. Indeed, the local goal in your example (doubling your bank) is precisely how the claim of the subthread was phrased.

(Caveat: This is all for American roulette. In Europe the “en prison” rule means even-money bets have less vigorish than other bets.)

Are you certain of this? You may wish to check the numbers.

I’ll be convinced if you can show or find an actual proof (the most recent link claimed it offhand and gave an example, but never actually proved it in general).

I’m too tired to go through the whole process, but my guess is that using this strategy for a given goal (but probably not dependent on the vigorish unless it is unreasonably large), there is a critical ‘minimum probability’ payoff, with probability increasing above or below it. I strongly suspect that for G=2 it’s equal to phi (~1.618).

In terms of exposure, the thing that needs to be taken into account is the probability that some amount of money will be bet. With higher payoffs, the exposure is less only if the probability of more money being bet is less than for a different payoff. It may be useful to look at the distribution for the total amount of money likely to be bet (i.e. P(x) is the probability that x or more dollars will be bet in the game). I would hazard that the mean (expectation) of this distribution is lower for the payoffs that are better.
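One way to put numbers on that hazard (a Monte Carlo sketch of my own, not from the linked page; it simplifies the desperation step by betting whatever is left on the single number rather than on a color):

def total_wagered_single(bank = 1000.0, goal = 2000.0)
  wagered = 0.0
  while bank > 0 && bank < goal - 1e-9        # tolerance guards FP rounding
    stake = [(goal - bank) / 35.0, bank].min  # afford it, or bet what's left
    wagered += stake
    bank += rand(38).zero? ? 35 * stake : -stake
  end
  wagered
end

trials = 100_000
mean = trials.times.sum { total_wagered_single } / trials
puts "mean total wagered, single-number strategy: ~$#{mean.round}"
# The all-in even-money bet always wagers exactly $1000; the mean here
# typically comes out well below that, consistent with the guess above.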

Oops!! I think my face is turning red! :o
Even the Javascript form at the linked-to page confirms that betting everything on the even-money shot is best in your example. The interface here doesn’t allow a giant red face icon, but I’ll try anyway: [SIZE="9"]:o[/SIZE]

In a (vain?) effort to save face, let’s seek an area of agreement. The linked-to page considers three cases:
(a) “Betting Entire Bankroll”
(b) “Bet the Goal-Reaching Minimum”
(c) The case where cases (a) and (b) are applied successively

It then claims (without even bothering to wave its hands :cool:),
(d) With equal vigorish, higher payoff is better.

Do you accept (a), (b) and (c) as describing optimal strategies in the one-payoff case? I think they do, but will try to flesh out a rigorous proof for one of these cases if you wish.

A problem arises when (c) introduces “jitter” into the benefit. Thus, in the original subthread puzzle (double your money using the 36-for-1 payoff), you end up having 24.6 chances. Had the payoff been less, 35.13-for-1, you’d have 24.0 chances so wouldn’t need to worry about the jitter of case (c).

The Javascript form at the page you refer to does indeed seem to confirm that 35.13-for-1 would have given a slightly higher win probability than a 36-for-1 payoff, even though this contradicts a claim made on the same webpage! (I hope you e-mail the guy and tell him this, panamajack; I’m feeling too humiliated to do it myself. :frowning: :o :smack: )

Your comment about “‘minimum probability’ payoff, possibly phi” intrigues me, but isn’t fully clear to me. I’ll try to figure it out. In any case, thanks for fighting my ignorance! :smiley:

Of course not, but that is not at all the situation we’re talking about. Your “strategy” is not equivalent to a 1:1 bet with a 48 % chance of winning, and a simple all-or-nothing bet on an even money chance is not equivalent to a 1:1 bet with a 47.368 % chance of winning.

To see that this is not the case, simply look at the money you have to bet with your “strategy”. It’s not just a matter of wagering $1000 on a shot and getting $2000 back with a 48 % probability. What you’re doing is betting a varying amount of money over a number of statistically independent random events.

You are clearly not paying attention.

Is that all you have to say about that?

septimus, I’m in agreement that most of what’s on that page is correct. It’s just the last bit that may be in error. (Again, I’m not certain that it’s false to say the higher-risk bet will be better. It’s just that we haven’t seen it proven, and also now know that this particular strategy is not how to achieve it.)

I’m curious about the 35.13 thing, though. My simulation suggests a monotonically increasing probability once you’re past the minimum (by preserving bankrolls throughout - possibly there’s an error in either his or my procedure). I don’t have time to check it, but I think if his formula for the “refashioned vigorish” is correct, you could try to determine the maximum value for a constant goal and vig.

I’ll post my simulation (Ruby code) in the next post. Sorry I’m lacking the time and energy (I have an ear infection) to look at this some more, as it’s an interesting problem.

The only other thing I’d add is that, from a practical point of view, this strategy takes a lot longer for only a marginal increase in the chance of success. If we posit an emergency situation (you must get 100,000 DM in 20 minutes or your boyfriend dies, for example), the extra time it takes to play this game would likely make the single-spin outcome preferable.

Here’s what I put together. It simulates septimus’s strategy. When you get down to your last spin, it bets it all, and then calculates the next value. The way it decides to terminate is a little wonky, and it could be made non-recursive so it doesn’t crash if you try absurdly high values, but it’s what I’ve worked with.

I put in an alternate “desperation spin” value, but it doesn’t seem to bear out what you did, Lance Turbo. It looks to me like your expression is missing some exponents in the later terms, or I’m not reading it correctly.



module Reach_Goal
        
    PAYOFF = 36
    GOAL = 2.0
    STARTING_BANK = 1.0 # Don't change this value
    
    VIG = 2.0 / 38
    PROB = (1.0 - VIG) / (PAYOFF + 1)
    
    DESP_PROB = PROB
    DESP_PAYOFF = PAYOFF
    
    # Testing alternate "desperation" attempts
    #DESP_PROB = 18/38.0
    #DESP_PAYOFF = 1
    @@play_count = 0
    
    def total_plays 
        @@play_count
    end
    
            
    # Determine the most number of plays possible
    def most_plays(bank)     
        plays = 0
        while bank >= (GOAL/(1.0 + PAYOFF))
            plays += 1
            stake = (GOAL - bank)/PAYOFF.to_f
            bank -= stake
        end       
        @@play_count += plays
        [plays,bank]
    end  
      
    TOL = 0.00000001
        
    # Iteratively determine the probability of meeting goal.
    # Strategy is to keep placing bets until a goal-achieving win occurs
    # If the goal cannot be met, all remaining money will be bet
    
    def p_meet_goal
        @iterations = 0
        p_recur(STARTING_BANK)
    end
       
    
    def p_recur(bank)
        @@play_count += 1
        @iterations += 1      
        puts "Iteration : #{@iterations} with $#{bank * 1000}" #DBG
                
        best_plays,remaining_bank = most_plays(bank)
        p_fail = (1.0 - PROB)**best_plays
        p_win_early = 1.0 - p_fail
        
        if (p_fail*PROB)**@iterations < TOL
            return p_fail*PROB*(1-p_win_early)
        end
        
        p_win_early + p_fail * DESP_PROB * p_recur(remaining_bank*(1.0 + DESP_PAYOFF))
    end
end

include Reach_Goal

puts "Chance to meet goal : #{p_meet_goal}"
puts "Total plays : #{total_plays}"


I don’t know what else to say. You’ve stated a number of demonstrably false things in this thread. Things that have been proven false over and over. I’d like to be able to convince you, but you seem to be immune to facts.

Which “facts”? The fact that you can change the house edge? You can’t. You and others here have presented lots of impressive calculations, but the interpretation that these demonstrate that the house edge can be changed is false.

You don’t seem to believe me, but maybe you’ll believe authorities from the literature. Books where the statement that no strategy can change the house edge in a Roulette-like casino game with fixed rules is made explicitly include:

  • The Unexpected Hanging by Martin Gardner (unfortunately not available in full on Google Books, but I have a copy of it at home, and you might wish to check it);

  • This

  • this

You will now challenge the reputability of the last two books, but at least Gardner’s credentials should be beyond doubt. Is he just too dumb to have the same insights as you?

I am presenting an airtight mathematical argument. No amount of appeals to authority can refute it.

Present your own mathematical argument, show where mine is wrong, or accept the truth.

Here is my argument one more time broken into two statements.

  1. A game that pays 1:1 with a 48% chance of success has a house edge of 4%.

  2. Playing roulette with the septimus betting pattern is equivalent to the game described in 1).

Which one of these statements do you disagree with? Why?

Two comments on your code.

(a) Your code assumes “P to 1” payoffs (rather than “W for 1”), yet is set to PAYOFF = 36. You’ll need to use PAYOFF = 35 for the ordinary roulette bet, and PAYOFF = 34.13 for what I was calling the w = 35.13 case.

(b) Your code uses the termination condition

if (p_fail*PROB)**@iterations < TOL
  return p_fail*PROB*(1-p_win_early)

Rather than trying to understand this, I substituted the much simpler termination condition

if @iterations > 20
  return p_win_early

(I don’t claim this is a “correct” or best termination, just “good enough” for demonstration here.)

With these two changes your code produces output essentially identical to that from the website.
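(A quick numeric check of point (a), for anyone following along: with “P to 1” semantics a win turns a stake s into s*(1 + PAYOFF), so PROB = (1 - VIG)/(PAYOFF + 1) should come out to 1/38 for the straight-up bet.)

vig = 2.0 / 38
puts (1 - vig) / (35 + 1)   # => 0.02631... == 1/38, with PAYOFF = 35
puts (1 - vig) / (36 + 1)   # => 0.02560... != 1/38, with PAYOFF = 36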

(BTW, I also have recently developed some problem in my left middle ear. Don’t worry that thinking on this puzzle causes ear infections! … Mine predates this thread.)

Attentive reading of what I said so far should answer this, but I will repeat these statements again for the sake of clarity. What I disagree with is statement 2 (I made that explicit in post 88).

Statement 1 is indeed true: The expected value of a game is the sum of the products of the payouts in each possible outcome and the associated probabilities. In the game you describe in statement 1, we have two possible outcomes:

  • a payout of 2 with an associated 0.48 probability
  • a payout of 0 with an associated 0.52 probability

So the expected value in this case is 2 * 0.48 + 0 * 0.52 = 0.96

The house edge is the difference between this and 1, thus 4 %, so I agree with your statement 1.

By the way, I also agree with the statement (which you haven’t explicitly made, but which underlies the entire discussion) that the house edge for a simple all-or-nothing bet on even money is 5.26 %:

  • we have a payout of 2 with a probability of 18/38
  • and a payout of 0 with a probability of 20/38

So expected value is 2 * 18/38 + 0 * 20/38 = 36/38

House edge is thus 1 - 36/38 = 5.26 %

The house edge is the same for any other bet in Roulette, including a bet on a single number, where we have

  • payout of 36 with a probability of 1/38
  • payout of 0 with a probability of 37/38

→ House edge: 1 - 36/38 = 5.26 %.

This house edge applies to every bet possible in American Roulette. We could calculate that for each bet that is offered in American Roulette, if you want, but the result will be the same. This is simply because the payout schedules have been calculated by dividing 36 by the number of numbers covered by a bet (an even money bet, for instance, covers the 18 red numbers, or the 18 black ones, or the 18 odd ones, or the 18 even ones; a column bet covers 12 numbers, a bet on a horizontal line covers three numbers, etc.). The total number of numbers is, however, 38. The house edge stems from the remaining 2/38.
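Spelled out in code (a sketch covering a representative handful of bets; the names are mine):

BETS = { "straight up" => 1, "split" => 2, "street" => 3, "corner" => 4,
         "six line" => 6, "column/dozen" => 12, "even money" => 18 }
BETS.each do |name, covered|
  payout = 36.0 / covered           # "for 1" payout, as described above
  ev = (covered / 38.0) * payout    # expected return per $1 wagered
  puts format("%-13s edge = %.2f%%", name, (1 - ev) * 100)  # 5.26% each time
end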

I think we are on common ground so far, aren’t we?

But, again, the game you describe in your statement 1 is not logically equivalent to the betting pattern described by septimus. The latter consists of a series of bets with varying stakes, whose outcomes are statistically independent of each other. That is not what you describe in your statement 1, which assumes a one-shot game with a payout of 2 and an associated probability of 0.48. Or, to look at it from a more intuitive angle: since septimus’ betting pattern, like any other betting pattern in American Roulette, does nothing but combine individual bets, all of which come with a 5.26 % house edge in themselves, how can the house edge for the total game differ from 5.26 %?

Would you agree that a game with exactly two possible outcomes, win $x or lose $x, where the probability of winning is 48% and the probability of losing is 52%, is equivalent to a game that pays 1:1 in which you have a 48% chance of winning and a 52% chance of losing?

Let’s do this as precisely as possible.

  1. Definition: The expected value of a random variable is the weighted average of all possible values that variable can attain where the weights are the probabilities of attaining each value.

  2. Definition: House edge is the expected value of profit/loss, expressed as a percentage (usually made positive) of the amount wagered.

  3. Now consider the septimus betting pattern where we are risking $x. Profit/loss can take on two values, $x and -$x, with probabilities 0.48 and 0.52. Thus the house edge is (0.48x - 0.52x)/x = -0.04 => 4% house edge.

Which, if any, of those statements do you have a problem with? I’m guessing 3), but I’m curious to see how you justify it since it follows directly from widely accepted definitions.

I think the problem is that you are only looking at subsets of the set of all games of roulette.

Your arbitrarily chosen stopping points don’t change the fact that the house edge is 5.26% on each and every bet, regardless of when they are made or for what amount.

I’m not sure, but I think the discrepancy might come from the fact that many of your end points will not be an exact doubling of the bankroll but will be somewhat more, thus making it appear that the house edge is lower than it actually is.
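(One more way to look at the discrepancy, sketched numerically from the thread’s own figures: every individual bet loses 2/38 of its stake in expectation, so the expected total loss must equal 2/38 of the expected total amount wagered. Taking the 48% success figure quoted earlier:)

p_win = 0.48                                   # success probability of the pattern
expected_loss    = 1000 * (1 - 2 * p_win)      # => ~$40 on a $1000 bankroll
expected_wagered = expected_loss / (2.0 / 38)  # => ~$760 crosses the table
puts expected_loss, expected_wagered
# 5.26% of the ~$760 wagered and 4% of the $1000 risked are the same $40:
# the edge per dollar wagered is unchanged; the loss per dollar risked is
# smaller because less money ends up crossing the table.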