Math wonks: A question about probability in a 'coinflip' poker tournament

I sure don’t. I will ultimately be wagering the same amount in either scenario, whether it is 18 bets on one number or one bet on 18 numbers. So if my expected loss is constant no matter how I bet, then it should be the same in both scenarios.

It only makes sense to me if you are expecting me to choose to stop betting should I reach a certain return, meaning I risk less money if I win early on. If that’s the case, then obviously many smaller bets are better. But only because there’s now a chance I will not bet the entire amount.

To use the coins that brickbacon attempted to use to simplify the problem: if I have $2 to spend on two coin flips, it’s better for me to bet $1 at even money on heads for the first flip and, if I win, walk away with $3, than it is to bet $0.50 at even money on both sides of both flips, which just gets me a guaranteed $2.

In scenario 1, my chance of winning more than I started with is 1 in 2. In scenario 2, there is no chance at all.
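A quick enumeration of the two plans bears this out; a minimal Python sketch, with the stakes taken straight from the example:

```python
from itertools import product

# Plan 1: bet $1 at even money on heads for the first flip, then stop.
# Start with $2: heads -> $3, tails -> $1.
plan1 = [3 if f1 == "H" else 1 for f1 in "HT"]

# Plan 2: bet $0.50 on each side of both flips.
# Every flip returns exactly the $1 staked, so the result is always $2.
plan2 = [2 for _ in product("HT", repeat=2)]

for name, results in (("plan 1", plan1), ("plan 2", plan2)):
    mean = sum(results) / len(results)
    p_ahead = sum(r > 2 for r in results) / len(results)
    print(name, "average =", mean, " P(finish above $2) =", p_ahead)
# Both plans average $2, but only plan 1 ever finishes ahead (half the time).
```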

Assume there is a bystander who is willing to pay you C not to play roulette, where C = B - A - epsilon and epsilon is some arbitrarily small value. Using your definition of a better bet, it is always better to walk away from free money and make a wager with negative expected value.

Looking at it from an expected-utility standpoint (assume log utility, although any utility with decreasing marginal value works): it’s better to guarantee a loss of $2 than to probably lose your entire bet. This, to me, seems a better way to frame the question than assuming you need some arbitrarily larger amount of money or else some nebulous bad thing will happen. (What about “I need to leave Vegas with at least $D, where D is less than A but enough for a bus ticket and for the old lady not to throw me out”, a much more common situation?)
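To put a number on it, here’s a minimal sketch of the log-utility comparison. The $100 bankroll and the $38 bet on Red are my own illustrative choices; both options have the same $98 expected final wealth:

```python
import math

bankroll = 100.0   # hypothetical starting bankroll, chosen for illustration

# Option A: a guaranteed loss of $2 -> end with $98 for certain.
u_sure = math.log(bankroll - 2)

# Option B: stake $38 on Red at American roulette (even money, p = 18/38).
# Expected final wealth is (18*138 + 20*62)/38 = $98, the same as option A.
p_win = 18 / 38
u_gamble = p_win * math.log(bankroll + 38) + (1 - p_win) * math.log(bankroll - 38)

print(f"sure small loss: {u_sure:.4f}   gamble: {u_gamble:.4f}")
# sure small loss: 4.5850   gamble: 4.5061 -- log utility prefers the sure loss
```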

Regardless, there are multiple subjective ways of looking at it, all valid for different needs, so claiming that it’s been proven better to bet with higher variance is odd.

The situation given is not the same as the roulette situation. Playing roulette, the chance that a given number will come up is 1/38, and a winning $1 bet returns $36, so if you are wagering dollar stakes you will get paid an average of $36/38 per spin (on an American wheel; in Monte Carlo they recognise that a mere 2.7% house edge is more than enough to get rich on, and they don’t need to be so greedy).

If you bet on one number, it costs you $1 to win your $36/38.
If you bet on two numbers, it costs you $2 to win your $36/38. Etc.
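In case it helps, here is that per-dollar arithmetic as a quick sketch (my framing, assuming $1 on each of n numbers at an American wheel):

```python
from fractions import Fraction

def expected_return(n):
    """Dollars returned per spin, on an $n stake: one of your n numbers hits
    with probability n/38, and the winning $1 bet pays back $36."""
    return Fraction(n, 38) * 36

for n in (1, 2, 9, 18):
    r = expected_return(n)
    print(f"{n:2d} numbers: ${n} staked, ${float(r):.4f} back, {r / n} per dollar")
# The per-dollar return is 18/19 = $36/38 no matter how many numbers you cover.
```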
More later.

First, you said 36, not 18. Second, if you think betting $18 on one number is the same as $1 on 18 numbers, then what are we arguing about and how do you defend your initial claim?

No, I don’t understand the point you were trying to make. Why is it better to bet $2 on one number than $1 on two numbers?

Your first formula is correct but the second formula ignores the fact that you cannot win 1st place twice in one week. It should account for the fact that you have a chance to win both 1st and 2nd place.

So the answer to the OP is that it is better to enter once each week than to enter twice every other week because it is better to have two chances to win 1st place than it is to have one chance to win both 1st and 2nd place.

Note: I was a little rushed posting this. I suspect it’s complete tosh. More later… maybe.

Meanwhile, the situation in the OP breaks down quite simply: If you enter twice in the same week there is a small chance that one of your entries competes with the other, knocking one of them down to a lower place. If you enter once per week that won’t happen. The probability may be small, but there’s no compensating advantage, so it’s a marginally losing option.

Here’s the simplest scenario to clarify the essential truth:

Mr. K bets all $1000 on a group of 9 numbers. He walks away a $3000 winner 23.6842% of the time (9/38). Mr. J, OTOH, bets $1000 on a group of 18 numbers, e.g. Red. If the bet wins, he lets it ride. Mr. J walks away a $3000 winner 22.4377% of the time (18/38 * 18/38). Mr. K’s approach was clearly better than Mr. J’s (though not nearly as good as if he’d started with an $85 single-number bet and gone on from there).
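Those two percentages (and the expected-wager figure that comes up a few posts down) are easy to verify; a minimal sketch:

```python
from fractions import Fraction

# Mr. K: $1000 on a 9-number group. A win pays 36/9 = 4x the stake, so one
# spin turns $1000 into $4000 (a $3000 profit).
p_k = Fraction(9, 38)

# Mr. J: $1000 on Red (18 numbers, even money), letting one win ride.
# Two consecutive wins turn $1000 into $4000.
p_j = Fraction(18, 38) ** 2

print(f"Mr. K: {float(p_k):.4%}   Mr. J: {float(p_j):.4%}")
# Mr. K: 23.6842%   Mr. J: 22.4377%

# Mr. J's expected total wagered: $1000 up front, plus $2000 more in the
# 18/38 of cases where the first bet wins.
print(round(float(1000 + Fraction(18, 38) * 2000), 2))   # 1947.37
```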
The claims that I made in #14 were very specific and were either true or false. In fact, they are true. Their truth can be demonstrated arithmetically, with simulation, or by an argument which appeals to intuition (but can be made rigorous). The simple scenario at the beginning of this post should point the way. At least two respected SDMB mathematicians have already posted in this thread; I will be interested to see if they return and address #14 specifically.

If Mr. J and Mr. K each end up betting a total of exactly $1000 at American roulette, then each will end up losing $52.63 on average. That much we all agree on. And we also agree it makes a difference how they bet. I’ve already shown a “strategy” for Mr. J to arrange that he always loses exactly $52.63! Mr. K, OTOH, can make a single $1000 bet on Number 17 and have a chance of walking away with $35,000 profit.

But in the scenario relevant to my #14, the total wagering is variable. And in the scenario at the top of this post, Mr. K wagers exactly $1000, while Mr. J wagers on average $1947.37. That’s one way to understand why Mr. J’s strategy is inferior!

The underlying point, which leads to the correct answers given to OP’s original question by #2, #6, #7, most clearly #8, is obviously well known, but its application to specific scenarios like my roulette examples surprises most people (even good mathematicians) when they first learn of it. If you refer to the earlier thread you will see that nobody accepted the claim at first but eventually most came around. Lance Turbo, who presumably has his PhD in mathematics by now, took my side … though the argument went on and on. (If you do review that 5-year-old thread, as I just did, you’ll see that septimus’ irritation might have become the most salient detail! Let’s hope that doesn’t happen again. :o )

The number of times I’ve typed the number you mention (between "First you said " and “not 18”) is ZERO, Nada, Zip. I’ve never typed that number in this thread. I have mentioned 38 several times; are you referring to one of those mentions?

In the scenario which heads this post, I demonstrate that betting on 9 numbers is better than 18 numbers. The same principle applies to one number versus two numbers, but to demonstrate it with detailed arithmetic would be far more tedious.

Does this help?

I think the best strategy is to save your entries and wait for weeks where there are a smaller number of entries.

Thanks for remembering. I do, in fact, have my PhD in mathematics now.

I have since learned that your strategy is called ‘bold play’ and it has been proven to be optimal.

Man that earlier thread was frustrating.

I don’t think that matters, at least as far as your expectation is concerned. If you split your entries, it’s true that you have the chance to get first place in both draws. But you also have the chance to get 3rd in both draws, or 4th in both, etc.

Take an example. Say there are two prizes, $4000 for first place and $1000 for second, and say there are 10,000 total entries in all cases.

If you put one entry in each draw, then your expectation is:

$4000 * 1/10000 + $1000 * 1/10000 + $4000 * 1/10000 + $1000 * 1/10000 = $1.

If you put both in a single draw, then your expectation is:

2/10000 * 1/9999 * $5000 +
2/10000 * 9998/9999 * $4000 +
9998/10000 * 2/9999 * $1000
= $1

In the first case, your chances of winning nothing are 9998/10000 * 9998/10000. In the second case, your chances of winning nothing are 9998/10000 * 9997/9999, which is slightly less.
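Those figures check out with exact fractions; a quick sketch of the same arithmetic:

```python
from fractions import Fraction

N = 10_000                     # total entries per draw
first, second = 4000, 1000     # prize money

# One entry in each of two draws:
ev_split = 2 * (Fraction(first, N) + Fraction(second, N))

# Two entries in a single draw:
ev_both = (Fraction(2, N) * Fraction(1, N - 1) * (first + second)   # win both
           + Fraction(2, N) * Fraction(N - 2, N - 1) * first        # 1st only
           + Fraction(N - 2, N) * Fraction(2, N - 1) * second)      # 2nd only
print(ev_split, ev_both)       # both come out to exactly $1

# Chance of winning nothing under each plan:
p_split = Fraction(N - 2, N) ** 2
p_both = Fraction(N - 2, N) * Fraction(N - 3, N - 1)
print(float(p_split), float(p_both))   # the single-draw figure is slightly smaller
```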

To expand on my post #28: if the prize pool is fixed at $5000, the expectation of an entry is $5000/N, where N is the number of players.

If one week there are 25k players and the next week there are 15k players, playing one entry each week has an expectation of $0.53. Playing two entries the second week has an expectation of $0.67. For completeness, playing two entries the first week has an expectation of $0.40, but this is obviously a bad move.
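The same arithmetic in code (ignoring, as above, the tiny change your own entries make to N):

```python
pool = 5000
week1, week2 = 25_000, 15_000   # entries in each week's field

one_each = pool / week1 + pool / week2   # one entry per week
two_week2 = 2 * pool / week2             # save both for the smaller field
two_week1 = 2 * pool / week1             # both into the bigger field

print(round(one_each, 2), round(two_week2, 2), round(two_week1, 2))
# 0.53 0.67 0.4 -- saving entries for the smaller field is worth the most
```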

You are changing the scenario by letting it ride, and changing the amounts that were bet. The issue is simple. Use a die for simplicity. Why would you be better off betting $100 on one number once than $50 on each of two numbers? In the former, you have a 1/6 chance of winning. In the latter, you have a 2/6 chance of winning half as much money. Why is the expected return different? Is there some roulette rule that changes the odds? Please point out where the difference lies.

Who are you talking about? You explicitly mentioned Chronos before, and he said:

That doesn’t seem to back your understanding of the issue. Perhaps he’ll be back to clarify.

Sorry, I meant 35.

The expected return is NOT different, if your total wager is the same. (But expected return does not tell the full story.) Debate, if you wish, the claim I made in post #14, not some claim you imagined I made.

But that is what you said, no? To quote you:

Are you now disavowing this? Please defend the above.

I explained and proved explicitly in what sense it was better to bet 9 numbers than to bet 18 numbers. Did you understand that discussion?

I suggested that a similar argument would apply when {9; 18} is replaced with {1; 2} (but the detailed arithmetic proof might be more tedious). If you understand the {9; 18} discussion, I ask what your intuition tells you about {1; 2}.

I don’t think you have proved that. Where is this proof that doesn’t change the scenario? You also said:

To which MikeS correctly responded:

Why are we both wrong? And please don’t add in something about letting your winnings ride, betting different aggregate amounts, or whatever. Please address the original claim.

No. You don’t have two chances to win 1st AND two chances to win 2nd. The times you do win 1st place you no longer have two chances to win 2nd place; you then only have one chance to win 2nd.

Right, that’s where the first line in that calculation comes from. 2/10000 of the time, you win the first prize. When that happens, 1/9999 of the time you also win second prize, for $5000 total; the other 9998/9999 of the time you don’t win second and only get $4000.

In the 9998/10000 cases where you don’t win first prize, 2/9999 of the time you win second prize and $1000.

There is one argument which might help the intuition grasp the paradoxical-looking observation I made about roulette bets.

Your expected loss is exactly 5.26% of your wager no matter how you bet. But let’s turn it around and consider it from the casino’s point-of-view. On an even-money bet, their expected winning is exactly 5.26% of the money the casino wagers. But, if a customer bets $1 on a single number, the casino is wagering $35 to win $1. If you calculate the house advantage on the $35 it wagers, you will find it to be 0.15% – much much less than 5.26%.
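Here is that turnaround in numbers (a minimal sketch of the arithmetic in the paragraph above):

```python
# Per $1 a customer bets at American roulette, the house expects to keep 2/38.
house_take = 2 / 38                  # 5.26% of the customer's wager

# Even-money bet: the casino puts up $1 against the customer's $1.
print(f"{house_take / 1:.4%}")       # 5.2632% of what the casino risks

# Single-number bet: the casino puts up $35 against the customer's $1.
print(f"{house_take / 35:.4%}")      # 0.1504% of what the casino risks
```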

For some, seeing how different the bets are from the casino’s standpoint may lead to an Aha! moment and to enlightenment! (And it will help explain why high-odds games like Keno have higher vigorish than even-money bets.) If not … forget it. As I said, we discussed this to the point of irritation and exhaustion in the earlier thread.

Let’s not let septimus get frustrated! I am certainly correct; I do not think any mathematicians will appear to say otherwise. My arguments need to stand on their own merits, but I suspect you’d be more eager to learn rather than teach if you knew my credentials. I’m just another asshole on a message board here, but on some subjects I do know what I’m talking about. :smiley:

brickbacon, did you understand and agree with the example I present at the top of post #27? I show that if you want to multiply your bankroll by 4, a 9-number bet is better than an 18-number bet. Please decide, Yes or No, whether you agree with this before proceeding.

This claim is about three numbers, <4, 9, 18>. Let’s consider the same claim, but in the general case <a, b, c>. I assert that the b-number bet will be better in any of these cases than the c-number bet whenever b < c. If you’re still with me, do you think that this statement is also true?
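One way to probe the general claim is a Monte-Carlo sketch. The staking model is my own reading of post #27: on each spin, spread the stake over a b-number group and stake just enough to hit the 4x target on a win (this is what produces the $85-ish single-number bets mentioned there):

```python
import random

def p_reach(b, bankroll=1000.0, target=4000.0, trials=100_000):
    """Estimate the chance of turning bankroll into target by repeatedly
    betting on a b-number group: a hit (prob b/38) pays back $36 per winning
    dollar, i.e. a net gain of (36 - b)/b per dollar staked on the group."""
    hits = 0
    for _ in range(trials):
        w = bankroll
        while 0.01 <= w < target:
            stake = min(w, b * (target - w) / (36 - b))  # just enough to finish
            if random.random() < b / 38:
                w += stake * (36 - b) / b
            else:
                w -= stake
        hits += w >= target
    return hits / trials

for b in (1, 2, 9, 18):
    print(b, round(p_reach(b), 4))
# Success probability falls as b grows: fewer numbers = better odds of the 4x.
```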

I have tried to be careful in the phrasing of my posts in this thread, but perhaps you’ve misinterpreted something I wrote. Obviously I don’t want to defend a claim that I never made. If you think you’ve got some Gotcha, where I phrased a claim poorly, sorry.

We’ve had one PhD in mathematics show up and agree with me. If I get 97% consensus from mathematicians will you acquiesce, or will this become like the climate change debate? :smiley:

Folks are discussing different scenarios. The key in septimus’s scenario is that the player is trying to maximize the probability of reaching a certain gain (a “stop win”). The optimal strategy at a roulette table is indeed a series of small bets for that scenario.

Anyone not overtly taking the stop-win condition into consideration is examining a scenario other than the one septimus is outlining.

You are in the same boat if the goal is to maximize your expected return over some large number of flips. That isn’t the goal in septimus’s scenario. The goal is to maximize the probability of reaching some pre-determined total winnings.

If you bet $1 on heads and $1 on tails every time, your expected winnings are zero and you have zero chance of ever earning (say) $5.

If you bet $2 on heads and $0 on tails every time, your expected winnings are still zero, but now you have some finite chance of earning (say) $5.

If your goal is to earn $5, the second strategy is clearly better. This concept extends into the roulette case.
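A minimal simulation of the two strategies (the $20 ruin boundary is my own addition, just to make the walk stop; any finite bankroll gives the same qualitative answer):

```python
import random

def p_hit_target(on_heads, on_tails, bankroll=20, target=5, trials=100_000):
    """Chance of getting `target` dollars ahead before losing `bankroll`,
    flipping a fair coin with fixed even-money stakes on heads and tails."""
    if on_heads == on_tails:
        return 0.0          # the two bets cancel; wealth never moves at all
    hits = 0
    for _ in range(trials):
        w = 0
        while -bankroll < w < target:
            if random.random() < 0.5:
                w += on_heads - on_tails    # heads: the heads stake wins even money
            else:
                w += on_tails - on_heads    # tails: the tails stake wins even money
        hits += w >= target
    return hits / trials

print(p_hit_target(1, 1))   # 0.0 -- hedged both ways, $5 is unreachable
print(p_hit_target(2, 0))   # ~0.77 -- all on heads gives a real shot at +$5
```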