Against roughly equal competition, player one has won 90% of his games, and player two has won 75% of his games. Is there a quick and dirty way to calculate the odds? Obviously, player one has a less than 90% chance of winning since he’s playing a much better than average player, and player two has a far less than 75% chance of winning since he’s up against someone better than he is.
I realize that calculating precise odds will depend on particularities of the game or matchup, but I assume there’s a basic equation?
This isn’t good, since you don’t get a consistent result if you use the complementary probabilities of losing.
Better to use the win/lose probability ratios:
Player one: 0.9/(1-0.9) = 9 times more likely to win than lose matches overall
Player two: 0.75/(1-0.75) = 3 times more likely to win than lose matches overall
Take the ratio of these 9/3 = 3 for the odds ratio of this matchup
then solve for p where p/(1-p)=3
=> probability of player one winning this match p = 0.75
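The odds-ratio steps above are easy to script. Here’s a minimal Python sketch of that exact calculation (the function name `matchup_prob` is just mine):

```python
def matchup_prob(p1, p2):
    """Probability that player 1 beats player 2, given each player's
    win probability against roughly average opposition."""
    odds1 = p1 / (1 - p1)       # player 1's win/lose odds: 9
    odds2 = p2 / (1 - p2)       # player 2's win/lose odds: 3
    ratio = odds1 / odds2       # odds ratio for this matchup: 3
    return ratio / (1 + ratio)  # convert odds back to a probability

print(matchup_prob(0.90, 0.75))  # ≈ 0.75
```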
It’s a lot more intuitive (and quite possible to do in your head) if you work through the calculation the way I showed it. It’s the same calculation as the long formula in your link, but I’m a little skeptical that a calculation like this using odds ratios deserves a special name.
That’s definitely quick and dirty, although intuitively the odds seem a bit too favorable to player 2, giving him a 45% chance of winning. But I like it, it’s easy and more than I had initially.
Sabermetrics rules, and that sounds closer to what I was expecting. Too bad it’s not quite “quick and dirty”, but since I’ll be writing a program for it, the computer can do the work for me.
Sure it is. It’s the same as the method I explained.
Express everything as the ratio of winning vs losing, i.e. p/(1-p).
Work this out for player one vs everyone (9), and player two vs everyone (3);
divide one by the other (=3);
this is the win/lose ratio for the player one vs player two matchup,
i.e. player one is 3 times as likely to win as to lose, p = 0.75
Yeah, I realized myself that the formula I gave couldn’t be right, because it would say that, if one team had a history of not winning any games, they would have a 0% chance of winning the Big Matchup. But that’s not true, because losing streaks do eventually end, and even a bad team vs. a good team has a chance of winning.
EDIT: But come to think of it, this is a problem with the other formulas people are quoting, too. There needs to be some sort of reversion to the mean included.
(1) The problem with your model is that it’s internally inconsistent.
By your model:
probability of A winning: 90/(90+75) = 55%
probability of A losing: 10/(10+25) = 29%
…which don’t add to 100%
(2) The other issue you raise is not a flaw.
If someone really has a 0% chance of winning against every player they meet, then they should have a 0% chance of winning against any one specific player.
The possibility of mean reversion is not a separate factor, it should be included in the input probabilities. If there is the possibility of mean reversion (i.e. not actually losing), then the true input probability should not be 0%.
Likewise, it’s not a flaw that if two players both have a 0% or both have 100% inputs, then the model I described has no solution. It should have no solution because those input probabilities are mutually contradictory. It’s impossible for two players both to have 0% probability or both to have 100% probability against everyone, since there must be a winner and loser in their particular matchup.
This sounds like what I think of as the Weathermen’s Forecasting problem.
One weatherman, with 90% accuracy, says it will rain tomorrow.
Another, with 75% accuracy, says it will not.
What is the chance of rain tomorrow?
The method I would probably use is:
P(P1 wins, P2 loses) = 0.9 x 0.25 = 0.225
P(P2 wins, P1 loses) = 0.75 x 0.1 = 0.075
The probability that player 1 wins is 0.225 / (0.225 + 0.075) = 75%
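That joint-probability version scripts just as easily. A quick Python sanity check (variable names are mine):

```python
p1, p2 = 0.90, 0.75

p1_wins = p1 * (1 - p2)  # player 1 wins AND player 2 loses: 0.225
p2_wins = (1 - p1) * p2  # player 2 wins AND player 1 loses: 0.075

# Condition on exactly one of the two winning:
p = p1_wins / (p1_wins + p2_wins)  # ≈ 0.75
```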
Yup, that’s the same calculation I did viewed slightly differently, the algebra is equivalent.
That’s why I’m a little skeptical about the link that Chingon provided giving this method a special name and attribution. I don’t think it deserves that, it’s a standard approach that’s not original or peculiar to betting odds.
Chronos, further to my post #13, I just looked back at the wording of the OP. If the inputs are the win/loss record, your objection is valid. I think the wording of the OP was not ideal.
Bookies don’t make odds based solely on win/loss record, unless it’s an extremely long record, i.e. they do price in the possibility of even the worst player winning occasionally and the best player losing occasionally. It’s never 100% or 0%.
The problem is better stated as: given the (accurate) probability for each of two players winning against average opposition, what is the probability of them winning a matchup against each other?
And, of course, they’re setting the line to try to get equal action and reacting as needed to the bets that have already been placed. The Browns in the Week 17 Browns-Steelers game last year had the lowest odds of winning on 538 I can ever remember seeing: 5% Browns, 95% Steelers. I can’t find the Vegas spread on that game, but it had to be huge. 538’s model had it at Steelers -21, and the Browns beat the spread by only losing by 4.
Suppose there are just three teams: A (the best), B (2nd best), C (everybody else). Model a contest as a fixed result subject to a normal deviation.
We are given that C must overcome 1.260 st devs (10%) to beat A, but only 0.583 st devs (28%) to beat B. A’s advantage over B is therefore 0.677 st devs (1.260 minus 0.583), so B’s chance is about 25%.
According to this simple model.
ETA: Oops. I saw “72%” not “75%” in OP. I guess this means “28%”, and not “25%” is my answer! :eek:
For definiteness, suppose that the game is bowling and that m[sub]A[/sub], m[sub]B[/sub], m[sub]C[/sub] are the mean scores of the three players. In each outing, a player’s score is modeled by a normal distribution, e.g. Score[sub]A[/sub] = m[sub]A[/sub]±σ[sub]A[/sub]
The pretense is that we do have enough information for a clear, single answer. We have two equations
Prob {Score(m[sub]A[/sub], σ[sub]A[/sub]) > Score(m[sub]C[/sub], σ[sub]C[/sub])} = 0.90
Prob {Score(m[sub]B[/sub], σ[sub]B[/sub]) > Score(m[sub]C[/sub], σ[sub]C[/sub])} = 0.75
There seem to be six unknowns. All we need do now is rid ourselves of four unknowns.
But that is easy! Replace (m[sub]A[/sub], m[sub]B[/sub], m[sub]C[/sub]) with its affine transform (t, 1, 0) where t = (m[sub]A[/sub] - m[sub]C[/sub]) / (m[sub]B[/sub] - m[sub]C[/sub]) and assume the (now scaled) σ’s are all the same:
σ[sub]A[/sub] = σ[sub]B[/sub] = σ[sub]C[/sub]
Presto: We’re left with merely two unknowns: t and σ.
Is this model fully valid? Probably not. Assuming all the σ’s are the same may be rash. Replacing the ensemble of other opponents with a single opponent C was doubtful. (Or would the simplicities of the normal distribution lead to the same answer here? I dunno, Boss; my brain stopped.) We’d really want more details about the specific game to guess at a better model.
But given that the OP wants a game-independent, model-independent answer, the above seems to be a straightforward way to reduce to the necessary two equations in two unknowns — and it will be valid for many simple competitions.
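Here’s a minimal sketch of that normal-deviation model in Python (using the standard library’s NormalDist, available in Python 3.8+), under the same equal-σ assumption. Note the σ√2 from differencing two equal normals cancels out when we take the difference of z-scores, so working directly in z-scores is fine:

```python
from statistics import NormalDist

std = NormalDist()  # standard normal: mean 0, sigma 1

# How many (scaled) standard deviations C must overcome:
z_a = std.inv_cdf(0.90)  # vs A: about 1.28
z_b = std.inv_cdf(0.75)  # vs B: about 0.67

# A's advantage over B as a z-score, converted back to a probability:
p_a_beats_b = std.cdf(z_a - z_b)  # about 0.73, so B gets about 27%
```

With the correct 75% input this model gives A about a 73% chance, i.e. B around 27% — close to the "28%, not 25%" correction above, and a bit kinder to B than the odds-ratio answer of 75%/25%.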