Games of chance have existed for thousands of years, well before people understood statistics and probability as a mathematical science. And back then, these games were played with imperfect materials that naturally carried biases toward certain outcomes. For example, playing dice with irregular bones would not be truly random: the shape and weight distribution of the bone would make certain outcomes more common than others. So when playing these ancient games of chance, did the players understand the odds in some sense when placing their bets? Did they realize that one side comes up much more often, so it would be better to bet on that outcome? Or did they think all outcomes were equally likely, and it was just up to the gods as to what happened? And even if a savvy player figured out the biases for one set of sheep bones, the next game might use different bones with different biases.
This article is primarily about Thomas Bayes, an important early figure in the development of probability theory. However, it contains references to other early pioneers, so it may provide a starting point for hunting out useful writing on the subject.
In that article, it says:
It’s surprising to me that ancient mathematicians would not have come up with something like probability tables for gambling games. They were so observant of patterns in everything from the Earth to the stars that it seems natural for them to have also looked for patterns in how some bones rolled.
It’s also strange considering modern people look for patterns even when they should know they aren’t there. Like, many people in Las Vegas think they can figure out the next roll of the dice or roulette wheel based on past results. So even in the case where the outcome is almost perfectly random, people try to make predictive guesses that the next outcome will be X because the past outcome was Y. While that kind of logic is faulty in Las Vegas, it’s actually pretty relevant in the ancient world. A hand-carved die will likely have preferred outcomes, so past outcomes would be relevant to future results.
I love the story of William Sealy Gosset. He was a statistician at the Guinness Brewery in Dublin around 110 years ago. He was using the Gaussian distribution to select the best yielding varieties of barley, but the predictions never seemed to match the results when the sample size was very small. So he came up with a new distribution - the Student’s t-distribution - that works well with small sample sizes.
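Gosset's problem is easy to reproduce. Here's a rough sketch (my own illustration, not Gosset's actual method): draw many tiny samples from a normal distribution, compute the usual t-statistic for each, and check how often it lands outside the ±1.96 cutoff that the Gaussian says should be exceeded only 5% of the time.

```python
import math
import random

random.seed(1)

def t_statistic(sample, mu=0.0):
    """t = (mean - mu) / (s / sqrt(n)), with s the sample standard deviation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)

# Draw many tiny samples (n = 4) from a standard normal and see how often
# |t| exceeds 1.96, the Gaussian 5% cutoff. With only 3 degrees of freedom
# the true rate is roughly 14%, which is why Gosset needed a new distribution.
trials = 50_000
n = 4
exceed = sum(abs(t_statistic([random.gauss(0, 1) for _ in range(n)])) > 1.96
             for _ in range(trials))
print(exceed / trials)
```

The t-distribution's heavier tails account for the extra uncertainty in estimating the standard deviation from so few data points.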
An early form of dice were the knucklebones of animals, called tali or astragali. These are irregular cuboids, with four flat surfaces and two rounded ones. When you throw them they don’t land on the rounded sides, only on the flat ones. The irregular shape meant a bone was more likely to land on some sides than others. The ancients knew this, and assigned different point values to the sides - 1, 3, 4, or 6 points - with the less likely results scoring more points.
In many gambling games, some bets are riskier than others. If you bet on them, you are more likely to lose, but the potential prize is higher. This was true in the ancient world just as much as the modern one.
How close were those number assignments to their true probability of happening? That is, with a 14 point total, did the side with 1 come up 1/14th of the time, the 3 come up 3/14th of the time, etc?
I doubt they did 100,000 trials to determine the exact proportions. But after a few hundred throws, it’s a good ballpark estimate.
And, no doubt, one bone would throw a 3 slightly more often than another.
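For what it's worth, this is easy to play with in simulation. The sketch below assumes (purely hypothetically) a bone whose side probabilities are the reverse of the point values - the side scoring 1 lands 6/14 of the time, the side scoring 6 only 1/14 - and shows that a few hundred throws already give a serviceable ballpark estimate:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical astragalus: four flat sides scoring 1, 3, 4, 6, with the
# rarer sides worth more points. These exact weights are an assumption
# for illustration, not measured data.
sides = [1, 3, 4, 6]
probs = [6/14, 4/14, 3/14, 1/14]

def throw_bone():
    return random.choices(sides, weights=probs)[0]

throws = 500
counts = Counter(throw_bone() for _ in range(throws))
for side in sides:
    print(side, counts[side] / throws)  # a few hundred throws: rough but usable
```

A different bone would just mean different weights, which is exactly the point about every new set of bones changing the odds.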
I hate to use a cliche like “paradigm shift,” but that’s pretty much what it was. The classical school of mathematics was based around the idea of proofs. Mathematics was about either proving something true or proving it false. The underlying premise was that if something lacked that kind of precision then it wasn’t really mathematics.
Look at the way early mathematicians had problems with irrational numbers. The idea that a number might not be expressible as an exact ratio of whole numbers was met with a lot of skepticism.
Probabilities faced the same problem. Mathematics was when you added up six and seven and got the sum of thirteen. Probability was about when you rolled two dice and asked if the resulting sum was more likely to be a six or a seven. A classical mathematician might tell you that the sum could be either answer - or neither - and therefore it wasn’t a mathematical problem.
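The six-or-seven question is one of the first things probability theory settled: with two fair dice there are 36 equally likely outcomes, and a quick enumeration shows seven is the likelier sum.

```python
from itertools import product

# Count how many of the 36 equally likely (die1, die2) outcomes give each sum.
ways = {target: sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == target)
        for target in (6, 7)}
print(ways)          # {6: 5, 7: 6}
print(ways[6] / 36)  # 5/36, about 0.139
print(ways[7] / 36)  # 6/36, about 0.167
```

Seven wins because it can be made six ways (1+6 through 6+1), while six can only be made five ways.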
You also have to understand that modern treatments of probability model the world as impartial physical systems which interact in complex ways to produce definite results. In the ancient world, instruments of chance were seen as channels for divine intervention.
When we roll a die, we imagine a series of atoms jostled by physical forces to eventually produce the result. There’s an understanding that rolling the die many times will produce a predictable distribution of outcomes. For the ancients, it was a god (or gods) deciding the outcome via some inscrutable will or desire. Who knows how past rolls of the dice affect future rolls? There wasn’t the same conceptual framework in which a theory of probability even makes sense.
It doesn’t matter whether they do a few hundred trials or 100,000 trials (or even a million); they’re still not going to get the “exact” proportions. Though as you said, a distribution based on a few hundred might be good enough for gaming purposes.
Even with modern knowledge of probability theory, most people are terrible gamblers. Bad probability estimates (by modern standards) wouldn’t make much difference to a lot of people today, much less several centuries ago.
Or, for most pre-modern people, a theoretically predictable but pragmatically unknowable result of a huge number of complicated planetary and other influences interpreted by astrology and other systems of divination. It’s no coincidence that rigorous mathematical study of probability was starting about the same time that scientific acceptance of astrological theories was definitively ending.
(One reason sometimes given for slowness in developing simple math for dice games is that early dice were so irregular that a (1/6, 1/6, 1/6, 1/6, 1/6, 1/6) model was useless!)
I have also always been surprised at how slow the development of probability theory was. The difficulty of probability continues to the present: many very competent at algebra or trig have difficulty with simple probability problems. (For some reason, I’m the opposite: no good at trig but I can usually tackle simple problems in probability.) Jean d’Alembert is considered one of the greatest 18th-century mathematicians, but according to Wikipedia he thought flipping a coin to heads was more likely after a string of tails.
Historians search almost in vain for evidence of probability theory prior to Cardano. Three early hints at probability are:
(1) Combinatorial enumerations. Counting is essential to much probability work, and early mathematicians could do combinatorics. Pascal’s triangle, for example, was known long before Pascal — one of the early discoverers was the poet Omar Khayyam!
(2) Arithmetic mean, possibly weighted. Calculating expected values is important in statistics. The ancient Greeks wrote about the arithmetic mean of two values, but weighted means and taking the mean of three or more values seem to be rather recent ideas. Ancient Hindu mathematicians apparently did use weighted means of multiple values, e.g. to estimate volumes.
(3) Frequency analysis for code-breaking. Code-breaking was an important practical task in the ancient world. The 9th-century Iraqi polymath Al-Kindi was able to defeat substitution ciphers using the same probabilistic analysis Edgar Allan Poe describes in “The Gold-Bug.” Al-Kindi’s work must be little-known since the biography linked above doesn’t mention it, though that MacTutor site is usually excellent.
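To make the frequency-analysis idea concrete, here's a toy sketch (using a Caesar shift as the simplest stand-in for a substitution cipher, and my own sample sentence): the plaintext's letter frequencies pass through the cipher intact, so the most common ciphertext symbol usually points at a common plaintext letter like ‘e’.

```python
from collections import Counter

def shift_cipher(text, k):
    """Caesar shift - the simplest substitution cipher (illustration only)."""
    return "".join(chr((ord(c) - 97 + k) % 26 + 97) if c.isalpha() else c
                   for c in text.lower())

plaintext = "the quick brown fox jumps over the lazy dog and the dog sleeps"
ciphertext = shift_cipher(plaintext, 7)

# Letter frequencies survive a monoalphabetic substitution, so the most
# common ciphertext symbol maps back to a common plaintext letter.
freq = Counter(c for c in ciphertext if c.isalpha())
most_common_symbol = freq.most_common(1)[0][0]
print(most_common_symbol)  # 'l', which is 'e' shifted by 7
```

With a general substitution cipher (not just a shift) the same counting attack works; it only needs enough ciphertext for the frequencies to show through.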
Any other “precursors” of probability or statistics that I’ve missed?
To set it in context, the incident which is generally credited with leading to the development of probability theory occurred in 1654. There was a gambling game being played in France where you rolled a pair of dice twenty-four times. If a pair of sixes came up on any of the rolls, you doubled your bet. If it didn’t, you lost.
This is a pretty elementary probability question. But in 1654, nobody knew how to approach it to figure out if the odds were fair or not. (Spoiler: they’re not.) But at the time, it was generally accepted that even odds was a fair payout.
A nobleman, the Chevalier de Méré, wanted to confirm the odds. He contacted two of the best mathematicians of the era, Pierre de Fermat and Blaise Pascal. Fermat and Pascal began a correspondence over this problem which led to the development of probability theory. For the first time, it became possible to figure out the odds.
Pretty close to fair odds, I’d say: 50.9/49.1, by my calculation. About as close to even money as you’re going to get with something that complicated.
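The arithmetic behind that figure: the chance of avoiding a double six on one throw of two dice is 35/36, so over 24 independent throws the bettor's winning chance is the complement of (35/36)^24.

```python
# Chance of at least one double six in 24 throws of a pair of fair dice.
p_win = 1 - (35 / 36) ** 24
print(round(p_win, 4))      # 0.4914 - the bettor loses slightly more often than not
print(round(1 - p_win, 4))  # 0.5086 - the "50.9/49.1" split
```

A small edge, but applied over many games it's enough to drain a gambler's purse.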
But doing more gets you closer, according to the Law of Large Numbers (which is interesting because it’s an example of a proved theorem about probability).
If this wasn’t true, no one would ever buy a lottery ticket. Gamblers fool themselves all the time even though they ‘say’ they understand that the house always wins.
Your calculations are correct, but the odds are far enough from 50/50 that Chevalier de Méré was losing money consistently enough that he asked Pascal and Fermat why.
Sure, but not exactly even, which makes all the difference in this case.
By the accepted ‘table’ (not really a table) of the time, the “problem of points” had a given solution. Fermat and Pascal were each independently able to show it was wrong (Pascal using his eponymous Triangle for it) and came up with the correct solution - and more or less the field of probability theory.
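For a concrete taste of the problem of points (with numbers I've chosen for illustration - the thread doesn't give a specific game): suppose rounds are fair coin flips, player A needs 2 more points to take the stakes, and player B needs 3. Play can last at most 4 more rounds, and A wins overall exactly when A takes at least 2 of them - a Pascal's-triangle count.

```python
from math import comb

def fair_share(a_needs, b_needs):
    """Fraction of the stakes player A deserves, assuming fair 50/50 rounds.

    Imagine playing out all n = a_needs + b_needs - 1 remaining rounds;
    A wins the match iff A wins at least a_needs of them. Counting those
    outcomes uses binomial coefficients - a row of Pascal's triangle.
    """
    n = a_needs + b_needs - 1
    favorable = sum(comb(n, k) for k in range(a_needs, n + 1))
    return favorable / 2 ** n

print(fair_share(2, 3))  # 0.6875, i.e. 11/16 of the stakes to A
```

The pre-Pascal "accepted" answers divided the pot by points already scored, which this calculation shows is not the same thing at all.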
Not really true.
I do know the odds and still buy the occasional lottery ticket - without expecting to actually win, of course. It’s a terrible investment strategy but the occasional dollar spent on it is fine entertainment spending (for me) and much less than I spend on other discretionary purchases. Better yet - it has the highest probability of making me $100M, not that the probability is all that high in the first place.
To the extent it is true, it is valid for people who expect it to produce reliable income or spend food/rent money on it.
I just realized I probably listed the odds backwards earlier. Since the rarer sides were worth the most points, the probabilities should run the other way: roughly a 6/14 chance of a 1, a 4/14 chance of a 3, a 3/14 chance of a 4, and a 1/14 chance of a 6.
Those bones sound like the modern game Pass the Pigs. In this game, you roll plastic pigs and score different points depending on how the pigs end up (on their legs, nose, side, etc.).