In a current thread on a probability problem approaching (heh) “0.999… = 1” caliber, the roll of a die under a particular set of (argued) conditions yields a probability of 1/6 that a particular die will show a given face.
A bit of snark too good to pass up is from Musicat, post #115:
I answer him by saying, “Well, in fact I do,” and after that, “another 10 million, please, because the results haven’t proven anything yet; perhaps the next batch will.” Surely this way of thinking has been named and analyzed since forever.
Can someone tell me what the answer to my cavil is? I suspect it has its own vocabulary in philosophical analogues (black swans?) and in computer science.
Also, can the case be made that the disciplines of probability and statistics divide over how this problem (odds of a single number on a fair die) is approached and answered?
Statistics itself is a discipline that is part of mathematics. It is composed of proofs, just as much as algebra or calculus. I remember being fascinated to learn how the various formulas were derived from first principles. (As an analogy, E=MC[sup]2[/sup] can be derived from F=MA, though that wasn’t obvious before someone first thought of it.)
The reasoning behind sampling is part of a basic statistics course. The philosophical principle is merely that a good sample can be used to represent the whole (“the universe” in stat talk) to fairly accurate precision. The size of the sample regulates that precision. (So does the goodness, but that is engineering rather than math.) That’s why small, good samples turn out to work extremely well for virtually every application, and why increasing the sample size adds less and less information despite the added workload.
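That diminishing-returns point can be seen in a few lines of simulation. This is a minimal sketch (the universe, true proportion, and sample sizes are all made up for illustration): the standard error of a sample mean shrinks only like 1/sqrt(n), so quadrupling the sample merely halves the uncertainty.

```python
import random

random.seed(1)

# Estimate the mean of a Bernoulli "universe" (true proportion 0.5)
# from samples of increasing size.
def sample_mean(n):
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (100, 400, 1600, 6400):
    # Theoretical standard error of the sample mean: sqrt(p*(1-p)/n)
    se = (0.5 * 0.5 / n) ** 0.5
    print(f"n={n:5d}  estimate={sample_mean(n):.3f}  std. error={se:.4f}")
```

Each fourfold jump in n buys only a twofold improvement in precision, which is exactly why small good samples do so much of the work.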
It’s like asking why “0.999~=1” (you left out the tilde, which changes everything). It does so because someone conceived of and proved the concept of a limit, something that has enormous value in multiple fields of math. Statistics are of value because they have been defined and proved in the same way, not because of a philosophy.
Just found this, a 9-page excerpt from a chapter in a History of Mathematics (text)book (http://www.eolss.net/Sample-Chapters/C02/E6-132-37.pdf)
THE HISTORY AND CONCEPT OF MATHEMATICAL PROOF Steven G. Krantz
American Institute of Mathematics, Palo Alto, California 94306 U.S.A.
Keywords: Proof, axiom, postulate, definition, rigor, deduction, intuitionism, computer proof.
The Concept of Proof
What Does a Proof Consist Of?
The Purpose of Proof
The History of Mathematical Proof
5.2. Eudoxus and the Concept of Theorem
5.3. Euclid the Geometer
The Middle Ages
The Golden Age of the Nineteenth Century
Hilbert and the Twentieth Century
8.1. L. E. J. Brouwer and Proof by Contradiction
8.2. Errett Bishop and Constructive Analysis
8.3. Nicolas Bourbaki
9.1. The Difference between Mathematics and Computer Science
9.2. How a Computer Can Search a Set of Axioms for the Statement and Proof of a New Theorem
10.1. Why Proofs are Important
10.2. What Will Be Considered a Proof in 100 Years?
Got my reading cut out for me. Not sure where statistical methods and evidence enter the picture in that list, however.
Most of the time, when people express anything like a distrust of statistics, it’s for some reason having to do with some situation where statistics are being applied by an opponent, and the result is unsatisfactory.
Since I also have the sense that misuse and misapplication of statistics might well be even more common than people lying to get sex, I can understand the reticence.
Perhaps explaining WHY you have concerns or doubts would help us guide you to a more useful resolution than simply trying to further test a particular statistic.
If you want to see what the mathematical theory of statistics looks like, I found this book in PDF form. (Warning: it involves some very high level mathematics. I certainly don’t expect anyone here to read and understand it, as that would involve both considerable math background and considerable study.)
If you want the deep end of the pool from the philosophical end, start here: Epistemology - Wikipedia. IOW, how do we know what we know, and how do we define, develop, and measure our confidence in that knowledge?
Come back in a couple years once you’ve figured it out. Then explain it to me please.
Note the original dice thread is beginning to take a slight turn this way at the bottom of page 7. There are some links there worth reading too.
Gnosis, my friend, gnosis.
Are not arithmetic and certain other kindred arts pure sciences, without regard to practical application, which merely furnish knowledge?
Yes, they are.
But the science possessed by the arts relating to carpentering and to handicraft in general is inherent in their application, and with its aid they create objects which did not previously exist.
To be sure.
In this way, then, divide all science into two arts, calling the one practical, and the other purely intellectual.
Let us assume that all science is one and that these are its two forms.
Plato, The Statesman, 285:d-e
ETA: yeah, “Epistemology” too. (LSL ninja.) But I suggest “gnosis” as defined above as SDGQ watchword.
Unfortunately for Plato, both science and mathematics have pure and applied branches, and both create objects which did not previously exist. They are two sides of a single coin, not merely inseparable but essential to each other’s structural integrity.
For the specific case cited in the OP, there are rigorous methods for calculating the statistical uncertainty on the figures derived from simulation. When I post simulation-based results on the SDMB, I always[sup]*[/sup] include the corresponding statistical uncertainty.
The sorts of questions that appear on the SDMB often involve Bernoulli processes, and the statistical treatment of such is well developed. In the absence of a clearly applicable statistical model, one can always calculate the statistical uncertainty on simulated information through… simulation. But this is computationally expensive, and it’s often better to use brains over computational braun (although braun is certainly a handy approach in the right circumstances).
[sup]*[/sup] “Always” is a scary word. I can’t recall ever not doing it, but maybe in haste I have skipped doing so?
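For the Bernoulli case, the statistical uncertainty Pasta describes comes from the standard binomial error formula. A minimal sketch, assuming a fair-die simulation estimating P(six) (the sample size and seed are arbitrary choices for illustration):

```python
import random

random.seed(7)

# Simulate n die rolls, count sixes, and attach a binomial uncertainty
# to the estimated probability: sigma = sqrt(p_hat * (1 - p_hat) / n).
n = 100_000
k = sum(random.randint(1, 6) == 6 for _ in range(n))
p_hat = k / n
sigma = (p_hat * (1 - p_hat) / n) ** 0.5

print(f"P(six) = {p_hat:.4f} +/- {sigma:.4f}  (true value 1/6 = {1/6:.4f})")
```

With 100,000 rolls the uncertainty is already near ±0.001, so the simulated estimate and the exact 1/6 agree to the quoted precision, which is the answer to the OP’s “another 10 million, please” cavil: the uncertainty tells you in advance how much another 10 million rolls could possibly change.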
Thank you Pasta for answering the exact OP, although I must say I have had fun learning about mathematical limits in their various guises (1 year of calculus in high school, as good as nothing), as suggested in other posts. Unsurprisingly, then, I can’t get far into that article without giving up, but like I said in my addendum to the OP, it’s nice to know it’s there, named.
And, as a nitpick, I, for one, shave with a Braun, as do others, but I don’t think they make that particular kind of appliance you mention for computer simulations.
ETA: on re-read, you cite Bernoulli for “SD-type” questions, but not for “the various” problems in your opening paragraph. There must be many ways to spin that OP cat, you’re saying? And Bernoulli is one of them, the most generally applicable?
Bernoulli events are things like tossing dice (fair or loaded), where each trial is independent of the others and the broad factors influencing each trial are consistent.
An example of something that would NOT be a Bernoulli process would be tracking the results of administering the same political poll night after night to the same people after they’ve watched whatever happened on the news that day.
The poll results will jump around some due to random noise in a Bernoulli-like fashion. But they’ll also be reacting to the ever-changing history and the ever-changing news. Which is where Bernoulli gives up and the related math becomes inapplicable.
For lots of real-world interesting physics stuff and for almost all brain teasers and gambling questions Bernoulli is fully applicable. And those are the sorts of probabilistic Qs we get lots of on SD.
What 538 and the rest are doing over in Elections is something very different. Even though they’re also talking about odds or probabilities of who’s going to win.
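The distinction LSLGuy draws can be sketched in a few lines: a true Bernoulli series has a fixed underlying probability and shows only noise, while a tracking-poll-like series has a drifting probability underneath the noise. Everything here (respondent counts, the drift schedule) is invented for illustration:

```python
import random

random.seed(11)

# Simulate nightly poll results: each night, n_respondents independent
# yes/no answers drawn with that night's underlying probability p.
def nightly_results(p_schedule, n_respondents=1000):
    return [sum(random.random() < p for _ in range(n_respondents)) / n_respondents
            for p in p_schedule]

fixed = nightly_results([0.50] * 10)                                  # Bernoulli: noise only
drifting = nightly_results([0.50 + 0.02 * day for day in range(10)])  # trend + noise

print("fixed p  :", [f"{x:.2f}" for x in fixed])
print("drifting :", [f"{x:.2f}" for x in drifting])
```

The fixed series wobbles around 0.50 within its binomial noise band; the drifting series trends upward underneath the same noise, so treating it as one Bernoulli process would mislead.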
I’m not 100% sure what you’re asking, so I’ll answer something and see if I hit the mark.
Bernoulli process: Some task has two possible outcomes, and you attempt the task over and over, with each attempt independent of (but otherwise identical in setup to) the others. The two possible outcomes could be “I picked a red ball” vs. “I picked a blue ball” (if those are the only two possible outcomes). Sometimes there is one salient outcome and then everything else combined makes up the other outcome, like “I drew an ace from the deck” vs. “I did not draw an ace from the deck”.
The dice question that prompted your thread here is of this type, since the other die is either a six or it isn’t (two possibilities). This is a common format for bar-bet probability questions. (Monty Hall’s door either hides a car or it doesn’t. The other kid either is a boy or she isn’t.)
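The two-outcome framing above fits in a few lines of code. A hypothetical sketch using the ace-vs-not-an-ace example (the trial count is arbitrary): each trial is independent and identically set up, which is all a Bernoulli process requires.

```python
import random

random.seed(3)

# One Bernoulli trial: draw a single card from a fresh 52-card deck and
# record the salient outcome ("ace") vs. everything else ("not an ace").
def drew_ace():
    deck = ["ace"] * 4 + ["other"] * 48
    return random.choice(deck) == "ace"

n = 50_000
p_hat = sum(drew_ace() for _ in range(n)) / n
print(f"estimated P(ace) = {p_hat:.4f}  (exact: 4/52 = {4/52:.4f})")
```

Because the deck is fresh on every trial, the trials are independent; tracking draws from a single shrinking deck would break that independence and fall outside the Bernoulli framework.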