Well, for one thing, it’s because you’re wrong about it.
Here’s one reason: define an algorithm by which you will select a real number.
Let’s say it’s going to be between zero and one. In decimal terms, it’s easy: grab a ten-sided die and roll it, for the first digit. {0…9}. Now roll again for the second digit. Now for the third…
Keep rolling…forever.
i.e., you can’t do it. There is no algorithm to choose a real number. The probability is zero…and the process doesn’t terminate. It’s the same problem.
Sounds like there IS an algorithm to choose a real number: the one you described! It just streams the result to you over time. Which is all you could hope to do, to communicate a real number to anyone: stream information about it to them over time.
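To make that concrete, here's a minimal Python sketch (my own illustration, not from anyone's cites) of the "streaming" idea: the procedure is finitely described and never terminates, and at any moment all you ever hold is a finite prefix of the number.

```python
import random

def stream_digits():
    """Yield an endless stream of decimal digits, one per roll of a ten-sided die."""
    while True:
        yield random.randint(0, 9)  # {0..9}, uniform

# You can only ever observe a finite prefix of the "chosen" real number:
digits = stream_digits()
prefix = [next(digits) for _ in range(10)]
print("0." + "".join(str(d) for d in prefix))  # e.g. 0.4829105573...
```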
If we abandoned the language of probability and spoke only of area, or weight, or what have you, would there still be the squabbles that this thread invokes? Because, mathematically, there’s no difference; the formalism of probability is just the same as for any other such measure. If the thread is about the way mathematicians (e.g., Vi Hart of the OP) use the language of probability, that perspective may be worth noting.
I’m right and have linked sources. If you are certain I’m wrong, link me some authoritative cites on the subject that contradict what I’ve said. Until then, what you are saying is merely opinion.
Anyhow here are some more cites.
Page references are as printed on the page and not the pdf page count.
I’m not scanning my own probability texts and uploading them but I think you see the pattern.
And algorithms for real life processes are **completely irrelevant** for mathematical descriptions. Reality being discrete or continuous has no bearing on the way math works.
It’s not pointless if people are interested in learning about probability. Just because people don’t want to accept textbook definitions of settled math shouldn’t discourage those who have a genuine desire to learn.
The problem occurs, like with the 0.9999…=1 debate and the Monty Hall goat and car problem, when people’s intuition or gut feeling is contradicted by the math. And I’m not trying to pick. It seems to be a pretty common human trait. The goat problem causes all sorts of issues but it’s a simple problem. Pick the other door and have a 2/3 chance instead of a 1/3 chance.
Yes, it is; in fact, whether a given algorithm terminates or not is one of the most famous questions in computer science—the so-called halting problem. Turing proved that this problem is undecidable, in the sense that there is no algorithm that can, for any given algorithm, tell whether the latter terminates. Algorithms need to be finitely defined, but they don’t need to terminate in finite time—just imagine an algorithm that runs into an infinite loop (such as the old joke about the computer scientist who gets trapped forever in the shower because the instructions on the shampoo bottle read ‘lather, rinse, repeat’).
This gives rise to the notion of computable real number: a real number is computable, if there exists an algorithm that successively enumerates all its digits. Most real numbers, however, aren’t computable: there are only as many different algorithms as there are natural numbers, but vastly more reals. In fact, the probability of randomly choosing a computable number—if you have some method of choosing arbitrary real numbers—is 0!
(This assumes that an algorithm is something deterministic, i.e. whenever you run the same algorithm, you get the same number out; if you want to introduce an element of random choice—which you well might—, then any real number can be produced. But then, you’re really just outputting the random choice you had to make in the first place, so that doesn’t get you any further regarding ‘choosing an arbitrary real number’.)
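To make “an algorithm that successively enumerates all its digits” concrete, here is a toy Python sketch (my own example, standard library only) that streams the decimal digits of 1/7, a computable number. Note that it is finitely defined but never halts, which also illustrates the earlier point about algorithms that run forever.

```python
def digits_of_fraction(p, q):
    """Successively yield the decimal digits of p/q (with 0 < p < q) by long division."""
    remainder = p
    while True:
        remainder *= 10
        yield remainder // q      # next decimal digit
        remainder %= q

one_seventh = digits_of_fraction(1, 7)
print([next(one_seventh) for _ in range(12)])  # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```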
Doesn’t the number of times a dice throws up the same number depend on how random the throwing method is? Presumably I could make a machine that could throw the dice exactly the same way each time?
There will still be squabbles as long as the math uses this fictional world with infinitely dividable areas or that allows for infinite numbers of rolls and such. The idea just doesn’t make sense if you try to come at it from the world of the finite.
There are a finite number of particles in that dart board. There are a finite number of rolls that can be made. A probability of 0 not meaning impossible is really just a paradox of infinities that mathematicians have defined away to be able to continue to make useful predictions. In that regard, it’s not really different than saying 1 + 2 + 3 + 4… = -1/12.
It’s just the line/points paradox all over again. Points are infinitely small, yet they somehow can combine to make a line segment. They have zero measure, yet can be combined together to make a measure. 0+0+0+0…=1, essentially.
But, in the real world, that line segment is made up of a finite number of atoms, which are made up of a finite number of subatomic particles. And those points have a measure–it’s just really small.
And I would say that the probability of the atoms of the dart hitting a certain set of atoms on the board has an actual probability greater than 0, just like any result of a finite number of dice rolls will have a probability greater than 0 of occurring.
In other words, infinity is the problem, not anything else.
What is the probability for a thrown dart to hit the point that it hit? That must be 100%, because no matter how many times I throw the dart, that statement will always be true. What was the prior probability of hitting that point? It cannot be defined. The prior probability would be the probability I would have calculated if, before throwing the dart, I had asked what the probability was of hitting that particular point. But I had no way of asking that question before I threw the dart, because I had no way of specifying that particular point.
As a matter of convention, by definition an algorithm terminates in finitely many steps. Merriam-Webster will confirm this! Another authority is Donald Ervin Knuth, professor emeritus of computer science, who shows “Finiteness: Terminates after a finite number of steps” as the first of five definitional properties of algorithm.
So you’re dividing by infinity? I honestly thought one could only divide by real numbers. I’m not saying your logic is wrong, but I am saying your initial premise is unrealistic. This way of calculating probability gives meaningless results. We cannot throw the dart an infinite number of times.
Thanx for the correction, a point is indeed a 0-dimensional object, my bad there. I was getting upset around then and sometimes [sic] I get hasty when I’m upset.
Rolling an ideal die forever, and with true randomness, the probability of getting the same number every time quickly drops to 0, despite the fact that on each individual roll the chance of matching is 1 in 6.
And that’s what this is really about, true randomness and probability over infinite rolls. But true randomness is “clumpy”. If you were to give each number a shade of gray, then map the results in a grid, it’d look something like this.
Now, imagine that same image all white and infinite in size. Nature just doesn’t work that way.
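For anyone who wants to see that “clumpiness” for themselves, here is a quick Python sketch (my own, standard library only, grid size picked arbitrarily) that rolls a fair die into a grid and reports the longest run of repeats:

```python
import random

SIZE = 16
grid = [[random.randint(1, 6) for _ in range(SIZE)] for _ in range(SIZE)]

for row in grid:
    print("".join(str(n) for n in row))

# Count the longest run of identical values reading row by row --
# even in a small grid you usually get runs of 3 or 4.
flat = [n for row in grid for n in row]
longest, current = 1, 1
for prev, nxt in zip(flat, flat[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)
print("longest run of the same number:", longest)
```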
If it is impossible to roll an infinite number of, say, sixes on a six-sided dice (given adamantium dice, Remo Williams’s right wrist and an eternity to waste), what precisely is it that forces the dice to not roll that last six?
Does it magically turn into a five-sided dice? Does the number six turn into an extra number four? Does an Act O’ Ghod make the dice explode?
Right, except I think we have to be careful about what we mean by “the probability quickly drops to 0,” since I think that’s what the OP was asking about.
In the idealized case you describe:
Does the probability equal 0 after any finite number of rolls? No.
Does the probability become very very close to 0 after only a relatively few rolls? Yes (for most reasonable values of “very very close” and “relatively few”).
Does the probability approach 0 as a limit as the number of trials approaches infinity? Yes. Chisquirrel’s analysis in Post #65 is essentially correct.
Does the probability equal 0 after an infinite number of rolls? What do you mean by “after an infinite number of rolls”? It can be an informal shorthand way of saying “the limit as the number n of trials approaches infinity”—in which case, see above. If that’s not what you mean, then the answer to the question, or even whether the question makes sense, depends on what you do mean.
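To put numbers on “very very close but never equal”: the chance that a fair die repeats its first result on every one of the next n rolls is (1/6)^n, which shrinks fast but stays positive for every finite n. A quick Python check (my own sketch, exact arithmetic via fractions):

```python
from fractions import Fraction

for n in (1, 5, 10, 50, 100):
    p = Fraction(1, 6) ** n          # exact probability of n repeats in a row
    print(n, float(p), p == 0)       # tiny, but never exactly 0 for any finite n
```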
100% agreed. I concede using the word quickly may not be appropriate.
Does the probability of rolling the same number decay exponentially toward 0 in the real world (yet remain asymptotic, never actually reaching 0, unless you’re doing pure math and accounting for infinity)?
As far as I know, ‘algorithm’ isn’t really a completely formally defined notion. That particular definition (which I wasn’t aware of, so thanks for pointing it out) is a little odd, though: it means that in practice, you can’t tell in general whether something is an algorithm, since you can’t tell whether the procedure terminates. Additionally, if something is limited to executing only terminating procedures, it’s computationally weaker than a Turing machine (of course, in practice, all computers are effectively only finite state machines, so this might not mean much).
It also has the amusing consequence that a procedure that tries to find an even number that isn’t the sum of two primes is an algorithm only if the Goldbach conjecture is false, and similar cases. What good is a definition that in general doesn’t allow you to decide whether some given procedure is an algorithm?
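To spell out the Goldbach example as code (a toy Python sketch of my own): the procedure below is perfectly well-defined, yet under the termination requirement it counts as an “algorithm” only if there really is an even number that isn’t a sum of two primes, i.e. only if the conjecture is false.

```python
def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def first_goldbach_counterexample():
    """Search even numbers >= 4 for one that is NOT a sum of two primes.
    Halts iff the Goldbach conjecture is false -- nobody knows which."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n
        n += 2
```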
If you read the cites I linked you will see that none of that is true. I even referenced the relevant pages.
Just because something happened doesn’t change its probability. Flip a coin. Get tails? That’s a 1/2 chance. It’s 1/2 if it happens. It’s 1/2 if it doesn’t happen. Flip a coin twice. Get the sequence TH. The probability of that sequence is 1/4 regardless of that sequence occurring or not. The probability does NOT change.
You don’t have to specify the point by an arbitrary name to do the math. Hell, call it 0’, call it x, call it potatoe. The label is 100% irrelevant. It’s still one point out of an infinite set of points, which according to every source on probability that addresses this question has probability 0 AND is entirely possible. I seriously doubt that the folks at MIT and the other institutions whose texts I linked don’t know their math. This isn’t a political debate where I’m citing a women’s studies professor on a subjective matter. This is solved math and as close to a fact as one can get.
I know you have a better math education than I do. I can’t do tensor calculus. Never got around to learning it. But this probability 0 stuff is high school calculus. And it’s not even challenging. The texts are clear. But if you are correct, provide a contrary authoritative cite.
Quite to the contrary, algorithms are designed by professional computer programmers, and their termination in finite time is an essential property. If the programmer is uncertain, he should force eventual termination, e.g. via timeouts.
Note that finite algorithms solve the same problems that Turing machines do by definition: A Turing machine’s answer is read when it halts.
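For what it’s worth, “force eventual termination” typically looks something like this in practice (a generic Python sketch, not tied to anything specific in this thread): wrap the open-ended search in a step or time budget so the procedure is guaranteed to halt, at the cost of sometimes answering “don’t know.”

```python
import time

def bounded_search(predicate, max_steps=1_000_000, max_seconds=5.0):
    """Search n = 0, 1, 2, ... for a value satisfying `predicate`,
    but give up (and halt) once a step or time budget is exhausted."""
    deadline = time.monotonic() + max_seconds
    for n in range(max_steps):
        if predicate(n):
            return n                      # found an answer
        if time.monotonic() > deadline:
            break
    return None                           # budget exhausted: "don't know"

# Example: find a multiple of 7 greater than 100 (trivially succeeds).
print(bounded_search(lambda n: n > 100 and n % 7 == 0))  # 105
```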