Can you roll a die and get the same number an infinite number of times?

That one right before the infinitieth roll, of course. :smiley:

For what it’s worth, here’s how I use the term “algorithm”:

An algorithm is just a computer program (for any unthinking mechanical rule-following device, whether or not it’s one of these ubiquitous electronic ones). If you like, you can identify these with Turing machines or what have you, but it doesn’t matter.

The output of an algorithm is, well, whatever it outputs over the course of its execution. Of course, any particular bit of data output is output at some finite time. If the algorithm is supposed to give you 7 bits of data as output, at some finite point, it has emitted each of those 7 bits, and it is done. Thus, if the algorithm is supposed to produce only finitely much output, then it must finish doing so in finite time.

But perhaps the algorithm is supposed to output a whole infinite stream of output; e.g., an algorithm spitting out the digits of π, or describing the sequence of primes in order, or such things. These are, after all, natural things to write code to do. Then, by design, there will be no finite point in time at which the algorithm is done; it will run infinitely. But any particular finite portion of the output (e.g., the first 80 digits of π, or the first 25 primes, or what have you) will have been output in finite time.

The algorithm runs infinitely because the amount of output it is designed to emit is infinite, but nonetheless, information keeps streaming out of it, and combining up all the information streamed out of it over all time (each particular bit of output coming at some particular finite time after the start) gives the total desired infinite output.
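
For concreteness, here's a minimal sketch of the kind of never-halting stream I mean (Python is my choice here, and the function names are mine, not anything from upthread):

```python
import itertools

def primes():
    """Stream the primes in order, forever."""
    n = 2
    while True:
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            yield n  # each individual prime is emitted at some finite time
        n += 1

# The generator never finishes, but any finite prefix arrives in finite time:
print(list(itertools.islice(primes(), 25)))  # the first 25 primes
```

Run without the `islice`, it just keeps streaming, exactly as described above.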

Does that make sense? This isn’t an uncommon way to think of algorithms, and indeed, of computation with infinitely precise real numbers more specifically (in terms of procedures that operate on infinite streams of input data and turn them into infinite streams of output data). If you want to use the word “algorithm” more restrictively, well, knock yourself out, but I don’t see the value in quibbling over such things.

Your point is clear, as I happily stipulated (with “As a matter of convention, by definition …”) in my first post on the topic. The fact is, however, that the term “algorithm” is defined in a specific way in computer science.

If you want to quibble that you would have defined it differently, well then … knock yourself out. :smiley:

So what if I want to see whether there’s a counterexample to the Goldbach conjecture? I could well set up a computer to just run through all even integers and see whether they’re expressible as the sum of two primes; if one isn’t, it outputs that number. If there is a counterexample, the program will eventually halt; if not, it won’t. This strikes me as a perfectly sensible thing to have a computer do, and to have it only implement an ‘algorithm’ if it indeed terminates seems pointless semantics to me.
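
In code, that search might look something like this (a hedged sketch in Python; the helpers and their names are mine):

```python
def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def goldbach_search():
    """Halt iff some even number > 2 is not a sum of two primes."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            print(n)  # a counterexample to Goldbach's conjecture
            return
        n += 2
```

Whether this ever halts is exactly the open question; calling it an ‘algorithm’ only retroactively, once it happens to terminate, seems odd.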

Sure, but there are problems where you might not know, in advance, whether the TM halts; if you enforce some timeout limit on the algorithm, then you run the risk that you just never find the halting solution, even though one exists, because you haven’t waited long enough. Hence, algorithms that provably terminate (either by finding some actual proof of termination, or by implementing a timeout) do not compute all that can be computed via TMs.
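
The asymmetry is easy to see in a sketch (Python again; `step` is a stand-in I'm assuming for one machine transition, with `None` meaning "halted"):

```python
def run_with_budget(step, state, max_steps):
    """Run a machine for at most max_steps transitions."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:   # the machine halted within the budget
            return "halted"
    return "unknown"        # it might still halt just past the budget
```

That "unknown" is the whole problem: no budget, however large, turns it into "never halts".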

Again, your point is clear. I often code infinite loops myself, planning to strike Control-C when it’s time for bed or such. I don’t consider the semantic issue of much interest one way or the other.

I merely decided to provide the actual definition of algorithm (according to Webster, Knuth, IIRC Turing, and others) reflexively, since a non-standard definition had been given. (Usually I preface such posts with “Nitpick” — I apologize for omitting it in this case. Would that have helped?)

ETA: Note that any algorithm operating in finite memory will terminate or repeat — i.e., NOT discover any Goldbach counterexamples beyond some bound. 2[sup]N[/sup] is a large number of states, but it is finite.
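
That's just the pigeonhole principle; here's the shape of the argument as code (Python, with the same hypothetical `step`/`state` convention as above, states assumed hashable):

```python
def fate_in_finite_memory(step, state):
    """With finitely many possible states, a run must halt or revisit a state."""
    seen = set()
    while state is not None:          # None = halted
        if state in seen:
            return "repeats forever"  # a revisited state means a cycle: no new output
        seen.add(state)
        state = step(state)
    return "terminates"
```

With N bits of memory there are at most 2[sup]N[/sup] states, so one of the two branches is reached within 2[sup]N[/sup] steps.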

I don’t really think “algorithm” is so highly-codified among computer scientists in practice as to exclude algorithms unendingly listing out the digits of π and such things. (Even if Knuth happened to say it at one point in one context). I think you’ll find, in practice, that computer scientists are completely happy to use the word “algorithm” more broadly.

But I agree that this is all uninteresting (or at least unimportant) nitpicking and so, yeah, whatever, I don’t have a good way to end this post, I’ll just trail off…

Yet another infinite posting.

I certainly agree that this inane subthread needs to stop.

But the snarky and ignorant “Even if Knuth happened to say it at one point in one context” riles me. :slight_smile: How many cites would I need to demonstrate that algorithms are defined as finite before you’d retract this? (Not that I’d waste my time that way, except on a (large!?) bet.)

Retract what? The claim that computer scientists often use “algorithm” to describe even unending computational processes?

I’m perfectly confident in my understanding of how computer scientists frequently speak. I’m around academic computer scientists all the time, I have a CS degree myself, my advisor’s a computer scientist (with the same number of Turing* Awards as Knuth…). I’d even call myself a computer scientist, in certain moods or contexts.

[Not, mind you, that computer scientists have some monopoly over the word “algorithm”; it was an English word centuries before anyone had ever uttered the phrase “computer science”. (Yes, it originated as a mathematical word, but I’m perfectly confident in my understanding of how mathematicians speak as well…)]

I’m not denying that finiteness plays a role in distinguishing the algorithmic from the non-algorithmic in some ways. E.g., we would not say “Check for each natural whether you can find a larger pair of twin primes, and then, after all those infinitely many checks of infinite size each, output either ‘There are unboundedly many twin primes’ or ‘The largest twin prime is…’ accordingly” describes an algorithm; none of the output it purports to yield could be produced at any finite stage. It would not be a process that could be carried out by an unthinking machine.

But it would be completely uncontroversial to describe standard techniques for computing the digits of π, or listing the prime numbers, or streaming out the Fibonacci series as “algorithms”, even if run without end. I stand by that. I have complete confidence in that.

[*: Speaking of Turing, incidentally, as he was invoked before, it may be instructive to look at the paper where Turing introduced his famous Turing machines. Turing does not use the word “algorithm”, speaking instead of “computation”, “general processes”, etc. Note that the primary focus of the paper is on computations which run infinitely, printing out all the digits of some number to be calculated! Turing describes this as “calculable by finite means”, but the “finite” here is not in reference to the total running time of an algorithm, but rather, to the finiteness of its description, the resources involved in each individual step, etc.]

I think this is the problem. The probability of choosing any particular real number from a finite section of the number line is not zero, but is an infinitesimal with a limit of zero. No matter how small a positive number I might pick, you can always pick another, smaller positive number that better describes the probability of picking the number.

The probability cannot be equal to zero, because then, when you sum over the entire line segment, you still get zero. 0 + 0 = 0 in all cases (trivial proof by induction). But when you add up all the infinitesimals, you get 1.0, the probability of choosing some number.

The error being made is dividing by infinity and claiming the result is zero, when, in fact, it merely has a limit of zero. The fundamental theorem of calculus applies here.

That makes sense, and I’ll buy it. I’ve certainly written programs that didn’t end organically, but had to be interrupted, and they did exactly as you describe: output data to a sequential file.

I’ll buy that, too! This would appear to be one of those cases where a word can be used loosely, or very rigidly (O.J. Simpson was not a murderer, but a murder was committed). Also, I have too much respect for Donald Knuth to dare disagree with him!

Note also, interestingly enough, that in his paper, Turing defines a computable function from (non-negative) integers to integers as one for which a computational process can be given which outputs the infinite stream “f(0), f(1), f(2), …” (where the numerical values are encoded in unary by that many 0 symbols, and the breaks between them encoded by single 1 symbols). So, even the most basic objects of computability theory, Turing originally conceives of not in terms of machines which take in an input, do some work, and then halt with output, but rather, in terms of machines which unendingly do work and stream out more output.
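
If it helps to see that encoding concretely, here's a toy rendering (Python; my own reading of the convention just described, not Turing's actual machine tables):

```python
import itertools

def turing_stream(f):
    """Emit f(0), f(1), f(2), ...: each f(n) in unary as that many 0s, then a 1 as the break."""
    n = 0
    while True:
        yield "0" * f(n) + "1"
        n += 1

# For f(n) = n, the output tape begins 1 01 001 0001 ...
print("".join(itertools.islice(turing_stream(lambda n: n), 5)))
```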

Now, for many purposes, I actually don’t think this is the best way to think about computable functions on the integers. But, it’s an illustration again that, when seeking to codify what computable processes were, Turing did not see any need to exclude ongoing endless ones (and, indeed, focused on them specifically).

Well, I’m not nearly as knowledgeable on the subject as Indistinguishable, nor as sure of my grasp of the lingo, but I do know that it’s often pointed out that there is no uniquely agreed-upon formalization of the notion of algorithm; see, e.g., the Wikipedia page on algorithm characterizations, which points out that

[QUOTE=wikipedia]
Algorithm does not have a generally accepted formal definition.
[/QUOTE]

Anyway, to try and get back to the OP: perhaps it helps to think about it as follows. Say we have some method of throwing a die infinitely many times (if we don’t, then of course the problem doesn’t present itself). You can view each result of an infinite set of trials as an infinite string of the numbers between one and six. Consider the set of all such infinite strings: among them will be, e.g., the string 666…, as well as 111…, and so on; thus, those are perfectly legitimate results of the die-throwing process. However, the probability of drawing these strings must be zero: the probability is at most (1/6)[sup]n[/sup] for every n (the chance that just the first n throws come out right), so it is smaller than every positive real number, and the only non-negative value smaller than every positive real is 0.
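
To put numbers on that shrinking bound (a quick sanity check, nothing more):

```python
# P(first n throws are all 6) = (1/6)**n, an upper bound on P(666...)
for n in (1, 10, 50, 100):
    print(n, (1 / 6) ** n)
# The bound falls below any positive real you care to name, so P(666...) can only be 0.
```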

In fact, the probability of any particular series of throws—not just the ‘regular’ ones—is likewise zero. So if we expect to get any particular result at all—if we suppose that the task is possible—then we must accept things with probability 0 happening. Whether you take this to mean that the task is impossible (which it seems to be, for a variety of physical reasons, the first one being that nobody’s got time for that), or whether you take this to mean that probability 0 events may occur, is probably more a matter of taste than anything else.

You didn’t read any of those cites, did you? It is 0. A point has no width. The area under the curve over a single point is height * width, and the width of a point is 0. These are mathematical definitions.

Impossible is something that doesn’t exist in the set of possibilities. Probability 0 is something that exists in the set of possibilities but is finite in number compared to the infinite set. A continuous geometric shape, be it a line, line segment, ray, square, cube, plane, helix, etc., is a set of infinitely many points. Each point is zero-dimensional: no width, no length, no dimension. A point is not an infinitesimal.

Anyways, again page 55 of this site http://www.math.uiuc.edu/~r-ash/BPT/BPT.pdf has a lovely equation.

Basically it starts with P{R = c} = ∫ f[sub]R[/sub](x) dx, with limits of integration c and c. That is an integral over a single point, which for just about any function I can think of gives you 0. It’s in multiple sources that this is how the math is done. Do you have any contradictory cites? No one has linked any contradictory cites. It’s all been intuition and no facts.
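
You can check that with SymPy if you like (my choice of density here, the standard normal, is just to have something concrete to integrate):

```python
import sympy as sp

x, c = sp.symbols("x c")
f = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)  # standard normal density
print(sp.integrate(f, (x, c, c)))           # -> 0: the integral over a single point vanishes
```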

It’s funny that fact-based questions that have authoritative resources and mathematical proofs are still controversial. It’d be like answering “What’s the capital of the US?” with “NYC” based on hunches and intuitions, and then being stubborn when someone linked several atlases.

This man gets it, mostly. The last sentence brings physical reality unnecessarily into math.

How do you add up a bunch of zeros and get one?

The width of a point has a limit of zero, but is not zero. Read up on real topology and neighborhoods. Re-read the fundamental theorem of calculus. If points had “zero” width, calculus wouldn’t work; you couldn’t add up the dxs to obtain a non-zero result.

Any probability that actually equals zero is impossible. Probabilities with limits of zero are what you are trying to describe: they’re so close to zero that God himself couldn’t tell the difference, but they do add up to the entire dart-board. A bunch of zeroes – even Aleph-One of them – still adds up to zero.

Limit as what? What’s approaching something as the width is approaching zero?

As the number of possible outcomes of the trial approaches infinity, the probability of a specific outcome approaches zero.

The question seems to be whether this means the probability is zero, or whether it is simply defined as zero based on the definition of a limit. I think the whole notion of an indefinitely small probability is a mathemagical curiosity and conveys no meaningful information, regardless of whether it can be stated as a zero value. 1/∞ must be defined as some real number; otherwise it is not one, and we wouldn’t be able to add or multiply it.

Returning to this, here’s an example of why one might wish to say rolling a die and getting infinitely many 3s has probability exactly zero, and not merely infinitesimally close to zero:

Let X be the probability of getting a 3 on infinitely many rolls in a row. This amounts to first rolling a 3, and then, subsequently, getting a 3 on the infinitely many rolls in a row that follow. Accordingly, we should have X = 1/6 * X.

But from this, it follows by ordinary algebra that X = 0. Not just that X is infinitesimally close to zero, but that X is zero, exactly!
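
For what it's worth, a computer algebra system agrees with that last step (a one-liner with SymPy, nothing deep):

```python
import sympy as sp

X = sp.symbols("X")
print(sp.solve(sp.Eq(X, X / 6), X))  # -> [0]: the only solution of X = (1/6)X is exactly 0
```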

Of course, there are ways one could object to this argument. You could decide, for example, that the probability of getting a 3 on infinitely many rolls in a row isn’t always the same; perhaps you want to say that the probability of getting a 3 on rolls #1, #2, #3, #4, etc. really is only 1/6 as large as the probability of getting a 3 on rolls #2, #3, #4, etc.

Or you could decide the normal rules of algebra don’t work; that something goes awry with subtraction, say, so that X = 1/6 * X doesn’t necessarily imply 5/6 * X = 0 and thus that X = 0.

There are outs of these sorts potentially available to you. But one way or another, you’ll be stuck with a wart: possibilities with probability zero, or failures of symmetry, or loss of subtraction, or some other problem. There’s no way to have everything we want at once. Alas; them’s the breaks.