Yet another question about infinity and Aleph numbers (Will they ever end?)

Aha, I see the problem now. So that series converges to 1?

I see now: I’d failed to include all the numbers that aren’t prime. So, as Achernar said, does the series converge to 1?

We can rewrite this:
   [sum over sets of n primes p_1, …, p_n] (-1)^(n-1) / (p_1 × … × p_n)
     = 1/2 + 1/3 + 1/5 + … - 1/(2×3) - 1/(2×5) - … - 1/(3×5) - … + 1/(2×3×5) + …
     = 1 - [product over primes p] (1 - 1/p) .
Now let me perform some manipulations of questionable mathematical rigor without justification: Consider the inverse of this infinite product:
   [product over p] (1 - 1/p)^(-1)
     = [product over p] (1 + 1/p + 1/p^2 + …)
a product of geometric series. Now by an extremely cute trick we can rewrite this as
     = [sum over natural numbers n > 0] 1/n
(think of the prime factorization of n and what that means about where it must fit in the expanded infinite product above). This diverges logarithmically, of course, so its inverse should be zero and the original expression should be 1.
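As a sanity check (a numeric sketch of mine, not a proof, with an arbitrary cutoff of 100,000), you can multiply out (1 - 1/p) over the primes up to some bound and watch the product crawl toward zero, and hence the original expression toward 1:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

prod = 1.0
for p in primes_up_to(100_000):
    prod *= 1 - 1 / p

print(prod)      # small (~0.05 here), and it keeps shrinking as the bound grows
print(1 - prod)  # correspondingly close to 1
```

Consistent with the logarithmic divergence of the harmonic series, the product shrinks only logarithmically in the cutoff, so the approach to 1 is very slow.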

This is not rigorous because (among other reasons) one cannot blithely reorder the terms of a series that isn’t absolutely convergent; it should only be viewed as a hand-waving intuitive explanation until the missing rigor is supplied. But the same cute trick (called the Euler product formula) can be used more generally, in cases where the relevant quantities do converge:
   [sum over n > 0] n^(-s) = [product over primes p] (1 - p^(-s))^(-1)
a useful expression for the Riemann zeta function.
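For anyone who wants to see the convergent case in action, here’s a quick numeric sketch of mine (the cutoff of 1000 is arbitrary) comparing the Euler product at s = 2 against the known value zeta(2) = pi^2/6:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

s = 2
euler = 1.0
for p in primes_up_to(1000):
    euler *= 1 / (1 - p ** -s)

print(euler)             # close to 1.6449...
print(math.pi ** 2 / 6)  # zeta(2) = pi^2/6 ≈ 1.6449
```

For s > 1 the truncated product converges quickly, which is exactly the "relevant quantities do converge" case mentioned above.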

Gaah. Messed up the attribution; that quote above was by Achernar, not me.

As long as I’m here again, I want to re-recommend (for anyone who hasn’t seen it before) playing with the Euler product formula until you understand it, simply for the mathematical beauty of it.

Yes, I’ve seen that notation before. Also, as ultrafilter mentioned, the cardinality of the continuum (2^aleph-naught) is commonly called c.

Yes, that’s possible (Lebesgue measure, for example). However, that doesn’t involve infinitesimals. For example, using your example of sequences of binary digits, the probability of a particular sequence in this model is exactly zero (which, as Hari Seldon mentioned, is not the same as impossible). Also, if we take B to be a countable collection of such sequences, the probability of our selection being a member of B is also zero (again, not infinitesimal), since probability functions must be countably additive. We don’t get a nonzero probability until we consider a collection C of uncountably many sequences–then the probability of our selection being a member of C may be anything from 0 to 1, depending on the nature of C. This isn’t a problem, since we don’t require probability functions to be “uncountably additive” in any sense of the word.

The problem comes when we have a countably infinite sample space. If we want a uniform probability distribution on this set, then every singleton (sub)set must have the same probability. If we decide that probability must be zero, then, by countable additivity, the probability of the entire sample space is also zero–a contradiction, since that probability must be one. On the other hand, if we decide that probability must be greater than zero, then we can use countable additivity again to show that the probability of the entire sample space is infinite (the sum diverges)–again contradicting the fact that that probability must be one.
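The second horn of that dilemma can be made concrete with a toy computation (mine; eps is an arbitrary stand-in for the supposed common singleton probability):

```python
# Suppose every singleton in a countably infinite sample space gets the
# same probability eps > 0.  Countable additivity then forces the total
# probability past 1 after only finitely many terms -- a contradiction,
# since the whole space must have probability exactly 1.
eps = 2.0 ** -20            # any positive value works; a power of two keeps the float math exact
n_terms = int(1 / eps) + 1  # finitely many singletons already suffice
partial_sum = n_terms * eps
print(partial_sum > 1)      # True, with infinitely many terms still to add
```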

So either way you approach it, you’re screwed. The only possible way around this is to use nonstandard probability and involve infinitesimals.

The density of primes declines, albeit slowly, to 0. A random number of size N has about 1 chance in log_e(N) of being prime. It follows that the total density of primes is 0, in the same sense that the total density of even numbers is 1/2. However, there are still enough primes that the sum of all their reciprocals diverges. That is, the sum of 1/p over all primes p is infinite. But it diverges very, very slowly.
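Both halves of that claim are easy to see numerically. Here’s a sketch of mine comparing partial sums of 1/p against ln ln N (by Mertens’ theorem they track each other up to a constant), showing divergence at a glacial pace:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

for bound in (10**3, 10**4, 10**5):
    total = sum(1 / p for p in primes_up_to(bound))
    # Each factor-of-10 jump in the bound adds only ~0.25 to the sum,
    # matching the ln(ln(N)) growth rate.
    print(bound, round(total, 4), round(math.log(math.log(bound)), 4))
```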

To answer the last part of the OP: the alephs go on without limit, at least in most models of set theory. There are models in which they don’t, but this gets rather esoteric. There are mathematicians whose specialty is the study of large cardinals (that is, large infinities).

FWIW, it’s possible to define subtraction and division on transfinite cardinals in such a way that you can use the same definition for finite cardinals.

a - b is defined to be the least cardinal c such that b + c = a. Division is defined in a similar way.

Yes… good thought.

OTOH, we can then find P(first number in sequence is x), which can’t be the same for every x, for the same reasons as above–so presumably that’s unsatisfactory for the OP?

Also, I’ve got a headache trying to prove that if we did pick such a distribution, the probability of a repeat is 1. :slight_smile:

Hang on; ‘uniform’ in this context means ‘having the same probability at each point’, right?

In which case, I think a uniform distribution on sequences is much the same as assigning a distribution to each choice, such that infinitely many choices have P(n)<1 for all n.

(Since: the ‘projection to n’th choice’ function on sequences gives a random variable over a countable set, so defines a prob. distr. Also, unless all but finitely many choices are fixed, the probability of any particular sequence is zero.)

We already know that if we use the same distribution each time (say, P(n)=1/2^(n+1)), then the probability of a repeat is 1.
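That fact is easy to watch happen. Here’s a little simulation of mine using that same P(n) = 1/2^(n+1) distribution (sampled by counting coin flips); a repeat shows up almost immediately in every run:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def draw():
    """Sample n >= 0 with P(n) = 1/2^(n+1): count heads before the first tail."""
    n = 0
    while random.random() < 0.5:
        n += 1
    return n

def draws_until_repeat():
    seen = set()
    for i in range(1, 10**6):  # practical cap; in theory a repeat has probability 1
        x = draw()
        if x in seen:
            return i
        seen.add(x)
    return None

trials = [draws_until_repeat() for _ in range(1000)]
print(all(t is not None for t in trials))  # every run hit a repeat
print(sum(trials) / len(trials))           # on average within a handful of draws
```

Intuitively, two independent draws already collide with probability sum 1/4^(n+1) = 1/3, so repeats pile up fast.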

If we use a different distribution for each draw, we can find some such that the probability of a repeat is zero. E.g. if p(choosing n on the n’th draw) = 1. Or, if we require that the corresponding distribution on sequences be uniform, then p(n’th choice is 2n) = p(n’th choice is 2n+1) = 0.5.