Name for Irrational numbers that follow a pattern

It’s not too hard, actually. Any definition of a number must be some sort of finite string, based on some finite alphabet (ASCII or Unicode or whatever). And you can put the set of finite strings into a 1:1 correspondence with the integers, by making a list of them: First, list all of the 1-character strings in order, then all of the 2-character strings, and so on, and the first string on your list corresponds to 1, the second string corresponds to 2, etc.
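If you want to see that listing as an actual (toy) computation, here is a sketch in C that maps n = 1, 2, 3, … to the n-th string over a two-letter alphabet in that shortest-first order; the alphabet and the function name are made up purely for illustration.

```c
#include <stdio.h>

/* Toy illustration of the listing above: n = 1, 2, 3, ... maps to the
 * n-th string over the alphabet {a, b}, shortest strings first.
 * This is just bijective base-2 numbering with letters for digits. */
static void nth_string(unsigned long n, char *out, size_t outsize) {
    char buf[64];
    size_t len = 0;
    while (n > 0 && len < sizeof(buf)) {
        n -= 1;                    /* shift to a 0-based digit     */
        buf[len++] = "ab"[n % 2];  /* pick this position's letter  */
        n /= 2;
    }
    /* Digits came out least-significant first; reverse into out. */
    size_t m = (len < outsize - 1) ? len : outsize - 1;
    for (size_t i = 0; i < m; i++)
        out[i] = buf[len - 1 - i];
    out[m] = '\0';
}

int main(void) {
    char s[64];
    for (unsigned long n = 1; n <= 10; n++) {
        nth_string(n, s, sizeof s);
        printf("%lu -> %s\n", n, s);  /* a, b, aa, ab, ba, bb, aaa, ... */
    }
    return 0;
}
```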

Of course, not all finite strings define numbers (and in fact, for some, you don’t know whether they define numbers or not), so the number of definable numbers is less than or equal to the number of integers. But all integers are, themselves, definable numbers, so the number of definable numbers is greater than or equal to the number of integers. And if it’s both >= and <=, then the size of the two sets must be equal (for infinite sets, that last step is the Cantor-Schröder-Bernstein theorem, but it does hold).

Look at Chronos’s reply.

So, what you are saying is, if one programs a computer to deal with irrational numbers, it will be able to deal with irrational numbers? :slight_smile:

Certainly there are packages to do arithmetic in things like number fields. Of course, that is not at all the same as saying a computer can do much with uncomputable or random numbers.

Well, ISTM that any formula defining a number can be represented by a finite string of symbols, so there are only countably many of them. That is what @Chronos explained.

I assume that, for instance, programmers do not try to do things like test whether two floating-point numbers are “equal” by comparing their binary representation. Or assume that “equality” is transitive. And so on. I mean, there are obvious pitfalls, so I suppose someone trying to DIY everything who has never taken a computer science course might screw it up.

I am talking about what the computer hardware can do vs. what some creative programming can do. The CPU itself only has instructions for individual bits, integers, floating point numbers, and character strings (REP MOVSB for example is a single x86 instruction that will move a string of bytes from one location to another). Those math packages you mention can handle more complex data types and add a lot of functions that the CPU does not natively support, but they do it by creatively using integers, floats, and strings to do so.

For example, a computer has no concept of what a complex number is. But you could easily create a structure using two double-precision floating point values, one for the “real” part of the complex number and one for the “imaginary” part. But then if you want to add two of those complex numbers together, you have to write the addition routine yourself too. You end up with a program that can handle complex numbers, but the computer hardware is doing it all with simple floating point values.
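A minimal sketch of that in C (the struct and function names here are invented for illustration; the hardware itself only ever sees two plain doubles):

```c
#include <stdio.h>

/* Illustrative only: the CPU knows nothing about this type; it just
 * sees two doubles stored next to each other. */
typedef struct {
    double re;   /* "real" part      */
    double im;   /* "imaginary" part */
} Complex;

/* The programmer has to supply even basic operations like addition. */
static Complex complex_add(Complex a, Complex b) {
    Complex sum;
    sum.re = a.re + b.re;   /* plain floating-point adds underneath */
    sum.im = a.im + b.im;
    return sum;
}

int main(void) {
    Complex a = {1.5, 2.0};
    Complex b = {0.5, -1.0};
    Complex c = complex_add(a, b);
    printf("%g + %gi\n", c.re, c.im);   /* prints 2 + 1i */
    return 0;
}
```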

Sometimes they do, which often leads to problems. Doing a proper check, though, requires you to write a function for it (or use a pre-existing one from a math package or some other source). Typically the function will compare the two values with some tolerance for how close they have to be to each other to count as “equal” (if (abs(x-y)<epsilon) return TRUE else return FALSE is a typical implementation).

And yes, it is something that a newbie programmer can easily screw up. (if(x==y) return TRUE else return FALSE) is a valid program statement and the compiler will happily accept it.
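Spelled out as a compilable function, it looks something like this; the function name and the fixed tolerance are placeholders, and as later replies note, choosing that tolerance is the genuinely hard part.

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Sketch of the typical "close enough" comparison described above.
 * The tolerance passed in is a placeholder; a real application would
 * choose it based on the magnitudes and the computation involved. */
static bool nearly_equal(double x, double y, double epsilon) {
    return fabs(x - y) < epsilon;
}

int main(void) {
    double x = 0.1 + 0.2;   /* not exactly 0.3 in binary floating point */
    printf("x == 0.3     : %d\n", x == 0.3);                   /* 0 */
    printf("nearly_equal : %d\n", nearly_equal(x, 0.3, 1e-9)); /* 1 */
    return 0;
}
```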

Some CPUs don’t even have floating point instructions, and such operations must be performed by software. That is much slower than having floating point support in the silicon, but at least it’s easier to fix if there’s a bug. And I would say that most CPU designs do not have character string operations. The x86 architecture is unusual in having built-in string instructions.

An infinitely repeating pattern is rational. And rationality does not depend on the base. What you mean is that the binary expansion of 1/10 does not terminate.
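(For the record, that expansion is \frac{1}{10} = 0.0\overline{0011}_2: the block 0011 repeats forever, so it never terminates, but it is perfectly periodic and hence rational.)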

It is perfectly possible to program a computer to do exact rational arithmetic. What you have to do is store a rational number as a ratio of two integers. Of course, you might need a lot of storage. \frac1{1000}+\frac1{1001}=\frac{2001}{1001000} for example.
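A bare-bones sketch of that in C (the type and function names are invented; a real library such as GMP’s mpq_t does this properly with arbitrary-precision integers, since the numerators and denominators can grow quickly):

```c
#include <stdio.h>

/* Toy exact-rational type: a number stored as numerator/denominator.
 * A real implementation would use big integers, because these grow. */
typedef struct {
    long long num;
    long long den;
} Rational;

static long long gcd_ll(long long a, long long b) {
    while (b != 0) {
        long long t = a % b;
        a = b;
        b = t;
    }
    return a < 0 ? -a : a;
}

/* a/b + c/d = (ad + cb) / bd, then reduce to lowest terms. */
static Rational rat_add(Rational a, Rational b) {
    Rational r = { a.num * b.den + b.num * a.den, a.den * b.den };
    long long g = gcd_ll(r.num, r.den);
    r.num /= g;
    r.den /= g;
    return r;
}

int main(void) {
    Rational a = {1, 1000}, b = {1, 1001};
    Rational c = rat_add(a, b);
    printf("%lld/%lld\n", c.num, c.den);   /* prints 2001/1001000 */
    return 0;
}
```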

Asking whether two floating point numbers are equal is almost always a mistake: You probably want to instead ask if one is less than the other. But if you do actually want to ask about equality, usually what you do is take the absolute value of the difference of the two, and compare that to some suitably-small number like 10^{-6} or something.

It is often more nuanced. It is context dependent. It is more a value that depends on how many bits of the mantissa you consider significant, and that depends on the mathematics of what you are doing and its numerical stability. These questions are where the rubber hits the road for serious numerical analysis. For example, you might want to terminate an algorithm when it has converged on a solution. Choosing what metric to use to determine convergence is an art.

There is also the concept of a machine epsilon, \epsilon_{mach}, which can be a convenient start. Conventionally it is the delta to the next largest floating point number from 1.0, which obviously depends on the representation used. Effectively it is the least significant bit of the mantissa. The idea can be applied to the magnitude of the things being compared, and you may then be safe. It is not as rigorous as actually doing the proper analysis, but it is a reasonably sensible lower bound.
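For what it’s worth, C exposes that value as DBL_EPSILON in <float.h>. Here is a minimal sketch of the “scale it by the magnitudes” idea; the factor of 4 is an arbitrary bit of slop, not a substitute for doing the real analysis.

```c
#include <float.h>
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Relative comparison scaled by the magnitude of the operands, with
 * machine epsilon as the baseline. The factor of 4 is arbitrary slop;
 * a proper error analysis would justify (or replace) it. */
static bool rel_equal(double x, double y) {
    double scale = fmax(fabs(x), fabs(y));
    return fabs(x - y) <= 4.0 * DBL_EPSILON * scale;
}

int main(void) {
    printf("DBL_EPSILON = %g\n", DBL_EPSILON);                   /* ~2.22e-16 */
    printf("0.1 + 0.2 vs 0.3: %d\n", rel_equal(0.1 + 0.2, 0.3)); /* 1 */
    return 0;
}
```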

If you code in JavaScript and its brethren of cursed languages, there is no such type as an integer. That makes for a whole new world of numerical representation problems in the opposite direction.
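The root of it is that an IEEE double carries only a 53-bit significand, so integers stored that way stop being exact past 2^{53}. A quick C demonstration of the same representation that JavaScript’s Number uses:

```c
#include <stdio.h>

int main(void) {
    /* A double has a 53-bit significand, so above 2^53 consecutive
     * integers can no longer all be represented exactly. */
    double big = 9007199254740992.0;       /* 2^53 */
    printf("%.0f\n", big);                 /* 9007199254740992 */
    printf("%.0f\n", big + 1.0);           /* still 9007199254740992 */
    printf("%d\n", big + 1.0 == big);      /* 1: the +1 was lost */
    return 0;
}
```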

I’m not a computer science person, but I took a course in information theory during my long ago degree, and there was something about entropy representing the amount of information in data. The main thing I remember was that paradoxically, the more random a number, the more information it contained, because the next digit was less predictable from the previous ones.

Don’t know if this is the same thing as Kolmogorov complexity.

Right, to be clear, you do compare the difference to a suitably-small number, but what’s “suitably small” depends on the context, and it can sometimes be nontrivial to figure out what that “suitably small” value is.

Well, there are going to be some fine nuances in the definitions, but it’s at least very closely related.

π contains very little information. A smallish program can spit out as many digits as you have the computing power and time to waste. It is an incredibly non-random number.
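To make that concrete, here is a de-obfuscated C version of the classic integer-only spigot approach (the Rabinowitz-Wagon algorithm, in the widely circulated 800-digit form). The constants are tied together, roughly 14 array terms per 4 digits printed, so treat it as an illustration rather than a general-purpose tool.

```c
#include <stdio.h>

#define TERMS 2800   /* ~3.5 series terms per decimal digit; prints 800 digits */

int main(void) {
    static int r[TERMS + 1];   /* remainders, zero-initialized */
    int i, k, b, d, carry = 0;

    for (i = 0; i < TERMS; i++)
        r[i] = 2000;           /* initial value of every term of the spigot */

    for (k = TERMS; k > 0; k -= 14) {   /* one pass per group of 4 digits */
        d = 0;
        i = k;
        for (;;) {
            d += r[i] * 10000;          /* multiply remainder by the output base */
            b = 2 * i - 1;
            r[i] = d % b;               /* remainder stays in place       */
            d /= b;
            i--;
            if (i == 0)
                break;
            d *= i;                     /* carry the quotient down a term */
        }
        printf("%.4d", carry + d / 10000);   /* emit 4 digits; output begins 3141 5926 ... */
        carry = d % 10000;
    }
    putchar('\n');
    return 0;
}
```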

Suppose you have a large document. Say the Encyclopedia Britannica. A recent version is 4.2 GB. But it compresses to 132 MB. That means you could find roughly 30 other similar corpuses that compress equally well and concatenate their compressed versions to create a larger set of knowledge about the same size as the uncompressed EB.

Note that compressed files start to look more and more randomish the better you compress them. So that compressed super-corpus will measure as more random in terms of Kolmogorov complexity.

Chaitin’s constant is an example of a number that is random per Kolmogorov complexity but has an amazing amount of information in it. But (a) it is not remotely computable, (b) even if you had an oracle give you a large chunk of it, it would still take too much time to use it to answer any non-trivial question, and (c) Kolmogorov complexity, as noted above, is based on “merely” the first step of a ladder of increasing concepts of information-theoretic complexity.

A random source has information entropy; for example, if you flip a fair coin to generate bits then each sample gives you \log_2(2)=1 bit of information.

We should also note that if you randomly produce, say, 8-bit strings like that, then all of them are equally probable, 00000000 just as much as 11111001; each one tells you 8 bits of information.
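For anyone who wants to see the formula in action, here is a throwaway C sketch of Shannon entropy, H = -\sum_i p_i \log_2 p_i, showing that a fair coin yields 1 bit per flip while a biased coin yields less:

```c
#include <math.h>
#include <stdio.h>

/* Shannon entropy, in bits, of a discrete distribution p[0..n-1]. */
static double entropy_bits(const double *p, int n) {
    double h = 0.0;
    for (int i = 0; i < n; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void) {
    double fair[2]   = {0.5, 0.5};
    double biased[2] = {0.9, 0.1};
    printf("fair coin  : %.3f bits per flip\n", entropy_bits(fair, 2));   /* 1.000 */
    printf("biased coin: %.3f bits per flip\n", entropy_bits(biased, 2)); /* 0.469 */
    return 0;
}
```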

What I was thinking of in my question was not even Kolmogorov complexity or information content per se, just some minor note where a couple of mathematicians wanted to consider whether a given string “looks random” versus something like 10101010. To that end (obviously I do not remember the details), perhaps they defined some type of group action on the set of words, examination of which would reveal which ones were more “symmetrical”.