Name for Irrational numbers that follow a pattern

Is there a name for irrational numbers that follow a pattern (or maybe better: a rule), like 0.10110011100011110000… or 1.357911131517…?

There are a lot of subsets of the reals out there. Depending on what you call a “pattern” you’ll get into different subsets.

(The first example is kind of a heavily “watered down” Liouville number. Those were an early proof of the existence of specific transcendental numbers.)

As a Computer Scientist, the uppermost “numbers with patterns” category is the computable numbers. I.e., numbers whose digits can be spat out by a Turing machine that is allowed to run forever, since the number may have infinitely many digits.

You can define subsets of those based on ease of computation or other factors. One famous measure is Kolmogorov complexity. I.e., how small a Turing machine it takes to write out the number. So a simple pattern like 1.111111… takes a tiny machine. There’s some non-obvious stuff about an additive constant that makes the measure invariant (up to that constant) over which programming system you use. Another gotcha for some people is that you don’t really want to deal with a specific number but with sets of numbers.
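To make the “tiny machine” idea concrete, here’s a minimal sketch in Python standing in for a Turing machine (actual Kolmogorov complexity is defined relative to a fixed universal machine and isn’t computable): a few lines of code pin down every digit of the OP’s first number, even though those digits never repeat.

```python
from itertools import count, islice

def digits():
    """Yield the digits after the point of 0.1011001110001111...:
    for each n = 1, 2, 3, ..., emit n ones followed by n zeros."""
    for n in count(1):
        yield from "1" * n
        yield from "0" * n

# The whole (infinite) number is described by the short program above.
print("0." + "".join(islice(digits(), 40)))  # first 40 digits
```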

So, Step 1: Define what you mean by “pattern”. I await your PhD thesis.

Yeah, you can argue that pi, for instance, has a pattern. It’s just not a pattern that’s particularly obvious to humans.

Computer science people: I recall something about a way to quantify how patterned a finite, perhaps short, string of bits is. So 11111111 is pretty simple while 10110001 is less so. Do you remember how they defined it?

I think you’re talking about Kolmogorov complexity, mentioned above by ftg.

Yeah, Kolmogorov complexity. But you need a set of numbers. With just one number you run into a version of the Berry paradox.

A less formal way of thinking of this is to pick your favorite compression program. Finite numbers that compress easily have a “pattern”, ones that don’t compress well (if at all) have no “pattern” and are in some sense random.
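To make that concrete, here’s a quick sketch using Python’s zlib (any general-purpose compressor would do): the repetitive string squeezes down to almost nothing, while random bytes don’t compress at all.

```python
import os
import zlib

patterned  = b"10" * 5000          # 10101010...: 10,000 bytes with an obvious pattern
random_ish = os.urandom(10000)     # 10,000 bytes with no pattern to find

print(len(zlib.compress(patterned)))    # a few dozen bytes
print(len(zlib.compress(random_ish)))   # a little over 10,000 bytes -- no savings
```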

However, compression programs don’t correspond well to what is feasible to compress in a larger sense. So pick a computational model of sufficient power and go with it. E.g., alternating, two-way, multi-head finite state automatons correspond to polynomial time (P). The encoding of the FSA that recognizes exactly one string is the representation of that string. Is it significantly smaller than the string? Aha, it has a “pattern”.

Just a bit of a problem. Determining if a string has a small FSA is definitely not poly time. Oops. (This is a standard property of any useful computational model. Interesting questions about the model can’t be answered within the model.)

It’s a double bind. You can make the test powerful enough to catch them but computationally infeasible, or you can make it feasible but so weak it misses some reasonably computable “patterns”. Pick one.

By “pattern” or “rule” I mean something like this:
For an irrational like Pi, I can’t tell you what the 18th digit is without calculating it (or memorizing the result of someone else’s calculation I guess, but let’s ignore that). For an irrational like 1.3579111315171921… I can tell you the 18th digit is 2, because I can see how the number is panning out. (If ‘seeing’ is objectionable, I can just define that irrational as ‘1.alltheoddsafteroneinascendingorder’.) It doesn’t seem like I’m calculating anything in this case, though maybe that’s wrong and I am by some definition of ‘calculate’.

What’s the difference between “calculating” and “seeing”?

You’re assuming that the “pattern” that you see is the same as what someone else has in mind. For example, what’s the next number in the sequence

0, 1, 2, … ?

It might be 3, or it might be

2601218943565795100204903227081043611191521875016945785727541837850835631156947382240678577958130457082619920575892247259536641565162052015873791984587740832529105244690388811884123764341191951045505346658616243271940197113909845536727278537099345629855586719369774070003700430783758997420676784016967207846280629229032107161669867260548988445514257193985499448939594496064045132362140265986193073249369770477606067680670176491669403034819961881455625195592566918830825514942947596537274845624628824234526597789737740896466553992435928786212515967483220976029505696699927284670563747137533019248313587076125412683415860129447566011455420749589952563543068288634631084965650682771552996256790845235702552186222358130016700834523443236821935793184701956510729781804354173890560727428048583995919729021726612291298420516067579036232337699453964191475175567557695392233803056825308599977441675784352815913461340394604901269542028838347101363733824484506660093348484440711931292537694657354337375724772230181534032647177531984537341478674327048457983786618703257405938924215709695994630557521063203263493209220738320923356309923267504401701760572026010829288042335606643089888710297380797578013056049576342838683057190662205291174822510536697756603029574043387983471518552602805333866357139101046336419769097397432285994219837046979109956303389604675889865795711176566670039156748153115943980043625399399731203066490601325311304719028898491856203766669164468791125249193754425845895000311561682974304641142538074897281723375955380661719801404677935614793635266265683339509760000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000,

(= 720!, since the next term is 3!!! = ((3!)!)! = (6!)! = 720!), where the nth term of the sequence is n followed by n factorial signs.
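For the skeptical, the arithmetic checks out; a throwaway check using Python’s arbitrary-precision integers:

```python
from math import factorial

# The term after 0, 1, 2 is 3 followed by three factorial signs:
# 3!!! = ((3!)!)! = (6!)! = 720!
assert factorial(3) == 6
assert factorial(6) == 720
term3 = factorial(factorial(factorial(3)))
assert term3 == factorial(720)
print(len(str(term3)))   # how many decimal digits 720! has
```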

Yeah, I dunno. It seems kind of disingenuous if someone tells me they don’t understand what the nth digit of the Champernowne constant is. Like it’s a mystery…could be anything!

Yes, you’re certainly calculating the 18th digit of either number. It’s just a more complicated calculation to do it for pi. If you don’t think calculation is involved, imagine finding the millionth digit of 1.3579111315171921… Can you just “see” it, or do you have to work it out?
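For what it’s worth, here’s what that “working it out” might look like, a small sketch in Python for the OP’s number (counting the leading 1 as digit 1, as the earlier post did):

```python
from itertools import count

def nth_digit(n):
    """Digit n (1-indexed, counting the leading '1' as digit 1) of
    1.3579111315... -- a 1, then the odd numbers after 1, concatenated."""
    if n == 1:
        return 1
    length = 1                       # digits accounted for so far (the leading '1')
    for odd in count(3, 2):          # 3, 5, 7, 9, 11, 13, ...
        s = str(odd)
        if length + len(s) >= n:
            return int(s[n - length - 1])
        length += len(s)

print(nth_digit(18))          # -> 2, matching the earlier post
print(nth_digit(1_000_000))   # the millionth digit: still a calculation, just a longer one
```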

Another vote that your “seeing” vs. “calculating” concept is very poorly defined.

You need to remove yourself from the definition. Just stick to calculation. What sort of calculation are you talking about? And be aware that it is a lot harder than it seems to limit things to “practical” computations.

Oh, and:

There’s actually one category larger than this, the definable numbers. Any computable number can be defined by its computation, but some numbers can be defined but not computed (most notably the halting probability of a universal Turing machine, Chaitin’s Ω).

What’s a bit trippy, though, is that even the set of all definable numbers is still only what’s called a countable infinity. That is to say, there’s a one-to-one correspondence between the definable numbers and the integers. The set of all real numbers is far, far larger. Or put another way, the vast majority of all real numbers can be neither computed nor defined.

I wasn’t going to get into Turing jumps and such given the OP. Very theoretical, but it provided the framework for the poly time hierarchy, which has more practical interest, including broader concepts of compression, which is relevant to computationally recognizing “patterns”.

Defining “patterns” non-computationally within Mathematics is not something I would know much about.

Irrational numbers with no infinite pattern whatsoever are called Normal numbers. So the new name you need to coin is “Abnormal irrational number.”

That might or might not fit what the OP is thinking of—which they haven’t rigorously defined and possibly can’t. Would something like pi but with every 2 changed to a 5 be an “abnormal irrational number” without having what the OP considers to be a “pattern”?

Could you clarify this “no infinite pattern” term, please.

The article you link to has examples of normal numbers that definitely have a pattern that goes on forever and doesn’t repeat. But that doesn’t seem to correspond to the term you are using.

When you get into the realm of computers, you have to be VERY careful with numbers.

If you are doing integer math and you take the number 5, divide it by 20, and multiply it by 40, you might be expecting the number 10 as a result, but you’ll end up with a result of 0 (because 5 divided by 20 gets truncated to 0 as an integer, then 0 x 40 = 0).
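A two-line illustration (Python’s // is integer division, assuming that’s the kind of integer math meant here):

```python
print(5 // 20 * 40)   # 5 // 20 truncates to 0, and 0 * 40 is 0
print(5 * 40 // 20)   # reorder: 5 * 40 is 200, and 200 // 20 is 10
```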

Computers can’t represent irrational numbers exactly. So the only way to make computers deal with irrational numbers is to have special programming (symbolic math, arbitrary-precision approximations, and so on) to handle them.

Instead, what computers usually deal with are either integers or floating point numbers. But this can also lead to issues. Back in the Gulf War in the early 1990s, they found that the Patriot missile systems became less and less accurate the longer they were left “on”. This bug ended up being the result of an irrational number. And, curiously, it’s a number that is irrational in binary but NOT irrational in decimal.

The Patriot system used 1/10th of a second as its time base, which is 0.1 in decimal and is perfectly rational. But in binary, 1/10 is 0.000110011001100110011… It’s an infinitely repeating pattern. If you store that as a floating point value you get pretty close (the Patriot stored the first 24 bits), but it’s not exact, so the longer you count with that value, the further your floating point count drifts from the actual mathematical count. That drift is what caused the Patriot’s inaccuracy.

If you left the Patriot running for about 100 hours, the timing would be off by about 1/3 of a second. That doesn’t sound like much, but when the system predicts where an incoming missile will be in the next scan, it’s off by enough that the missile isn’t where the Patriot expects it to be. The Patriot loses its lock, and the missile is not intercepted.
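A rough sketch of that drift in Python, under the common textbook analysis (it assumes the stored constant was 1/10 chopped after 23 fractional bits; accounts of the exact register format vary):

```python
from fractions import Fraction

tenth  = Fraction(1, 10)
stored = Fraction(int(tenth * 2**23), 2**23)   # binary expansion of 1/10, chopped to 23 bits
per_tick = tenth - stored                      # error introduced on every 0.1 s tick

ticks = 100 * 60 * 60 * 10                     # 100 hours at 10 ticks per second
print(float(per_tick * ticks))                 # roughly 0.34 seconds of accumulated drift
```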

Nitpick (it’s the Dope, after all): You don’t mean “irrational” here, you mean something like “not representable exactly in limited precision floating point.” Whether a number is rational or not does not depend on the base in which you write it.

How is that proven?