What is the biggest defined finite number?

If this has been done before I apologize (I think it might have).

I was watching a video on Rayo’s Number which, they suggest, is the biggest defined finite number ever.

That whooshes right over my head.

“Smaller” numbers are Busy Beaver numbers. Also whoosh but I kinda think I get it.

I can kinda grasp TREE functions but I cannot see how Busy Beaver beats them. TREE(1) = 1. TREE(2) = 3. TREE(3) = a number vastly bigger than anything in the observable universe.

Busy Beaver beats that? Rayo’s number I just don’t get at all.

Not sure where Ackermann functions fit in.
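For what it’s worth, the Ackermann function is easy to state but already outgrows every “ordinary” (primitive recursive) function; it sits far below Busy Beaver and TREE on the growth-rate ladder. A minimal sketch (my own, not from the thread), written with an explicit stack because naive recursion blows past Python’s recursion limit almost immediately:

```python
# The two-argument Ackermann function: the classic example of a
# computable function that grows faster than any primitive recursive
# function. The explicit stack replaces the deeply nested recursion.
def ackermann(m, n):
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                  # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)     # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)     # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n

print(ackermann(1, 3))  # 5
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
print(ackermann(4, 1))  # 65533 -- and ackermann(4, 2) already has 19,729 digits
```

Roughly speaking, Ackermann-level growth is where the fast-growing hierarchy starts; TREE and Busy Beaver live unimaginably further up the same ladder.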

Rayo’s number is a way to approach the following paradox, which “proves” that there are no more than 256^100 positive integers in existence.

Let S be the set of positive integers that can be defined in English in 100 ASCII characters or less.

This set contains all sorts of numbers.
Examples include
“One”
“Two”
“The number of atoms in the sun”
“A googol”
“Rayo’s number”
etc.

Since there are no more than 256 ASCII characters, by combinatorics there are only 256^100 unique strings of 100 ASCII characters, and most of those (such as “xpsu8kduiw”) don’t actually define numbers, so the set S can contain no more than 256^100 distinct values and probably far fewer.
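The counting step can be sanity-checked directly. (Note: 256 is generous; standard ASCII actually has only 128 characters, which would make the bound 128^100, but the argument works the same way.)

```python
# Sanity-check the counting step: the number of distinct strings of
# exactly 100 characters drawn from a 256-symbol alphabet.
num_strings = 256 ** 100
print(num_strings == 2 ** 800)   # True: 256 = 2**8, so 256**100 = 2**800
print(len(str(num_strings)))     # 241 -- "only" a 241-digit number

# Allowing strings of *up to* 100 characters barely changes the bound:
up_to_100 = sum(256 ** k for k in range(101))
print(up_to_100 < 2 * 256 ** 100)  # True: the geometric series is dominated by its last term
```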

Let T be the set of positive integers that aren’t in S. Unless T is empty, there must be a smallest member, call it N.

But N is expressible in English with the following sentence

“The smallest positive integer that can’t be defined in English in 100 ASCII characters or less.”

which itself uses fewer than 100 ASCII characters. So N is in S, contradicting the assumption that N is not in S. So no such integer exists, T is empty, and every positive integer must be in S, which contains at most 256^100 distinct values. So there are at most 256^100 positive integers.

Now obviously this is false, since as we know there are infinitely many positive integers. The problem with the proof is that “defined in English” is not well defined. English is a very ambiguous language and is self-referential, so it can’t really be used to unambiguously define sets of numbers.

The symbols of first-order set theory can be used to define an awful lot of mathematics, including things like TREE, Busy Beaver, and Graham’s number. But unlike English it is not at all ambiguous or self-referential, so if a given string of symbols is syntactically correct, what it represents is unambiguously well defined.

Now a googol symbols is an awful lot, and I can write some really big numbers. For example, I could use my first hundred million symbols to write a 1,000-volume series detailing a new and unique theory of huge numbers, resulting in a function F that puts every Busy Beaver to shame, and then use the remaining (almost a googol) symbols writing F(F(F(…F(9)…))).
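The nesting trick at the end can be sketched with a toy stand-in for the hypothetical F (here just f(n) = 2**n, which the real F would dwarf):

```python
# A placeholder stand-in for the hypothetical function F from the post;
# even this modest f shows how quickly nesting compounds.
def f(n):
    return 2 ** n

def nest(f, depth, x):
    """Apply f to its own output `depth` times: f(f(...f(x)...))."""
    for _ in range(depth):
        x = f(x)
    return x

print(nest(f, 1, 9))            # 512
print(len(str(nest(f, 2, 9))))  # 155: f(f(9)) = 2**512 already has 155 digits
# nest(f, 3, 9) = 2**(2**512) could not be written out in full
# inside the observable universe.
```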

But there are only a finite number of symbols, and so there are only a finite number of different finite numbers I can write using a googol of these symbols. So the set of such numbers is well defined. And since it is finite, it must have a largest member. So my number is the next higher integer.

Now unlike the English paradox I started with, this doesn’t lead to a contradiction, because although what I wrote above does unambiguously define a number, it is not (and could not be) written in set theory, because it references the theory itself, which isn’t allowed in set theory. So this new number isn’t a member of the set that we were saying it’s larger than.

Note that none of these numbers is remotely calculable, since to do so you would have to understand all the mathematics that one could write in a googol characters.
Effectively, Rayo’s number is to take the largest number that one could theoretically define if you kept writing perfect mathematics through the age of the universe, and then add one.


The new biggest defined finite number is the old such number + 1.

‘An array of integers, starting with 1 and incrementing by steps of 1, indefinitely’
(82 ASCII characters to write)

The idea is that each given number in that set is defined using however many characters; your example defines a set of numbers (which could just as well be done by writing ‘all integers’). This demonstrates a perhaps counterintuitive property of information (as measured by descriptive length), namely, that sets of things can contain less information than individual members of that set.

Berry’s paradox (a close relative of Richard’s paradox), the fact that ‘the smallest number not definable in less than eleven words’ has just been defined using ten, can, suitably formalized, also be used to establish a Gödelian incompleteness result.


The Poincaré recurrence time (IIRC, the time it would take for all possible configurations of matter in our universe to occur and then repeat) has been calculated to be 10^10^10^10^10^1.1

Correction… that is for a universe different from ours. Our recurrence time was calculated to be 10^10^10^10^2.8

Even better: infinity - 0.00000(infinite number of zeros)1

The problem with “the smallest number not definable in less than _____” is the halting problem (which is, of course, just another manifestation of the incompleteness theorem). A defining statement is, in essence, a computer program. A computer program might halt and return a number, or it might never halt and therefore never return one… but you can’t tell what the highest number your set of programs will return, because you’ll always have some programs that haven’t halted yet, and you can’t tell whether they ever will.

For instance, it’s fairly easy to write a computer program to test the Goldbach conjecture. Start with an even number and test all pairs of numbers that add up to it to see if both are prime. If you find such a pair, add two to your even number and repeat; if you test all pairs that add up to your even number and none consists of two primes, spit out your even number and end the program.
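That program might look like the following sketch. The `limit` parameter is my addition so the demonstration actually terminates; the program as described would, as far as anyone knows, run forever.

```python
# A sketch of the Goldbach-testing program described above.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample(limit):
    """Return the first even number >= 4 up to `limit` that is NOT the
    sum of two primes, or None if no counterexample is found."""
    for even in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(even - p)
                   for p in range(2, even // 2 + 1)):
            return even  # the program halts and returns a number
    return None  # no counterexample below the limit; keep searching...

print(goldbach_counterexample(10_000))  # None -- the conjecture holds this far
```

Whether the unbounded version ever halts is exactly the question nobody can answer, which is the point of the paragraph above.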

Now, you’ve got a set of programs, including that one, and you want to know which one will return the biggest number. Will it be the Goldbach program? You don’t know, because you don’t even know if it’ll ever return a number. Most mathematicians strongly suspect that it won’t… but nobody’s ever been able to prove it.

And even if some clever mathematician someday does manage to find a proof (or disproof) of Goldbach’s conjecture, Turing proved that there will always be some other problem for which you don’t know if it’ll halt or not.