Rayo’s number, the biggest one, is the easiest to see is well-defined: using a fixed number of symbols (10[sup]100[/sup], in the language of first-order set theory), one can describe only finitely many numbers, so there is an upper bound.
Loader’s number, IIRC, relies on the fact that every term in a particular typed lambda calculus (the Calculus of Constructions) is strongly normalizing; in other words, we essentially have a programming language in which every program terminates, so Loader’s program can simply loop over all programs up to a certain length and collect the results.
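To illustrate (this is a toy stand-in, not Loader’s actual construction): in any language where every program is guaranteed to terminate, you can enumerate all programs up to a given size, run each one, and keep the largest output. A sketch in Python, using a trivially terminating "language" of arithmetic expressions over the constant 9:

```python
# Toy illustration of the "enumerate every program in a total language"
# idea behind Loader's number. This is NOT the Calculus of Constructions,
# just a language where every "program" is an expression tree built from
# the constant 9, addition, and multiplication -- so evaluation always halts.

def largest_value(size):
    """Largest value of any expression with at most `size` nodes.

    A leaf (the constant 9) costs 1 node; each binary operator costs
    1 node plus the cost of its two subtrees.
    """
    best = {1: 9}
    for s in range(2, size + 1):
        candidates = []
        for left in range(1, s - 1):
            right = s - 1 - left
            a, b = best[left], best[right]
            candidates += [a + b, a * b]
        best[s] = max(candidates) if candidates else 0
    return max(v for k, v in best.items() if k <= size)

print(largest_value(3))  # 9 * 9 = 81
print(largest_value(5))  # 9 * 9 * 9 = 729
```

Loader’s actual program does the analogue of this for the Calculus of Constructions, where "running" a program means normalizing a term; strong normalization is what guarantees the outer loop finishes.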
TREE(3) follows from Kruskal’s tree theorem: the set of all finite trees is well-quasi-ordered by the rule that T ≤ T’ if and only if T can be homeomorphically embedded into T’. (Labelled) trees being well-quasi-ordered means that for any infinite sequence T[sub]1[/sub], T[sub]2[/sub], … there is some i < j for which T[sub]i[/sub] ≤ T[sub]j[/sub]. This implies that, given, say, 3 labels, there is an evidently ludicrously long, but still finite, maximal length of a sequence of rooted, labelled trees where T[sub]i[/sub] has at most i vertices and no tree embeds into any later tree in the sequence. There is a short proof of Kruskal’s tree theorem by Crispin Nash-Williams which starts by choosing a bad sequence of trees with T[sub]1[/sub], then T[sub]2[/sub], T[sub]3[/sub], … each having the minimal number of vertices, then showing this leads to a contradiction.
I don’t see how; it’s not like you are exponentiating anything. In theory you could run a program to compute the number, but you cannot actually successfully run such a program on any actual computer.
Is it even possible to describe TREE(3) with any known kind of mathematical notation? You can do this with Graham’s number using Conway chained arrow notation:
g(1) = 3->3->4, g(N) = 3->3->g(N-1), G = g(64)
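You can actually evaluate the first rungs of that ladder. A sketch in Python (`chain3` is my own name for the three-element chain a->b->n, which equals a↑ⁿb in Knuth’s up-arrow notation):

```python
def up(a, n, b):
    """Knuth's up-arrow a (up^n) b. Only feasible for tiny inputs."""
    if n == 1:
        return a ** b          # one arrow is plain exponentiation
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

def chain3(a, b, n):
    """Three-element Conway chain a -> b -> n, which equals a (up^n) b."""
    return up(a, n, b)

print(chain3(3, 3, 1))  # 3^3 = 27
print(chain3(3, 3, 2))  # 3^^3 = 3^27 = 7625597484987
# g(1) = 3->3->4 = 3^^^^3 is already hopelessly beyond evaluation.
```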
But how can we even know how big TREE(3) is in comparison to other large numbers?
The “Googology Wiki” states that Adam P. Goucher has demonstrated that TREE(n) grows at least as fast as f[sub]ϑ(Ω[sup]ω[/sup]ω)[/sub] in the fast-growing hierarchy (he alludes to constructions by Friedman of long lists of trees). I haven’t gone through any of this carefully, but if we accept the general principle, then with a certain amount of effort it is possible to put at least rough bounds on some of these functions and compare their growth rates. Graham’s number is less than f[sub]ω+1[/sub](64), for example.
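For the finite levels, the fast-growing hierarchy is simple to state and even to run on tiny inputs: f[sub]0[/sub](n) = n+1, and f[sub]k+1[/sub](n) applies f[sub]k[/sub] to n, n times. A sketch:

```python
def f(k, n):
    """Fast-growing hierarchy at finite level k:
    f_0(n) = n + 1, and f_{k+1}(n) = f_k applied n times to n."""
    if k == 0:
        return n + 1
    for _ in range(n):   # iterate the previous level n times
        n = f(k - 1, n)
    return n

print(f(1, 5))  # f_1(n) = 2n        -> 10
print(f(2, 3))  # f_2(n) = n * 2^n   -> 24
print(f(3, 2))  # already 2048; f_3 is tower-like growth
```

The ordinal index ϑ(Ω[sup]ω[/sup]ω) sits unimaginably far beyond every finite level (and beyond f[sub]ω[/sub], f[sub]ω+1[/sub], …), but it is this same recursion scheme, extended transfinitely.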
I always considered that the main result of many papers I wrote wasn’t the new facts in the paper, but the new methods developed to establish those facts.
A lot of “not practical” mathematical techniques have been used to establish quite practical results. Take Cantor’s diagonalization: what’s the practical value of knowing that there are more reals than integers?
But diagonalization has since been used to establish the complexity of classes of computation. E.g., via the time hierarchy theorem we know that there are problems that genuinely require quadratic time. If your problem is complete for quadratic time, that’s it: you know it can’t be done faster.
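The trick itself fits in a few lines. A sketch of Cantor’s argument, treating a "real" as an infinite binary sequence (a function from index to bit): given any enumeration of such sequences, flip the diagonal to get a sequence the enumeration missed.

```python
def diagonal(enumeration):
    """Given enumeration(i) -> the i-th infinite binary sequence
    (itself a function j -> bit), return a new sequence that differs
    from sequence i at position i, so it appears nowhere in the list."""
    return lambda j: 1 - enumeration(j)(j)

# A toy "enumeration": sequence i is the constant sequence i % 2.
listed = lambda i: (lambda j: i % 2)
d = diagonal(listed)

# d disagrees with every listed sequence on the diagonal:
assert all(d(i) != listed(i)(i) for i in range(100))
```

The time hierarchy theorem runs the same flip against Turing machines: build a machine that simulates machine i on input i within a time budget and then does the opposite.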
Similarly, a lot of other abstruse methods have been used in more mundane scenarios. E.g., Cohen’s forcing technique.
People are fascinated with the TREE function since it’s amazing how complex you can get with so little to start with. I think people will come up with tiny variations of TREE that grow at more reasonable rates, giving families of functions that represent useful complexity classes and can therefore lead to new insights into those classes and how they relate.
Is TREE a computable function? The information I’ve found seems to indicate it is. If so, that means the Busy Beaver function must exceed it starting at some value N. Is it even possible to guess what that value of N might be?
I answered my own question: TREE(n) is definitely a computable function, since you could have a simple algorithm that generates candidate sequences of trees and checks whether any earlier tree is embeddable into a later one. You couldn’t really run it to completion, but you could on an idealized computer with unlimited computing time and storage.
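A minimal sketch of that algorithm, under my reading of the definition (rooted unordered trees, label-preserving homeomorphic embedding). It only finishes for k = 1 and 2; for k = 3 it is the same loop, just one that no physical computer will ever see halt:

```python
from functools import lru_cache
from itertools import permutations

# A tree is (label, tuple_of_children), with children kept sorted so
# that equal unordered trees compare equal.

def embeds(s, t):
    """Label-preserving homeomorphic (inf-preserving) embedding s <= t."""
    if any(embeds(s, c) for c in t[1]):
        return True                      # map all of s into one subtree of t
    if s[0] != t[0]:
        return False
    if len(s[1]) > len(t[1]):
        return False
    # match s's children injectively into distinct children of t
    return any(all(embeds(a, b) for a, b in zip(s[1], perm))
               for perm in permutations(t[1], len(s[1])))

@lru_cache(maxsize=None)
def trees_exact(m, k):
    """All rooted unordered trees with exactly m vertices, labels 0..k-1."""
    def child_lists(total):
        if total == 0:
            yield ()
            return
        for first in range(1, total + 1):
            for c in trees_exact(first, k):
                for rest in child_lists(total - first):
                    yield (c,) + rest
    return tuple(sorted({(lab, tuple(sorted(kids)))
                         for lab in range(k)
                         for kids in child_lists(m - 1)}))

def TREE(k):
    """Longest sequence T_1, T_2, ... with at most i vertices in T_i
    and no T_i embedding into any later T_j."""
    best = 0
    def extend(seq):
        nonlocal best
        best = max(best, len(seq))
        i = len(seq) + 1
        for m in range(1, i + 1):
            for t in trees_exact(m, k):
                if not any(embeds(p, t) for p in seq):
                    extend(seq + [t])
    extend([])
    return best

print(TREE(1))  # 1
print(TREE(2))  # 3
```

Both small values come out as expected (TREE(1) = 1, TREE(2) = 3), which is some reassurance that the embedding check matches the intended definition.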
A quick check shows that it’s been proven that Graham’s number is <= BB(64), but that is no doubt a horrible upper bound. It turns out there’s a whole site devoted to these sorts of things.
So one practical aspect: it gives large-number nerds a place to hang out. (From their home page: “We have reached 0% of our goal of 10[sup]100[/sup] articles!” I assume they rounded down to 0%.)