Is there any practical use to really large number mathematics (e.g. Graham's Number or TREE(3))?

Watching some YouTube videos about the biggest numbers ever used in a legitimate calculation usually turns up things like Graham’s number or TREE(3).

The numbers represented in those videos are beyond huge. Indeed, while Graham’s number is truly ridiculously huge, TREE(3) makes Graham’s number seem a speck.

As is noted in the Graham’s number video, if you could hold that number in your head your head would literally collapse into a black hole. Tree(3) is waaaay bigger.

My question is, while it is fun to think about these things, are they of any use whatsoever? Or are they just playthings? The explanation of Graham’s number, while a real calculation, seems completely arbitrary and more a math puzzle than anything useful.

Am I missing something? Is working these sorts of problems out a useful effort in advancing mathematics?

Graham’s number arose as an upper bound in a combinatorial problem, which seems eminently practical. That tree function fits into the same general theme in a sense, as it concerns counting trees (the combinatorial structures, not botanical ones).

In short, these numbers arise from real mathematical investigations, not some random puzzles.

ETA Wikipedia actually has a decent explanation of the history of those numbers, including links to mathematical references.

Kruskal’s tree theorem, which I think is a parent to TREE(3), comes from the same Joseph Kruskal whose algorithm is used for things like finding a minimum spanning tree.

Here is an example from a graph library that shows it being used, since this format doesn’t really let me demonstrate how to use it:

Generate edges in a minimum spanning forest of an undirected weighted graph.
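That description reads like the docstring for a Kruskal-style routine. For anyone who wants to see the idea without the library, here is a minimal self-contained sketch of Kruskal’s algorithm; the `(weight, u, v)` edge format and the function name are my own choices, not the library’s:

```python
def kruskal_mst(n, edges):
    """Minimum spanning forest of an undirected weighted graph.

    n: number of vertices, labeled 0..n-1
    edges: iterable of (weight, u, v) tuples
    """
    parent = list(range(n))           # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it joins two components
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

print(kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
# → [(1, 0, 1), (2, 1, 2), (4, 2, 3)]
```

The union-find inside is where the inverse-Ackermann running-time factor mentioned later in this thread comes from.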

Graph arborescence and pruning are pretty critical in modern systems where Big-O algorithm time complexity becomes a problem.

I can’t think of quite how to describe this, but showing that a seemingly unbounded tree computation actually terminates is of huge value.

Producing a tree with a minimum sum of edge weights is one way to think of it. And because the common-ancestor (embeddability) criterion in TREE(3)-like problems constrains the search, solutions that exploit it can be much faster than exhaustive search over a directed acyclic graph (general subgraph matching being NP-hard, as an example).

Hint for programmers:

If your algorithm is O(TREE(n)) then you need to go back to the drawing board.

OTOH, if you have an algorithm that is O(n * InverseTree(n)), for example, you can treat it as linear time.

There are several algorithms where log* or the inverse Ackermann function is a factor, and you can treat those as constants in practice. Even log log n is going to be about 7 at most for any realistic input size.
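To see just how flat these functions are, here is a quick sketch of log* (the iterated logarithm); the helper name `log_star` is mine:

```python
import math

def log_star(n):
    """Iterated logarithm: how many times log2 must be applied
    before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(16))          # → 3
print(log_star(2 ** 65536))  # → 5, for a number with almost 20,000 digits
```

For any input that could ever be stored in this universe, log*(n) tops out at 5, which is why it gets treated as a constant.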

In Computer Science there are occasionally algorithms discovered where the constant factor is large-ish, in the 100s or worse, e.g., 233 n[sup]2[/sup]. But once you know it’s quadratic, people can typically find algorithms with better constants.

This doesn’t apply to the particular numbers cited by the OP, but it might happen with other bounds.

I wrote a paper once where I just came up with a quick and dirty very large bound for something to show that certain classes were distinct. I could have really dropped the value down if I wanted to. But since I only needed to prove such a bound existed, why bother?

Ah, I see. :cool:

Yeah, for most of these problems, the real goal is to show that something is finite. If you can find some defined number, and show that the thing you’re looking for is always smaller than that, then you’ve done that. And if finiteness is literally all you’re looking for, then you don’t need to be efficient about defining that upper bound, and an easy-to-find upper bound that’s ludicrously large is better than a hard-to-find upper bound of a reasonable size.

If finiteness isn’t literally the only thing you care about, then it might be useful to have a smaller upper bound. But even in that case, the ludicrous bound is still useful, since the method you used to define that might provide a framework that subsequent researchers can use to find smaller, more practical, bounds.

EDIT: Oh, and there’s a very real sense in which G or TREE(3) or whatever is actually incomprehensibly small. There are vastly more numbers larger than TREE(3) than there are numbers smaller than it.

There’s a sort of paradox in my intuition; I wonder if it affects others.

An infinity like ℵ[sub]0[/sub] doesn’t particularly bother me. P(ℵ[sub]0[/sub]) or P(P(ℵ[sub]0[/sub])) seem humdrum, boring.

But TREE(3) or Graham’s number G(64) or, heaven forbid, TREE(TREE(3)) or G(G(64)), while “smaller than infinity,” just boggles the mind. The very existence of such numbers seems unsettling!

One can invent various weird “puzzles” whose solutions involve these large numbers, though that is not usually how they were originally described.

For example, consider a hydra (rooted tree) with many heads (leaves of the tree). If you cut off a head directly attached to the root, it does not grow back; but if you cut off a head not directly connected to the root at step n, the hydra regenerates n copies of the part of the tree above the branch node from which the head was removed. How many steps will it take to cut off all the heads? A lot… but the point of this type of theorem is that the number is finite.
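The hydra game is small enough to simulate for toy trees. A sketch, assuming the standard Kirby–Paris rule (after the cut, n copies of the now-modified parent subtree sprout from the grandparent); the nested-list representation and the always-cut-the-rightmost-head strategy are my own choices:

```python
import copy

def leaf_path(tree):
    """Path of child indices from the root to the rightmost leaf."""
    path, node = [], tree
    while node:                       # descend while the node has children
        path.append(len(node) - 1)
        node = node[-1]
    return path

def hydra_steps(tree):
    """Cut rightmost heads until the hydra is dead; return the step count."""
    n = 0
    while tree:                       # root still has heads
        n += 1
        path = leaf_path(tree)
        if len(path) == 1:
            tree.pop(path[0])         # head at the root: no regrowth
        else:
            g = tree
            for i in path[:-2]:       # walk down to the grandparent
                g = g[i]
            p = g[path[-2]]
            p.pop(path[-1])           # cut the head
            for _ in range(n):        # parent subtree regrows n times
                g.append(copy.deepcopy(p))
    return n

print(hydra_steps([[[]]]))    # → 3: a two-level hydra dies in 3 steps
```

Even tiny starting trees take a surprising number of steps, which is the whole point: the count is always finite, but it grows monstrously with the height of the hydra.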

Yup, septimus, that’s quite common. Most people’s notion of “a million” is larger than their notion of “infinity”.

It would take me longer to count to a million than it would to know that I can’t count to infinity. A million is actually a number, infinity isn’t.

In any case, what I’d love to see these numbers used for is some of these resource management incremental games.

They already get up to numbers that most would consider indistinguishable from infinity. I would love to see them add some up arrow notation in the numbers, though, that’d take it to a whole new level.
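Up-arrow notation itself is a three-line recursion; the trouble is that the values leave the universe almost instantly (Graham’s g[sub]1[/sub] would be up_arrow(3, 4, 3), which no game could ever evaluate). A sketch:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n = 1 is plain exponentiation,
    and each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # → 27 (3^3)
print(up_arrow(3, 2, 3))  # → 7625597484987 (3↑↑3 = 3^3^3)
print(up_arrow(2, 3, 3))  # → 65536 (2↑↑↑3)
```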

If you really want to pop a brain gasket, watch this video. It is extra footage from the video on TREE(3) I linked in the OP (I did not know it existed when I wrote the OP, hence the delay).

Basically the speaker discusses how long it would take to settle this problem using finite mathematics. The short version: while it can be done in principle, it is essentially impossible in practice. No matter how fast you worked (within the literal physical limits of the universe), the universe would reset itself long before you ever managed to do it. That, and the proof is too big to fit in this universe.

Mathematicians have other (non-finite) means to get answers to these questions fortunately.

1, 3, TREE(3) is a pretty rapidly growing sequence. :smiley:

While watching the follow-up video I linked just above, the speaker mentions, in connection with Kruskal’s tree theorem (which I think is a parent to TREE(3)), that TREE(n) cannot be proved finite for all n using finite mathematics, BUT you CAN prove that TREE(n) is finite using finite mathematics as long as you give a specific value to n.

So, you can prove TREE(3) and TREE(4) and TREE(98234698234) are each finite (in principle) using finite mathematics, but you cannot prove the blanket statement that TREE(n) is finite no matter what value you use.

I hope I got that all right and haven’t lost anyone (watch the video…he explains it much better than I do).

What I do not understand is how that works. How can it be said that you cannot prove TREE(n) is finite in general, yet you CAN prove it is finite for whatever number you choose to put in place of n? If I can prove any given value of TREE(n) is finite (using finite mathematics), then why can I not say all values of TREE(n) are, ultimately, finite?

That’s what blows my mind.

TREE(1) = 1

TREE(2) = 3

TREE(3) = A number waaay bigger than the universe.

Completely weird.

As Chronos noted, TREE(3) is just a tiny itsy-bitsy value in the greater scheme of things. Most numbers are bigger than the universe and, by the same token, I would expect that most number sequences should grow faster than TREE(n)… True? Obviously the ones we actually use don’t, but that’s because most familiar sequences are things like i[sub]n + 1[/sub] = i[sub]n[/sub][sup]2[/sup] + 1. We’re usually choosing functions with tiny growth rates, which isn’t how most possible sequences would behave.

This reminds me of this interesting fact. If you pick a positive integer less than a googol in magnitude at random, it is almost certain that that integer will have a “9” in its base-10 representation. But for integers in the range we see on a day to day basis (say, integers less than 2000 or so), having “9” in the base-10 representation is fairly rare. TREE seems unusual to us, because it’s not much like the functions we need in daily life - but as you suggest, if we picked a function “at random” somehow out of a bigger set of possible functions than we usually imagine, TREE might seem much more typical.
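That claim is easy to check exactly: 9[sup]k[/sup] of the integers below 10[sup]k[/sup] avoid the digit 9 entirely (nine allowed choices per digit position). A quick sketch, with a brute-force cross-check (helper name mine):

```python
def frac_with_nine(k):
    """Fraction of integers in [0, 10**k) containing the digit 9."""
    return 1 - (9 / 10) ** k

# brute-force check for small k
for k in (2, 3):
    brute = sum('9' in str(i) for i in range(10 ** k)) / 10 ** k
    assert abs(brute - frac_with_nine(k)) < 1e-12

print(frac_with_nine(4))    # ~0.34: about a third of numbers below 10,000
print(frac_with_nine(100))  # ~0.99997: below a googol, a 9 is almost certain
```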

From the Hitchhiker’s Guide:

For me I think it is the amazing progression that blows my mind…1, 3, ludicrously huge.

When discussing a function rather than a single natural number, the rate of growth of the function is of theoretical and practical importance. A function that grows too fast may not be primitive recursive or even computable.

As Whack-a-Mole notes, “finite mathematics” (i.e., Peano arithmetic) cannot prove that Friedman’s TREE(n) function is defined for all natural numbers, even though for any particular n it can prove the statement “TREE(n) is finite”. Simply trying to combine all the statements “TREE(1) is finite”, “TREE(2) is finite”, etc., obviously does not result in a finite proof.

One reason such functions may be “blowing your mind” is that most familiar functions are primitive recursive, and there is a limit on how rapidly such functions can grow with n. So encountering “in the wild” a function from outside this comfort zone may seem weird.
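The canonical example of a computable function that is not primitive recursive is Ackermann’s, mentioned upthread; a direct sketch (safe only for tiny arguments, since the naive recursion explodes):

```python
def ackermann(m, n):
    """Ackermann's function: computable, but grows faster with m than
    any primitive recursive function can."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 3))   # → 61; ackermann(4, 2) already has 19,729 digits
```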