In that case, though, your B(n) function grows a lot slower, at least for smallish values of n.
Not really. When we talk about B(n), we aren’t concerned with what the 1’s on the tape represent, just the number of them.
Still, if we have an alphabet of two letters (0 and 1) and limit ourselves to only two states, then B(2) must be pretty darn small.
I’m guessing, in fact, that it’s something like 2, with B(3) not being much larger.
Or else I’m still misunderstanding how this works.
B(1) is 1, B(2) is 4, B(3) is 6, and B(4) is 13. B(5) > 4098, and B(6) > 1.29 * 10[sup]895[/sup].
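If it helps make those small values concrete, here’s a quick Python sketch (my own illustration, not from any of the literature) that runs the known 2-state, 2-symbol champion machine and counts the 1’s it leaves behind:

# A minimal Turing machine simulator, just to illustrate B(2) = 4.
# Transition table for the known 2-state champion:
# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),  # 'H' means halt
}

tape, head, state = {}, 0, 'A'
while state != 'H':
    write, move, state = rules[(state, tape.get(head, 0))]
    tape[head] = write
    head += move

print(sum(tape.values()))  # 4 -- the machine halts with four 1's on the tape

The hard part of the busy beaver problem isn’t simulating any one machine; it’s proving that no other machine of the same size does better without running forever.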
Explain to me exactly how you think it works, and maybe I can clear up your misconceptions.
That sounds reasonable, actually. I was just trying to say that the first few values of B don’t instantly explode into hugeness, although the jump between B(5) and B(6) is pretty hilarious.
Oh, definitely. And keep in mind that those are just the best known lower bounds; the gap between them and the true values could be much greater than we know.
I would also like to know this, please.
Scientists don’t care. Mathematicians do. Well, computer scientists may care, but I don’t know of any practical application. There’s an article over at MathWorld which is probably the best summary you’ll get.
ftg could probably tell you the practical significance much better than I can.
Thanks, ultrafilter, for the vote of confidence. Let’s see if this can help.
First of all, the largest numbers in math I was previously familiar with were the Skewes numbers. Not as big as Graham’s, but their significance is better known (to me).
The great Number Theorist G. H. Hardy once said the reason he liked Number Theory so much was that it had no practical application. As a grad student, on my way to my first conference, I found myself in the back seat of a car with Ron Rivest and Len Adleman, who the week before (with Adi Shamir) had come up with the famous RSA encryption algorithm, based on supposedly arcane Number Theory. Hardy had to be spinning in his grave.
Lesson 1 about Math: Nothing is so theoretical that it can never have practical applications. Look at non-Euclidean geometries, etc.
Anyway, RSA encryption is based on the apparent difficulty of factoring very large numbers. (Nowadays, the numbers used have thousands of bits.) This means there are a lot of people interested in knowing everything there is to know about primes and factoring. While Skewes numbers may not themselves be useful, the techniques used in finding them are really quite interesting. There appears to be some interesting “deep structure” in the pattern of primes that is just beyond our current understanding. A little more understanding might lead to interesting breakthroughs, e.g., breaking RSA encryption. Goodbye, PGP. You can bet the NSA has a lot of number theorists working away on this stuff all the time.
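To make the factoring connection concrete, here’s a toy RSA round in Python, using the tiny textbook primes p = 61 and q = 53 (real keys use primes hundreds of digits long). Anyone who can factor n = 3233 back into 61 * 53 can compute the private key, which is the whole point:

# Toy RSA key generation, encryption, and decryption.
p, q = 61, 53
n = p * q                # public modulus; factoring n breaks the scheme
phi = (p - 1) * (q - 1)  # easy to compute only if you know p and q
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse (Python 3.8+); d = 2753

m = 65                    # the message, encoded as a number < n
c = pow(m, e, n)          # encrypt: c = m^e mod n
assert pow(c, d, n) == m  # decrypt: c^d mod n recovers m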
Now, back to Graham’s number. I don’t have access to the key papers, but going by the Wolfram blurb, certain things leap out:
Hypercubes. If you are going to build a high-speed parallel computer, the connections between processors are probably going to be based on a hypercube or a hypercube-derived graph (butterfly, shuffle-exchange, etc.). Very simple to build and to route. But bottlenecks easily occur, so people are quite interested in finding (and fixing) them. So studying hypercubes is A Real Good Idea.
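(For anyone who hasn’t run into these: a d-dimensional hypercube network labels each of its 2^d processors with a d-bit number and wires together every pair whose labels differ in exactly one bit. A tiny Python illustration, mine rather than anything from the papers:

# The neighbors of a node in a d-dimensional hypercube:
# flip each of the d bits of its label in turn.
def hypercube_neighbors(node, d):
    return [node ^ (1 << i) for i in range(d)]

print(hypercube_neighbors(0b000, 3))  # node 0 of a 3-cube -> [1, 2, 4]

Routing is just as simple: repeatedly flip a bit on which your current position and the destination differ.)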
Ramsey Theory. Not a lot of applications of Ramsey numbers in CS yet (outside of their use in Number Theory directly). But I know of at least one in my own little area of specialization. You have a bunch of computers in a network, all (more or less) the same, and you need to “elect” one of them. Having one process be the boss makes a lot of other things simpler. Ramsey numbers appear in proofs about bounds on the number of messages and the length of time it takes to elect a process, depending on the initial starting assumptions.
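As a deliberately naive sketch of the election problem itself (not of the message-optimal algorithms whose bounds involve Ramsey numbers): on a ring, you can just let the largest ID circulate until everyone has seen it. In Python, under those assumptions:

# Naive leader election on a ring: each round, every process tells its
# clockwise neighbor the largest ID it has seen; after n rounds everyone
# knows the global maximum, and its owner is the leader.
def elect(ids):  # ids[i] is the unique ID of process i, in ring order
    n = len(ids)
    seen = list(ids)
    for _ in range(n):  # n rounds let the max travel all the way around
        seen = [max(seen[i], seen[(i - 1) % n]) for i in range(n)]
    return ids.index(seen[0])  # everyone agrees on the max; its owner wins

print(elect([3, 7, 2, 9, 4]))  # 3 -- the process holding ID 9

This costs n^2 messages; the clever algorithms (and the hard lower-bound proofs) are all about doing better than that.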
Note that the bounds given for Ramsey numbers are typically very far apart. (Graham’s number is a quite good example of this.) This is maddening to us CS types. We want to know “is it 30 or 40?” and they tell us it’s somewhere between 6 and a number that takes special notation just to write down. A breakthrough in bounds on Ramsey numbers would not only help us understand existing problems, but encourage people to apply them to new problems. (A lot of people roll their eyes and give up when they hear “Oh, that’s a combinatorial problem that can be attacked using Ramsey Theory.”)
Note that the Real Important Thing is technique. A breakthrough for an obscure value might lead to breakthroughs for a lot of problems of general interest.
Note that questions like “How big does a certain type of graph need to be before it has a certain property?” (besides being an example from Ramsey Theory) have lots of applications.
Suppose you want to build a really great local phone exchange. You want it to have the property that anyone can call anyone without getting a busy signal (assuming the line is not actually busy). If you have ever called Mom on Mother’s Day, you see the problem. There are graphs called “expanders” that can be used, and we know that if we make them a certain size they will work, but the constants are lousy. Make the constants better and you have a much better way of building phone exchanges.
Anything arcane in Math seems to get used sooner or later. I had a graduate student in Math who wanted to do a joint CS PhD. Her area was Algebraic Topology. I didn’t know any applications of it back then. Now, the biggest result in my research area of the last 10 years is based on Algebraic Topology. Oh well.
(An undergrad student of mine did a summer under Ron Graham a few years ago. She did her thesis under me based on that work, but it wasn’t related to this problem. Other than that, my path hasn’t really crossed his in recent years so I’ve never been in a position to learn more about his number.)
Fascinating. Absolutely.
Interesting stuff, ftg. By the way, can you point me towards a proof that Ackermann’s function is not primitive recursive? It sure looks that way to me.
Here’s one proof. Note that it’s in two parts, with the second containing the answers to the lemma-proof exercises of the first. (Use two n’s in “Ackermann” when Googling.)
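For reference, the function itself is tiny to write down; it’s only the growth that’s monstrous:

# The two-argument Ackermann function. Perfectly computable in principle,
# but not primitive recursive: no program built from only bounded loops
# can grow this fast.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
# ackermann(4, 2) already has 19,729 digits; don't evaluate it this way.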
Thanks a bunch. I don’t quite understand it yet, but I haven’t done much more than skim it.
See, that just shows that Google’s fuzzy search isn’t all that it should be.
YES, IT IS FASCINATING. WELL, NO - MORE LIKE HYPNOTIC. SO, NOW I REMEMBER WHY I HATED MATH, THEN ALGEBRA, THEN GEOMETRY, AND TRIG AND CALCULUS AND ALL THOSE OTHER COURSES I TOOK IN HIGH SCHOOL AND COLLEGE… MY HEAD HURTS… I JUST COULDN’T STOP READING THIS THREAD AND NOW WHAT LITTLE OF MY GREY MATTER STILL REMAINS IS BEGGING ME TO SCREAM FORTY-TWO AND RUN FAR, FAR AWAY!!!
42 ! ! ! ! !
Just for the sake of curiosity, I decided to rewrite the sequence of f’s in the OP as a single function. For technical reasons, I decided to start counting from 0 instead of 1.
We need an intermediate function g. g is defined by g(a, b, 0) = a[sup]b[/sup], and g(a, b, n + 1) = a[sup]g(a, b, n)[/sup]. That takes care of the | operator.
Now we can do f. f(0, 0) = g(3, 3, 0). f(0, n + 1) = g(3, 3, f(0, n)). f(m + 1, 0) = f(m, 0). And finally, f(m + 1, n + 1) = f(m, f(m + 1, n)).
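If it’s easier to read as code, here’s a direct Python transcription of those definitions (mine, just restating the recurrences above). Only f(0, 0) = 27 is feasibly computable; even f(0, 1) is a power tower of 3’s far too tall to evaluate:

def g(a, b, n):
    # g(a, b, 0) = a^b; g(a, b, n + 1) = a^g(a, b, n)
    return a ** b if n == 0 else a ** g(a, b, n - 1)

def f(m, n):
    if m == 0:
        return g(3, 3, 0) if n == 0 else g(3, 3, f(0, n - 1))
    return f(m - 1, 0) if n == 0 else f(m - 1, f(m, n - 1))

print(f(0, 0))  # 27 -- the only value this will ever actually print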
The recursion in the last case does terminate eventually, because n will get down to zero. More in a couple, including why I posted this.
OK, I’m back now.
The reason that I find this specification of f interesting is that, as long as the first parameter (corresponding to the subscript in the old sequence) is fixed, f is primitive recursive. But if you let it vary, then it’s not (I think; that would require a proof, and that might take a while).
I don’t think that there’s an intuitive notion corresponding to the distinction between primitive recursive and recursive. When I get home, I’ll check the definition and see if I can come up with something.
Nope, no intuitive notion. That’s why it’s tough to prove that something isn’t primitive recursive.
:smack:
…except for the really obvious one. Primitive recursive functions can be computed in a language like C++ or Java using only if-else statements and loops with a fixed number of iterations. Functions which are recursive but not primitive recursive require a loop with a variable number of iterations.
By “fixed” do you mean “the number of iterations is set at the time of entry into the loop” and by “variable” do you mean “the number of iterations is not set when the loop is entered?”
E.g.,
for i in range(1, 11):         # outer bound: exactly 10 iterations, set at entry
    for j in range(1, i + 1):  # inner bound depends on i, but is set when this loop starts
        pass                   # do some math on i and j
The inner loop is “variable,” but i is known each time the loop is started.
OTOH
while n > 1:  # the number of iterations is not known at entry
    n = (3 * n + 1) // 2 if n % 2 else n // 2
The length of the while loop is not “known” (loosely speaking) when the loop is entered.
I thought such restrictions put you within the Grzegorczyk Hierarchy, which (in complexity terms) gives you “merely” a fixed nesting of exponentials, and thus well below general primitive recursion.
Yes, I’m pretty sure that’s what I meant. This isn’t something I pulled from my own expertise, though; I read it on a (reputable) website. I’m also pretty sure that Hofstadter covered this, so I’ll look it up when I get home.