Extremely Large Numbers

A fair point, although I think the scope of the OP is meant to be integers only.

I’ll argue that that statement also holds.

Is the set of all human-usable notation systems finite? I would say yes, given that there are only so many symbols we can possibly distinguish, and only so many ways of arranging them into expressions on paper.

Assume that’s true, for the moment. Then, for every notation system, there is a maximum integer that is human-feasible to be expressed in it. Since there are a finite number of notation systems, there is a grand, ultimate, maximum integer that can be expressed in all systems put together. Of course, whatever integer that is, an infinite number of integers will be larger still.

I’m kind of hoping you can shoot me down here.

[I wrote a long post below, but first, I’d rather just resummarize my main point: to say a number cannot be specified, in any potential notation system whatsoever, by a string of length less than X is, in some sense, to say that number cannot be in the range of any potential function whatsoever whose domain is the strings of length less than X. But of course that’s a ludicrous thing to claim; there are all kinds of functions on that domain, including, most assuredly, ones with the desired number within their range. All that’s really true is that for any particular fixed function on that domain, there exist numbers outside its range. It’s the difference between claiming “For every finite list L, there are only finitely many numbers n such that n is in L” (true) and “There are only finitely many numbers n such that there exists a finite list L such that n is in L” (false).]
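The quantifier-order distinction above can be made concrete with a toy Python sketch (my own illustrative model, not anything from the thread): fix a finite set of short strings; any one "notation system" is just a function from those strings to integers, and misses almost everything, while for any target integer some contrived system names it instantly.

```python
from itertools import product

MAX_LEN = 3
ALPHABET = "01"

def short_strings():
    """All strings over ALPHABET of length < MAX_LEN (a finite set)."""
    for n in range(MAX_LEN):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def binary_system(s):
    """One fixed 'notation system': read the string as a binary numeral."""
    return int(s, 2) if s else 0

# True claim: for this one fixed system, only finitely many integers are
# nameable, so some integer (e.g. the max nameable + 1) is out of range.
nameable = {binary_system(s) for s in short_strings()}
assert max(nameable) + 1 not in nameable

# False claim refuted: for ANY target integer whatsoever, some contrived
# system names it with, say, the empty string.
def contrived_system_for(target):
    return lambda s: target if s == "" else 0

huge = 10**100
assert contrived_system_for(huge)("") == huge
```

Nothing deep is going on here; it is just the difference between "every fixed function has numbers outside its range" and "some number is outside every function's range".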

It comes down to, what is meant by a “human-usable notation system”?

For any entity X, one might imagine a notional “notation system” under which, say, a single dot refers to X (and, let’s say, nothing else can be expressed). Of course, there are as many of these contrived notation systems as there are entities one might wish to use them to “describe”, and they are of very little practical use in “describing” those entities. But what is it that makes them impractical?

Well, of course, the difficulty is in describing how to use such a notation system (“meaning is use!”). The potential implication being: one oughtn’t just place one dot in front of a person and ask them what it means; one ought to place one dot in front of a person, and then a description of a notation system under which to interpret that dot, and then ask them what it means.

But then we need a notation system for describing notation systems. And a notation system for describing notation systems for describing notation systems, and so on, in infinite regress. On this account, nothing could ever be specified…

But, of course, that’s not how it actually works, as might become clear from considering the analogous problem for, say, programming languages on a computer. There one understands how the regress ends: the computer has a native language (its machine code) which it knows how to interpret (in the sense of being able to carry out in the desired fashion the actions described by programs in the language) without need for further specification. And any other programming language one cares about is ultimately to be translated into this one, so far as execution on this computer goes.

Well, that amounts to saying that one is interested only in one notation system, the native machine code language of the computer. And, of course, in that one fixed notation system, as in any one fixed notation system, the vast majority of numbers have huge minimal specification lengths (for at most K many numbers have specifications within the K shortest strings).
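The parenthetical counting argument is just pigeonhole, and a small Python sketch (again my own toy model: a fixed "system" decoding strings via bijective base-2) shows it directly: the K shortest strings can specify at most K distinct numbers.

```python
from itertools import product

ALPHABET = "ab"

def strings_up_to(max_len):
    """All strings over ALPHABET of length <= max_len, shortest first."""
    for n in range(max_len + 1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def decode(s):
    """Toy fixed system: bijective base-2 ('a' = digit 1, 'b' = digit 2)."""
    n = 0
    for c in s:
        n = n * 2 + (1 if c == "a" else 2)
    return n

K = 15  # lengths 0..3 give exactly 1 + 2 + 4 + 8 = 15 strings
shortest = list(strings_up_to(3))
specified = {decode(s) for s in shortest}

# Pigeonhole: the K shortest strings specify at most K distinct numbers,
# so every number beyond those needs a longer description in THIS system.
assert len(shortest) == K
assert len(specified) <= K
```

So in any one fixed system, all but finitely many numbers have minimal descriptions longer than any given bound.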

But this still does not contradict the fact that there are infinitely many potential languages out there, and for every number, there is some potential language under which it has a very short description. It’s just the business of translating that language into a particular computer’s one fixed native language which may blow the size up.

Of course, nothing special about Intel architecture electronic computers here. You can think about any other program-executing devices just as well; e.g., humans. Though what a human’s “natively interpretable language” is could rather vary from person to person and over time… It may be that for one human, it is a very short matter to specify to them the instructions to do [whatever], while for another (perhaps generations later, having grown up with myriad different instincts in a very different culture) this is a laborious matter, or vice versa. [One could also fix a theory of physics, and ask about the minimal, in some sense, physical configuration carrying out some desired behavior; these are all just different cases of the question “What is the smallest X such that f(X) = Y?” and the realization that the answer depends on the function f]

I think I’m babbling now, but hopefully, the particular distinction I wish to draw has been well drawn…

Thanks for the substantive reply.

Right. I intended “human-usable” to mean something like “making use of human-comprehensible definitions and functions”, if that’s not too squishy.

Make it “comprehensible to professional mathematicians, in the current time and culture”, if that nails it down better. (Or let it mean “renderable in LaTeX” if that works better yet, and isn’t fundamentally limiting somehow.)

Doesn’t “blowing the size up” — i.e. compiling to a machine language program that runs out of memory, or takes a thousand years to execute — correspond exactly to the case of a number notation system that isn’t “human-feasible” for a particular number?

How about the [grand, ultimate, maximum integer] + 1?

You can’t express that number, by definition. That would require a bit more ink to print, or disk space to store, or human lifespan to read, than you have available.

I take the phrase “human-feasible length”, used upthread, to refer to the limits imposed by the physical universe. We can make the limits very generous, but they’re still finite whatever they are.

And while we’re at it, the expression “[grand, ultimate, maximum integer] + 1” does not express any number, since for any notational system capable of making statements like that, it’s impossible to know what the grand, ultimate maximum integer is. The problem is that not all expressions correspond to numbers, and it’s not always possible to tell whether a given expression corresponds to a number or not. So you’ve got a big pile of expressions, and some of them you know correspond to numbers, and some of them you know don’t correspond to numbers, and some of them you’re not sure. You can certainly find the largest number among the ones that you know are numbers, but maybe there’s one in your unknown pile that’s bigger than that.

I am happy to agree with this, but we ought to note that this amounts to a rejection, in this context, of the law of the excluded middle (that is, we are rejecting that one can disjunct exhaustively over the possibility “E does express a number” and the possibility “E does not express a number”).

[More formally, the phenomenon Chronos is pointing out is that, in intuitionistic mathematics, it’s not true that every partial function from a finite set to the integers has an upper bound to its range]

Thanks for pointing that out, Polycarp, we’ll get it fixed.

I was surprised to discover that I wrote that Staff Report, way back in '99. Sheesh. Interesting to learn that the Brits are adopting the American style. Probably makes sense, since it uses the terms for a lower number and so would come up more frequently. I used to get to the UK fairly often, and so had lots of resources there; not so much in the last decade. I’ll put it on my to-do list to look at possible revision.

Huh, Indistinguishable didn’t find anything to quibble with in one of my mathematical statements-- That’s a first.

Would I be correct in saying that this is closely related to the undecidability of the halting problem?

It strikes me as closer to the Berry paradox, and Chaitin’s work on algorithmic complexity, but this is not my area of expertise.

Heh, I don’t mean to constantly pick on you, if that’s how you feel about it (though I believe I’ve expressed this insecurity before that I’m constantly doing so). I quibble with everyone’s mathematical statements… even my own posts are rarely unedited.

Yes, I would agree with that as well. If you were to idealize descriptions of numbers as programs which potentially output numbers, for example, then a solution to the Halting Problem would allow us to determine which programs describe numbers and which don’t, run all those that do, and find their maximum. And, conversely, if we could find the maximum number output by programs of a given size, we could just as well find the maximum number of steps taken before halting by programs of a given size, run any program of that size for the corresponding number of steps, and then determine whether it will ever halt, thus deciding the Halting Problem. But, of course, we cannot do either, by the typical diagonalization/quining arguments, of which Berry’s paradox (and Chaitin’s work) end up being special instances.
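The second reduction can be sketched in Python with a deliberately fake, finite world of "programs" (my own toy model; the whole point is that no such oracle is computable for real programs):

```python
# Toy model (hypothetical): finitely many "programs", each either halting
# after a known number of steps or looping forever (None). In this toy
# world the busy-beaver-style bound IS computable; for real programs it
# is not, which is exactly the undecidability being discussed.
TOY_PROGRAMS = {"p1": 3, "p2": 7, "p3": None, "p4": 42}

def max_halting_steps(programs):
    """The 'oracle': the most steps any halting program takes."""
    return max(s for s in programs.values() if s is not None)

def halts(name, programs):
    """Given the oracle's bound, halting is decidable: run for that many
    steps; anything still running by then will never halt."""
    bound = max_halting_steps(programs)
    steps = programs[name]
    return steps is not None and steps <= bound

assert halts("p4", TOY_PROGRAMS)
assert not halts("p3", TOY_PROGRAMS)
```

Replace the finite dictionary with all programs of a given size and the sketch becomes the genuine (and impossible) reduction.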

Speaking of which, if I may veer off-topic for a moment: one thing that’s been on my mind for a while lately is that I recall I used to argue with you on several occasions about the viability of interpreting the meanings of probabilities via relative frequencies (this being your position on what “probabilities” definitionally are or ought to be, and my saying that couldn’t possibly be right or even coherent). I’ve realized I’ve become much more sympathetic to, and indeed am now basically a vitriolic proponent of, views along your lines here, but the opportunity to admit and explain the change in my perspective and give you your due as essentially the winner of that argument has yet to arise organically. (I also have difficulty finding the old threads in which we had these arguments, so maybe there were nuances I’m forgetting…)

Hey, no hard feelings. Thus is it always, between mathematicians and physicists.

You’re assuming that a given symbol can’t be reused, with different meanings, in different notation systems.

For example, I can define an infinite number of notation systems where the symbol “A” has a different meaning in each one. So in one system A represents 1. In another system, A represents 2. In another, A represents 3, etc. You can always have a system in which some single symbol represents any number of any size.
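This family of systems is trivial to write down; a minimal Python sketch (purely illustrative, one system per integer):

```python
# Minimal sketch: an infinite family of notation systems, one per integer
# n, each interpreting the single symbol "A" as n (and nothing else).
def system(n):
    """The notation system in which 'A' denotes n."""
    return {"A": n}

# The same symbol names 1 in one system, a googol in another.
assert system(1)["A"] == 1
assert system(10**100)["A"] == 10**100
```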

Note that, although English-language use is now almost entirely short-scale, many other languages still use long-scale or short-scale or both. (Indeed, a hundred years ago, the usual terms used in discussing the problem were neither “short-scale” and “long-scale” nor “American” and “British”, but “German” and “French”.)

And engineers.

(Can you imagine, using j for the square root of negative 1 instead of i? What were they thinking?!)

I can imagine using j for square roots. I can even imagine all the mucking about that they do whenever they need to use tensors, though that’s probably just because nobody’s ever thought to teach them the Einstein summation convention. What I can’t imagine is throwing around g willy-nilly in all of their equations, just to attempt to keep a broken system of units from falling apart completely.

What’s the business with g in engineering? I’m not familiar (or if I am, I don’t recognize my familiarity…).

Trying to convert pounds-force (lbf) to pounds-mass (lbm). There’s a correction factor g[sub]c[/sub] that gets inserted. Except some silly engineers stick the correction factor in the formula as if it is a real term, and not just a units correction. Then they turn around and define g[sub]c[/sub] as 1 for metric. :rolleyes:

(Says an engineer.)