Can humans create random numbers?

Are you talking about looking at a particular finite sequence in itself and pronouncing it random or not? Or looking at a probability distribution of finite sequences and pronouncing it random or not? Or looking at the entirety of a particular infinite sequence, all at once, and pronouncing it random or not? Or looking at the digits output by some particular process one by one and coming to regard the process as random or not? Or looking at a probability distribution of infinite sequences? Or…?

So far as I can tell, ultrafilter is saying “Given a particular infinite sequence, we call it ‘random’ if no algorithm can predict its bits with a limiting success rate better than chance”. He is giving a rule which defines particular infinite sequences as either random or not (just as one might define particular infinite sequences as computable or not). Whether this rule has applications to what you are concerned with depends on what you are concerned with.
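To make the rule concrete, here is a minimal sketch in Python of what “predict its bits with a limiting success rate better than chance” would cash out to. The `Predictor` interface is my own invention for illustration, and a finite run can only estimate the rate on a prefix; the definition itself is about the limit over the whole infinite sequence.

```python
from typing import Callable, Sequence

# A predictor sees the bits so far and guesses the next one.
Predictor = Callable[[Sequence[int]], int]

def success_rate(predict: Predictor, bits: Sequence[int]) -> float:
    """Fraction of this (finite) prefix the predictor guesses correctly."""
    hits = sum(predict(bits[:i]) == bits[i] for i in range(len(bits)))
    return hits / len(bits)

# A trivially non-random sequence, and a predictor that has "learned" it:
alternating = [i % 2 for i in range(10_000)]
guess_alternation: Predictor = lambda prefix: len(prefix) % 2

print(success_rate(guess_alternation, alternating))  # 1.0 -- far better than chance
```

A sequence counts as random, on this rule, exactly when no such predictor exists for it.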

If you want to be formal about it, the appropriate framework is something like the PAC model adapted to online learning.

Yes, to show that a sequence is not random, you can simply exhibit an algorithm that learns it. Proving that a sequence is random is considerably harder.

Well, what about the digits of the fine-structure constant? Are they random? Presumably not, since you can calculate the fine-structure constant from quantum field theory, so there’s certainly an algorithm that gives you each digit.

But that doesn’t mean you can “work out the pattern” from the previous digits. You’re just saying “Hey, that looks like a particular number that I recognize and know how to calculate.” So is our ability to recognize the fine-structure constant as “not random” dependent on the fact that we have a theory of quantum electrodynamics?

If that’s the case, it seems like proving a particular infinite sequence of numbers is random is not just difficult, but impossible. I mean, who is to say that number isn’t just some important fundamental constant of a physical theory that hasn’t been discovered yet, and when we do discover that theory we’ll know just how to calculate it?

In what sense is there an algorithm which computes the digits of the fine-structure constant? Could you write an input-less computer program which outputs them sequentially? It doesn’t appear as though anyone knows how to calculate it to arbitrary precision without going out there and measuring things, which hardly counts.


I guess that’s a bad example, since (if I remember right) what you’d do is calculate the fine-structure constant based on g, or vice versa. So you do have to measure something.

I had thought about using pi for the point I was trying to make, but I’m not sure that’s the best example either . . . . I’ll have to think about it a bit.

So, yeah, as it happens, we don’t know how to compute the fine-structure constant; at least, not the same way we know how to compute, say, sqrt(2) or π. But sure… perhaps the fine-structure constant is computable, even though we don’t realize it. Hell, perhaps, after the millionth digit, it’s all 0s. Or maybe it isn’t. I don’t know how to prove that either way. But the truth of the matter needn’t be tied to our ability to prove it.

Anyway, maybe it turns out that, on a certain computability-based account of what “randomness” is, the fine-structure constant is “random”. By a certain “is it describable from the physical laws of our universe?”-based account, we would definitely not call the fine-structure constant “random”. By other definitions, we still would, and by yet other definitions, we would say it is meaningless to speak of the randomness of a particular number, and would only speak of the randomness of probability distributions. Formalizations don’t have to be one-size-fits-all; the word “random” isn’t used in just one way.

And, yes, many formalizations of “randomness” will naturally be context-sensitive, in the sense that they will be relative to a particular language of description or knowledge base or what have you. (E.g., one natural notion of “randomness” of particular objects is taking “X is random” to mean “X satisfies every property which provably has probability 100% [in a certain proof system, modelling a certain probability distribution, using a certain language of description of properties]”. In this case, what is random depends highly on what proof system we are looking at. A very relative notion, yes, and yet still a useful one with clear ties to the intuitive concept of randomness: for example, in typical systems, we will end up being able to say things like “Every random real in [0, 1] is irrational, any random pair of such reals consists of two unequal random reals, any random sequence of coin flips has a 50-50 split of heads and tails in the long run, etc.”.)
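As a back-of-the-envelope formalization of that notion (my own notation, nothing standard from this thread): with T the proof system, μ the modelled distribution, and P ranging over describable properties,

\[
x \text{ is random} \iff \forall P \,\big( T \vdash \mu(P) = 1 \;\Longrightarrow\; x \in P \big)
\]

which makes the relativity explicit: change T, μ, or the property language, and the set of “random” x changes with them.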

I guess my point is, if a number means something, you can calculate it (in principle). I mean, if I had no idea what pi was, I wouldn’t necessarily be able to answer the question “What’s the hundredth digit of pi?” based on the first 99 digits. But because I know it, I can calculate it.
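Concretely: once you recognize the sequence as pi, the hundredth digit is a few lines of code away. A sketch assuming the mpmath library; any arbitrary-precision method would do:

```python
from mpmath import mp

mp.dps = 110                              # 100 digits plus guard digits
fractional = mp.nstr(mp.pi, 105).split(".")[1]

print(fractional[99])                     # the 100th decimal digit of pi: 9
```

Without the “this is pi” recognition, nothing in the first 99 digits hands you that program.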

So were the digits of the Euler-Mascheroni constant random until Euler came up with it? Presumably the digits were always non-random (because the algorithm for them was always out there waiting to be discovered), but my point is: who’s to say any “random” sequence of digits couldn’t turn out to be some important constant in the mathematics of the 22nd century?

It just seems that proving some sequence of numbers (of unknown origin) is “random” in this sense would be not just difficult, but impossible.

This isn’t really true (unless you’ve gerrymandered your definitions in such a way as to sap it of all significance). Are you familiar with the theory of computation, the unsolvability of the halting problem, etc.?

For example, here’s a definition of a quite significant number B: B’s n-th digit is 0 if the n-th program (in your favorite deterministic, input-less programming language) runs forever, and 9 otherwise.

That number means something, right? And yet, I defy you to come up with a program which computes its digits, one by one. (Hint: it cannot be done).
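To spell the hint out as a sketch (Python; `digit_of_B` is deliberately a stub, since the whole point is that no program can fill it in):

```python
def digit_of_B(n: int) -> int:
    """Hypothetical: 0 if the n-th program runs forever, 9 if it halts."""
    raise NotImplementedError("provably, no program computes this")

def halts(n: int) -> bool:
    # If digit_of_B were computable, this one-liner would decide the
    # halting problem, contradicting Turing's theorem. So B is a
    # perfectly well-defined number whose digits no program can output.
    return digit_of_B(n) == 9
```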

No need to even get into such specifics. An RPG group of mine came up with an ingenious and perfectly fine way of generating D10 numbers without dice: flip open a book, read the page number, and discard all but the last digit. There, a random 1-10 number (reading 0 as 10, as on an ordinary d10). No influence of prior results whatsoever (even if we assume players are going to aim for the general page cluster which yielded them their last critical success).

Don’t you have to choose the left or the right page? Otherwise it’s all even or all odd. Seems like that choice would reduce its randomness.

Maybe you could do a system where you first halve the page number, then use Kobal2’s method?
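A quick simulation of both the worry and the fix (Python; the page-numbering convention is an assumption, since in most books the left-hand page of a spread carries the even number):

```python
import random

def open_book(num_spreads: int = 150) -> tuple[int, int]:
    """Open at a random spread: left page even, right page odd."""
    left = 2 * random.randint(1, num_spreads)
    return left, left + 1

# Always reading the left page: every last digit comes out even.
print([open_book()[0] % 10 for _ in range(15)])

# Halving the page number first removes the parity constraint.
print([(open_book()[0] // 2) % 10 for _ in range(15)])
```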

Hmm, it’s not a bad system for very casual use, but since some pages flip open more readily than others, you’d notice after an hour or two that it’s not a great random number generator.

Yeah, I realized as I was posting that that that wasn’t quite right. (*) Really what I meant was “There are numbers that you can only calculate because they mean something.” That is, it may be that the digits of pi are sufficiently random that you could never calculate the Nth digit based on the pattern of the first N-1, except if you recognize “Hey, this is pi, and I know how to calculate pi.” That is, maybe the digits are statistically random, so that no pattern except being pi arises. No?

If so, it seems our ability to create an algorithm to predict the Nth digit (for any N) is dependent on whether or not we’ve discovered pi. But then, any “random” sequence of digits could turn out to be the solution to some as yet unposed problem, one which is solvable by an algorithm – and thus, not random. No?

(*) Is this really a grammatically acceptable use of three consecutive "that"s?

I’m not sure what you mean by this. Suppose we take a number to be calculable just in case there’s some C++ code which outputs its digits in order. Then it doesn’t matter whether a number “means” anything or not; whether such a program exists or not is presumably independent of whether we know about it.

No. I just gave an example of a number whose digits provably can’t be computed by any computer program: a number whose digits encode the solution to the halting problem.

Yes, it’s perfectly acceptable.

My point, though, is you’d never know it was calculable if it didn’t mean anything. I mean, there’s an infinite number of possible algorithms that might produce the number, and you can’t possibly check all of them. So if you don’t know “This number is the solution to such-and-such problem” and the number passes every statistical test for randomness, then how do you ever find out it wasn’t random? Incredibly lucky guess?

My point is not “Whether a number is random is dependent on whether we know that number’s significance.” My point is “Whether we can tell a number is random may depend on whether we know its significance.” Do you disagree? If I’m right on this point, it seems we can never know for a fact that a particular numerical sequence is random.

You gave a way of defining a sequence of digits that provably can’t be computed. I’m not talking about that case, I’m talking about you giving me an explicit sequence of digits and asking me to test if it’s random. It seems like there’s no way to say “Yes, it’s random.” I can either say, “It’s not random because here’s the algorithm that generates it” or “I have no idea”.

How would you know that a particular algorithm generates it? That’s also an infinitary search (making sure each of infinitely many digits matches up).

You’re right that this definition of “randomness” involves infinite quantification. I just don’t see what that has to do with whether the number “means” anything, or has any “significance”, unless “the meaning of X” is just another way to say “a program that generates X”.

Let’s put it this way: how could we ever tell for a fact that a particular number was equal to π? That’s also a task which cannot be finitarily pulled off; even if you saw that the first million digits lined up with π, the number you’re looking at still could differ later on. But with inductive reasoning, one might choose to conclude that it is likely equal to π all the same.

Well, similarly, (iterated) inductive reasoning can, potentially, demonstrate that a particular number is uncomputable (or whatever else). Instead of inductively establishing that each of its digits matches that of π, we inductively demonstrate that each computer program misses one of its digits. Just the same sort of thing.
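In code, the finite part of that comparison is easy; it’s the jump from a matching prefix to the whole number that is inductive. A sketch (Python, assuming the mpmath library; `candidate_digits` is a hypothetical iterator over digit characters):

```python
from itertools import islice
from mpmath import mp

def pi_digits(n: int) -> str:
    """First n digits of pi (including the leading 3), via mpmath."""
    mp.dps = n + 10  # a few guard digits
    return mp.nstr(mp.pi, n + 1).replace(".", "")[:n]

def matches_pi(candidate_digits, n: int = 1_000_000) -> bool:
    """True if the candidate's first n digits line up with pi's.

    (A million digits takes a while; smaller n works the same way.)
    A True result is evidence, not proof: the candidate could still
    differ from pi at digit n + 1.
    """
    return "".join(islice(candidate_digits, n)) == pi_digits(n)
```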

I’ll admit it’s not a perfect system (although it was an elegant solution to a local problem), but it is an example of human random number generation.

Nowadays, some crypto algorithms are based on similar random number generation, for example relying on the last digit of the exact times at which a number of keystrokes were made (down to the microsecond, so a pattern doesn’t readily emerge), but the same could be roughly achieved with a basic chronometer. Start it, let it run while you do something else, stop it after a random time, jot down the last digit of the seconds, and repeat. During WW2, one-time-use random codes were generated by fishing playing cards out of a hat, or by simple church-style bingo draws.
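A rough sketch of the chronometer version (Python; `time.perf_counter_ns` stands in for the stopwatch, and this illustrates the idea rather than any vetted cryptographic design):

```python
import time

def stopwatch_digit() -> int:
    """Last digit of the elapsed microseconds between two human actions."""
    input("start: press Enter whenever you like... ")
    t0 = time.perf_counter_ns()
    input("stop: press Enter again... ")
    elapsed_us = (time.perf_counter_ns() - t0) // 1_000
    return elapsed_us % 10  # the low-order digit is the part humans can't steer

print([stopwatch_digit() for _ in range(5)])
```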

The point is: there are lots of ways for humans to generate truly random numbers, even without using machines to do so. Or at least, numbers random enough that the chance elements which led to their choice cannot realistically be inferred from the numbers themselves (i.e. you can’t reverse-engineer the generation process from the string of numbers, no matter how many numbers are generated).
The real trick is to design machines to do the same thing, without any human input and without a distinguishable pattern emerging over time.

But how do you inductively establish that every possible computer program misses one digit of a number? It just seems like there’s no way to do this for any actual sequence of digits.

I guess my point about “meaning” is this:

Let’s say I wrote out the infinite sequence 3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,…
(Setting aside the issue of how I would write an infinite sequence)
And I asked you the question “Is this a random sequence of numbers?”

If pi hadn’t turned out to “mean” something, so as to cause us to eventually discover those digits in the process of calculating the ratio of a circle’s circumference to its diameter, it seems highly unlikely we’d ever have discovered the algorithm for it. Doesn’t it?

I mean, is there anything in the digits themselves that suggests any of the various formulas for calculating pi? It seems like if the digits are statistically random (which I’m guessing is true for pi), there’s no reason to pick the right algorithm out of the infinite space of algorithms . . . unless someone says “think circles” or something.
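For what it’s worth, the guess checks out on a simple frequency test (a sketch assuming the mpmath library; for uniform digits the chi-square statistic has 9 degrees of freedom, so values near 9 are unremarkable):

```python
from collections import Counter
from mpmath import mp

mp.dps = 10_010
digits = mp.nstr(mp.pi, 10_001).replace(".", "")[:10_000]

counts = Counter(digits)
expected = len(digits) / 10
chi_square = sum((counts[str(d)] - expected) ** 2 / expected for d in range(10))

print(chi_square)  # near 9: consistent with statistically random digits
```

Which is exactly the problem: the statistics are silent, and only the “think circles” hint narrows the search.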