Random patterns

What exactly is a random pattern?

At first I was thinking that it was an oxymoron. Next I thought that if I had a set of patterns and one was chosen at random then that would be a random pattern. Then I decided that I had no idea what a random pattern was.

Isn’t randomness essentially the complete lack of a pattern?

A random pattern, as far as I know, would be a pattern whose next value cannot be known from the information given by its previous values.

1, 2, 3, 4 … is not random.

But if I gave you the series:

10.8, 10.3, 10.3, 10.1, 10.6, 10.2, 10.2, 10.6, 10.8, 10.6, 10.4

And asked you to tell me the next number in the series, you’d have a hard time with it.

But even though you wouldn’t be able to tell me what the next number in the series was, you can still tell me what the next number in the series is likely to be, and can even calculate a rough probability of a given number (say, 10.7) occurring next.
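For what it’s worth, here’s one way to make that “rough probability” concrete. This is only a sketch, assuming (purely for illustration) that the values are roughly normally distributed; nothing above says they are.

```python
# Rough sketch: estimate how likely the next value is to round to 10.7,
# assuming (only for illustration) the series is drawn from a normal
# distribution fitted to the sample.
from statistics import NormalDist, mean, stdev

series = [10.8, 10.3, 10.3, 10.1, 10.6, 10.2, 10.2, 10.6, 10.8, 10.6, 10.4]
model = NormalDist(mean(series), stdev(series))

# Probability the next value lands within 0.05 of 10.7,
# i.e. would be written down as 10.7 at the precision used above.
p = model.cdf(10.75) - model.cdf(10.65)
print(f"P(next value rounds to 10.7) is roughly {p:.2f}")
```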

If so, then is askol’s post a random series? And if so, we’re back to the question, what is a random pattern?

Sure it is. A random series of numbers can be any series. I’m sure some lottery somewhere has hit 1,2,3,4,5.

Random patterns are (IMO) overall patterns (deterministic) that are composed of random elements. An Airy pattern comes to mind.

I wonder if fractals would count.

Turbulence?

The finite sequence { 1, 2, 3, 4 } can be random, but askol included the easily-overlooked ellipsis. Typically, this is meant to indicate “and you can figure out how the rest of it goes”. So { 1, 2, 3, 4, … } is the sequence of positive integers. By its very nature, anything with a “…” at the end is not random.

“Random pattern” is a sloppy idiomatic phrase. Don’t look to math for the answer, look to English usage.

Here’s a rough analogy.

We can ask how hot something is, and we can say that if two temperatures are different, then one is hotter than the other.

So that means it makes sense to say “-273 degrees C is 0.15 degrees hotter than absolute zero”. But somehow the word “hotter” doesn’t fit real well in that sentence. Far better to say “Absolute zero is 0.15 degrees colder than -273 degrees C.”

Patterns can have degrees of orderliness from total to darn near zero. The latter limiting case is called “random”.

But putting the two words together is like discussing how hot -273 degrees is.

Finally, the term “pattern” is English, not math. It doesn’t have a precise mathematical meaning like the term “random” does. Mixing specialized terms and colloquialisms is bound to produce some odd almost-sensible-almost-not phrases. Like “random pattern.”

Couldn’t agree more…

When I googled “random pattern generator” I got a number of results about UZ-I and UZ-II.
“UZ-II is a random pattern generator that draws various vector patterns.” http://uzii.narod.ru/
The files look like spirograph drawings. How are these “random patterns”?
What does it take to “generate” random results? I understand that a slot machine, for example, generates random numbers even when it is not being used.

In this sort of thing, I would say that what people mean by “random pattern” is a pattern that’s generated by some single or finite set of algorithms, but that the selection of which algorithm and/or the values of the input parameters are random.
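To make that reading concrete, here is a rough sketch. The spirograph-style curve is only my guess at the kind of thing UZ-II draws, and the parameter ranges are made up.

```python
# Rough sketch of "deterministic drawing rule, randomly chosen parameters".
# The curve is a hypotrochoid (spirograph-style); the parameter ranges are
# arbitrary and only here for illustration.
import math
import random

def random_spirograph(n_points=2000):
    R = random.uniform(3.0, 10.0)      # outer radius, chosen at random
    r = random.uniform(1.0, R - 1.0)   # inner radius
    d = random.uniform(0.5, r)         # pen offset from the inner circle's centre
    points = []
    for i in range(n_points):
        t = 2 * math.pi * 20 * i / n_points
        x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
        points.append((x, y))
    return points

# Each call draws a perfectly regular curve, but which curve you get is random.
pts = random_spirograph()
```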

A pattern is random if it has no shorter description than itself.

Hmmm! I know what you mean, ultrafilter, but what about:

Adam
Red
A
1
Doh
Hydrogen
Mercury
Washington (or was it Hanson(?))

Which is a list of firsts (man, colour, letter, number, note, element, planet, US-president), hardly “random” but I can’t find a shorter description (unless you count “list of firsts” but that’s incomplete).


Anyway, so the digits of pi aren’t random because we have a beautiful shorthand for it, “pi”, and numerous finite descriptions of how to generate it. Likewise for e, or any other irrational number (I bet someone will pop along to give a counterexample).

But if I take the decimal expansion of e = 2.718281… and choose to “step” through the digits of pi, skipping 2, then 7, then 1, then 8, and so on, writing down the corresponding digits of pi (giving us something like 4, 5, 9, etc.), then this generated sequence is still describable by a succinct shorthand.

But suppose I hadn’t told you that? And suppose I’d started at the millionth digit of pi, and the 141421356237th digit of e – then I challenge anyone not knowing the algorithm to identify the result as non-random!
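Just to pin the basic construction down (starting from the first digits, not the millionth), here is a rough sketch. The digit strings are hard-coded to keep it self-contained; a real run would pull arbitrarily many digits from a library such as mpmath.

```python
# Rough sketch of the construction: use the digits of e as skip counts while
# stepping through the digits of pi.
PI_DIGITS = "314159265358979323846264338327950288419716939937510582097494"
E_DIGITS  = "271828182845904523536028747135266249775724709369995957496696"

def stepped_sequence(source, skips, count):
    """Skip skips[0] digits of source, write one down, skip skips[1], and so on."""
    out, pos = [], 0
    for skip in (int(d) for d in skips):
        pos += skip                      # skip this many digits of pi
        if pos >= len(source) or len(out) >= count:
            break
        out.append(int(source[pos]))     # write down the digit we landed on
        pos += 1
    return out

print(stepped_sequence(PI_DIGITS, E_DIGITS, 10))   # starts 4, 5, 9, ...
```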


The Sierpinski Gasket can be generated in more than one way – of specific interest is the method that this page calls “the chaos game”. Could the resulting pattern not properly be called a “random pattern” (particularly since, if the next node is not chosen sufficiently randomly, the pattern does not appear)?
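For anyone who hasn’t seen it, the chaos game is only a few lines. Here’s a rough sketch; the vertex coordinates and step count are arbitrary.

```python
# Rough sketch of the "chaos game": repeatedly jump halfway toward a randomly
# chosen vertex of a triangle. The visited points trace out the Sierpinski
# gasket even though every individual choice is random.
import random

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(steps=50000, burn_in=20):
    x, y = random.random(), random.random()    # arbitrary starting point
    points = []
    for i in range(steps):
        vx, vy = random.choice(VERTICES)       # the random element
        x, y = (x + vx) / 2.0, (y + vy) / 2.0  # deterministic halfway jump
        if i >= burn_in:                       # the first few points may land off the gasket
            points.append((x, y))
    return points

# Feed the returned points to any plotting tool and the gasket appears.
```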

Stick it in a text file and compress it. If that, combined with the program you’ll use to decompress it, is shorter than the list, it’s not random.

That wasn’t quite the right answer–if there is a compression program that satisfies the condition I gave, it’s not random. If not, then it is.

The Great Unwashed:

You don’t need to tell anyone the algorithm: It merely suffices that such an algorithm exists and is shorter than the sequence itself. Since the sequence is potentially infinite, the algorithm you describe is (by definition) shorter, and so the sequence is nonrandom.

Look at it this way: the randomness of the sequence is independent of the intelligence of the person analysing the sequence.

My bad, I should have made it clear that I understood that. What I’m saying is that we have no way to classify data as random. Now, I’m asserting this as “just this guy”; it’s not something I know to be true. My argument amounts to this:

I could generate many infinite strings of seemingly random (but not) data that the best computing power and minds we have could not “decipher” into its underlying “minimal recipe”.

So when we see random data, we must ask ourselves, is it random, or can we just not see the recipe?

ultrafilter, I’ve just compressed a text file containing 1111111111…11111111111 (ten thousand ones). Explorer claims that the original file is 10K and the output 1K (though I believe the output is really around 28 bytes; we have the file allocation table to blame for the disparity), but WinZip itself is 500K. So your formula needs refining.

Now, I’m not trying to argue the toss (I said in my first post that I knew what you meant), but what constitutes “information” is very hard to pin down. I guess this is the point you addressed in your follow-up “That wasn’t quite the right answer…”, but again it kind of presupposes the “perfect” compression algorithm, which in turn relies on us being able to identify the pattern (which looks ominously circular).
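For what it’s worth, the same experiment is easy to rerun without the WinZip and file-allocation-table noise. A rough sketch using Python’s zlib:

```python
# Rough rerun of the experiment above: compare how well ten thousand ones
# compress against the same amount of data with no obvious pattern.
import os
import zlib

ones = b"1" * 10000          # highly patterned data
noise = os.urandom(10000)    # data with no pattern we can see

print(len(zlib.compress(ones, 9)))    # a few dozen bytes
print(len(zlib.compress(noise, 9)))   # about 10,000 bytes, sometimes a touch more
```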

That is to say, for every blob of data there is an optimal compression algorithm, but only for a very few (some idealised cases, if any) can we identify it.

If there is a compression program that satisfies the condition, we can eventually find it by trying them all in lexicographical order. If it doesn’t exist, we will never be able to show that except for a few special cases (by computer or by proof–randomness is an undecidable property).

“My” idea is hardly mine. It’s known as Kolmogorov complexity, and has been studied for at least 40 years.

It’s not necessary to identify the compression algorithm. Either one exists, or it doesn’t.

It’s semantics

In TheJungOnes OP, a random pattern is a pattern selected from a group, such as those ESP cards: “random” is an adjective modifying “pattern”.

What I am thinking of as a “random pattern” (noun) is the set of pixels used to test stereo vision. An image is generated such that each pixel has a random value. A region of that image is then offset by some number of pixels. When viewed in stereo, a 3D image will appear.
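Here’s a rough sketch of that construction. The sizes and disparity are arbitrary, and a real stereogram would handle the uncovered strip more carefully.

```python
# Rough sketch of a random-dot stereogram: the left image is pure noise, the
# right image is the same noise except that a central square is shifted
# sideways. Neither image alone shows anything; the "pattern" lives only in
# the correlation between the two.
import random

W, H, SHIFT = 64, 64, 4

left = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]
right = [row[:] for row in left]                # start as an exact copy

for y in range(H // 4, 3 * H // 4):             # shift a central square by SHIFT pixels
    for x in range(W // 4, 3 * W // 4):
        right[y][x] = left[y][x + SHIFT]

# Viewed in stereo (left eye on `left`, right eye on `right`), the shifted
# square appears to float at a different depth from the background.
```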

As for the random series, I can think of two possibilities:

  1. each number in the series is related to the previous ones by a single operation, e.g. adding one to the previous number, squaring the difference of the two previous numbers in the series and adding it to the third, etc.

  2. a group of numbers in the series are related by a repeating cycle of operations, e.g. 0, 1, 2, 6, 1, 2, 3, 7, 2 … (+1, +1, +4, -5); see the sketch below.
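A tiny sketch of the second case, just to pin it down; the cycle is the one from the example above.

```python
# Rough sketch of possibility 2: a series built by applying a repeating
# cycle of additive operations. Possibility 1 is just a cycle of length one.
def series_from_cycle(start, deltas, length):
    out = [start]
    while len(out) < length:
        out.append(out[-1] + deltas[(len(out) - 1) % len(deltas)])
    return out

print(series_from_cycle(0, [+1, +1, +4, -5], 9))
# -> [0, 1, 2, 6, 1, 2, 3, 7, 2]
```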

The first case is really just a special example of the second. I can take element n+1 in the series and change the operation. The same shenanigans can be done with the second group: after n elements, the next operation is made different from the first operation in the series.

Since this new n+1 series is a new pattern I will need 2*(n+1) elements to see if that pattern repeats. And then I can change the next n+1 operation again and again.

But this is just an irrational number!

An irrational number cannot have a pattern of repeating operations and therefore must be random, since if a pattern becomes evident at n, I can find a number whose n+1 value will not fit the pattern. (I believe this is similar to Cantor’s diagonal proof of the non-enumerability of the irrational numbers.)

I didn’t see this last night. Absolutely, that was my point: we can identify some data as non-random, but we can identify no particular data as random.

Is there an existence proof for random data that you know of?