Monitor Displays Every Possible Combination Of Pixels (And Thus Images Of Everything)

On the 256 colors thing:

In 640x480x256 mode, each image contains, at most, 256 colors.

However, a collection of such images could contain a far greater number of colors.

This is done by storing, along with each image, a palette containing the 256 colors that best represent the source image, according to whatever algorithm was used to reduce the color space down to the 256-entry palette.

This permits distinction between the shades of bananas in one of the examples given by another respondent.

The palette is a lookup table that maps each of the 256 indices to an RGB “recipe” that usually has at least 5-6-5 bit resolution… yielding 64K possible colors.
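If you want to see the mechanics, here’s a minimal sketch in Python of how an indexed pixel gets decoded through a 5-6-5 palette (the palette contents here are made up):

```python
# Minimal sketch of indexed-color decoding: each pixel stores an 8-bit
# index into a 256-entry palette, and each palette entry is a 16-bit
# RGB "recipe" packed as 5-6-5 bits. Palette contents are made up.

def unpack_rgb565(entry: int) -> tuple[int, int, int]:
    """Expand a packed 5-6-5 palette entry to 8-bit-per-channel RGB."""
    r = (entry >> 11) & 0x1F   # top 5 bits: red
    g = (entry >> 5) & 0x3F    # middle 6 bits: green
    b = entry & 0x1F           # bottom 5 bits: blue
    # Scale each channel up to the usual 0-255 range.
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

# A toy palette: entry 0 is black, entry 1 is banana yellow.
palette = [0x0000, 0xFFE0] + [0x0000] * 254

# A "pixel" in the image is just an index into that palette.
pixel_index = 1
print(unpack_rgb565(palette[pixel_index]))   # -> (255, 255, 0)
```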

You are correct in your numbers, but those typically are not called pixels but photosites: the photosensitive areas on the sensor. They are arranged in a Bayer pattern, and that is sampled to produce a normal bitmap, unless of course the camera is set to record a raw image. FWIW, Fuji uses a pattern of hexagons, and the Foveon sensor used by Sigma senses all three colors at each photosite.
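For the curious, a rough sketch of what that Bayer sampling looks like, assuming the classic RGGB tile (the sensor readings here are just random numbers):

```python
import numpy as np

# Rough sketch of an RGGB Bayer mosaic: each photosite records only one
# color channel, and demosaicing interpolates the other two. The
# "sensor" here is filled with random readings just to show the layout.

h, w = 4, 4
sensor = np.random.randint(0, 256, (h, w))   # raw photosite readings

# Channel assignment for the classic RGGB 2x2 tile.
red   = sensor[0::2, 0::2]                   # even rows, even columns
green = np.concatenate([sensor[0::2, 1::2].ravel(),
                        sensor[1::2, 0::2].ravel()])
blue  = sensor[1::2, 1::2]                   # odd rows, odd columns

# Half the photosites are green, a quarter each red and blue.
print(red.size, green.size, blue.size)       # -> 4 8 4
```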

Wrap your brain around this another way. Start with a monitor that has only 3x3 pixels and only two bits per pixel: black, white, and two shades of gray. That is 4[sup]9[/sup] = 262,144, or 256K, combinations. It won’t take long to run through all of them, and in that time it will create every image possible to imagine. The limiting factor, of course, is the resolving power of the display.
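In fact, you can run through them all in code in well under a second; a sketch:

```python
from itertools import product

# Enumerate every possible 3x3 image with 2 bits per pixel (4 gray
# levels). There are 4 ** 9 = 262,144 of them in all.

levels = range(4)          # black, dark gray, light gray, white
count = 0
for image in product(levels, repeat=9):
    # `image` is a tuple of 9 pixel values, read as three rows:
    rows = [image[0:3], image[3:6], image[6:9]]
    count += 1
print(count)               # -> 262144, i.e. 4 ** 9
```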

Look at it this way. I seat G. Washington for a portrait with a digital camera having the same resolution as the screen. The resulting image will match one of the combinations perfectly. Of course, a portrait of my cat, Dusty, could very easily match the exact same combination. It wouldn’t even matter if she sits still, as I don’t think any display is going to show motion blur at this resolution. The point is that it did display every image imaginable, but many of them are rendered as exactly the same bitmap.

Switch back to the 1280x1024x16-million-color display. One of the images exactly corresponds to Richard Nixon with 12 hours of stubble. Of course, there is also the possibility of Richard Nixon in the same pose with 12.5 hours of stubble. The reality being captured is different, but the difference is below the modest resolving power of the so-called high-resolution screen, so one screen image will serve for several different but similar realities. This is how a finite number of images corresponds to an infinite number of possible realities.

And of course one of the images will be Joan of Arc with the upper lip painstakingly shaded over three hours by Napoleon Dynamite. The wonders of the universe.

It’s also worth mentioning that 640x480x256-color screens are very 1990s.

A modern computer will usually have at least 1024 x 768 pixels, and almost always 24-bit color (i.e. 256 shades of red * 256 of green * 256 of blue). Widescreen monitors with nearly 3000 x 1200 pixels aren’t uncommon anymore.

Which means that those “itsy bitsy” numbers people have been throwing around are many, many orders of magnitude too low for a modern monitor.
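A quick back-of-envelope in Python, using the figures above, shows just how many orders of magnitude:

```python
import math

# How much bigger the modern numbers are: count of possible images on a
# 1024x768 display with 24-bit color, versus the old 640x480x256 setup.
modern = 1024 * 768 * 24 * math.log10(2)   # log10 of 2**(pixels * bits)
retro  = 640 * 480 * 8 * math.log10(2)     # log10 of 256**(640 * 480)
print(modern)           # -> ~5,681,751: a number with 5.7 million digits
print(modern - retro)   # -> ~4.9 million orders of magnitude more
```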

For that 2[sup]256[/sup] calculation, I just remembered that 2 is approximately 10[sup]0.301[/sup]. So that converts to 10[sup]77.056[/sup]. Close enough for government work anyway :slight_smile:
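The same slide-rule arithmetic, sketched in Python as a sanity check:

```python
import math

# log10(2) is about 0.301, so 2**256 is about 10**(256 * 0.301).
print(256 * math.log10(2))    # -> 77.0637...
print(len(str(2 ** 256)))     # -> 78 digits, i.e. on the order of 10**77
```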

I learned logarithms back in the slide rule era.

Yeah, but who has the time to look through all of those images? I’m a busy man here. Got other stuff that needs doing this week. The set of all 640x480x8-bit grayscale images, on the other hand, is much more manageable.

Let’s see now. If we assembled all those images into a QuickTime movie, with a frame rate of 30 fps, the movie would take about …

… about 10[sup]739802[/sup] years to watch — which is only about 10[sup]739792[/sup] times longer than the current age of the Universe.

See? Not so bad.
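For anyone who wants to check that figure, here’s the arithmetic in Python, using the same 30 fps assumption:

```python
import math

# Runtime of the movie of all 640x480 images at 8 bits per pixel,
# shown at 30 frames per second.
pixels = 640 * 480                            # 307,200 pixels
log10_images = pixels * 8 * math.log10(2)     # log10 of 256**307200
seconds_per_year = 3.15576e7
log10_years = log10_images - math.log10(30 * seconds_per_year)
print(log10_years)   # -> ~739,802.6, i.e. about 10**739802 years
```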

No, he missed one zero, and one five.

Hmm, I had a brilliant idea for a moment, but the large numbers overwhelmed it.

Consider a human-driven search program for our images. We’ve got that 256-color, 640x480 screen, and therefore 256[sup]307200[/sup] different images. We (systematically) pick two of them, present them to the user, and say “which one of these is closer to the image you were looking for?” Based on the user selection, we refine the image a little bit, and present two more…

There are some usability issues here, sure. Given a target image of a bunch of roses, you’re presented with a pure black and a pure white screen. Which is closer? Further, once you’re “close” to the image, the distinctions are going to be imperceptible (like those “which one looks better” eye tests for nearsightedness). But let’s assume that the user not only can make the determination correctly, but that he/she makes no mistakes.

So each pair of images divides the search space in half (that is, it resolves one “bit” of our information). Then we just need to determine the number of divides.

Throwing a little log work in there, 256[sup]307200[/sup] is equal to 2[sup]2457600[/sup], which means that a mere two and a half million perfect, mistake-free choices will get you your target image! You folks work out the details, I’m going to start filling out the patent application now.
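A toy sketch of the bookkeeping, with a hypothetical ask() oracle standing in for the perfect, mistake-free user (one resolved bit per answer, as above):

```python
# Each perfect answer resolves one bit of the target image, so a
# 640x480x8-bit image takes exactly 2,457,600 answers. TARGET is a tiny
# 8-bit stand-in for the real 2,457,600-bit image.

TARGET = 0b10110010

def ask(bit_position: int) -> int:
    """Pretend user who always answers correctly."""
    return (TARGET >> bit_position) & 1

def search(num_bits: int) -> int:
    result = 0
    for i in range(num_bits):
        result |= ask(i) << i     # one perfect choice per bit
    return result

print(search(8) == TARGET)        # -> True, after 8 "questions"
```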

That’s the same thing as asking 'em, “do you want this bit on or off,” for each and every bit. Two and a half million times, eh? How much time are you giving them to decide? Let’s say five seconds; we don’t want to rush them, since their decision has to be perfect. So: 12 million seconds, 200 thousand minutes, or about 3,400 hours. At forty hours a week, that’s just over a year and a half of work.
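The arithmetic, for the skeptical:

```python
choices = 2_457_600      # one perfect answer per bit of the image
seconds = choices * 5    # five seconds per decision
hours = seconds / 3600
print(hours)             # -> ~3413 hours
print(hours / 40 / 52)   # -> ~1.6 years of 40-hour weeks
```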

Just wanted to nitpick: there’s always finite resolution in analog media too. Take film, for instance: the resolution is limited by the size of the individual film grains, and it’s less than you might think. It’s been shown that high-quality 6 or 8 megapixel digital sensors (like those in dSLRs) have roughly the same resolving power as ISO 100 film in 35mm format. The 16MP Canon 1Ds MkII resolves roughly what medium format can do. There’s always a limit.

It’d be interesting to create a SETI@home-like screen saver and see how long it would take to randomly find a few select images.

The application would have stored images of, for example, the Mona Lisa, a family at the beach, and an Excel spreadsheet. As it created random images, it would compare them against the target images to see if it had a match. The screen saver’s users would each download a specific range to search and report the findings back to a central computer.

Perhaps it’d still take too long to get a result, but it’d be a fascinating practical example of the old “infinite monkeys at infinite typewriters” saying.

(I suppose on reflection, though, that someone could work backwards, figure out where the Mona Lisa would fall in the data set and then calculate how many computer hours it would’ve taken to get there.)
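Working backwards is actually the easy part: if the images are enumerated in numeric order, an image’s position in the set is just its pixel bytes read as one big integer. A sketch, with a hypothetical stand-in for the real pixel data:

```python
# In a numeric enumeration of all images, an image's index is its
# pixel data interpreted as one big base-256 number. These four bytes
# are a hypothetical stand-in for the real 307,200 palette indices.

mona_lisa_pixels = bytes([12, 200, 47, 9])

index = int.from_bytes(mona_lisa_pixels, "big")
print(index)   # -> 214445833: this image's position in the enumeration
```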

You’d need to decide on criteria for what is a close match. What SETI does is very different. They are looking for anything that looks like an intentional message. That is much more likely to get a hit than only looking for a broadcast of All My Marklars or The Ten Marklars.

Hmm, a machine that displays random numbers, where every once in a while they make a meaningful pattern, and which requires that diligent operators spend thousands of hours looking for those patterns! Las Vegas is full of them.

By SETI@home-like, I mean that the screen saver would use distributed computing to divide the problem between hundreds/thousands/millions of computers over the internet. Determining a match would be simple. The application would be looking to randomly produce specific images—not just anything “recognizable”—so it would have the exact pixel combination required to replicate, for example, the Mona Lisa to compare each iteration against.

Wouldn’t the exact negative image of the Mona Lisa be considered a complete non-match, if only going by pixel matching? Wouldn’t you need something more like a neural net that can detect anything that’s “sorta like” the Mona Lisa?

There are an infinite number of Mona Lisa pictures you could take (different angles, brightness, contrast, etc.). You could pick a single photo and look for that specific image, for instance. If you wanted something more representative, you could pick 10 or so shots of the Lisa from different angles and in different lighting, then add into the comparison code some logic to check whether the colors in certain areas match up within a reasonable range for any of the 10 images. Of course, the processing time then goes up as well.
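Something like this sketch, say, where the tolerance and the reference shots are all made-up stand-ins:

```python
import numpy as np

# Sketch of the "reasonable range" comparison: instead of an exact
# pixel match, check whether a candidate stays within a tolerance of
# some reference shot. The references here are random stand-ins.

references = [np.random.randint(0, 256, (480, 640, 3)) for _ in range(10)]
candidate = references[3] + np.random.randint(-5, 6, (480, 640, 3))

def close_enough(img, refs, tol=10):
    """True if img is within +/- tol of every pixel of some reference."""
    return any(np.abs(img - ref).max() <= tol for ref in refs)

print(close_enough(candidate, references))   # -> True
```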

Keep in mind these numbers we’re tossing around in here are pretty big.

For example, let’s say we build a screaming-fast computer that can generate 100 million screen images a second. Call it “Deep Image”. For comparison, I figure a typical gaming rig can do about 10,000 per second at 640x480 if it’s all random pixels. So if it’s “Image@home”, we need about 10,000 computers at work to match that. Seems not unreasonable.

Ok, now fire that sucker up.

In a year, we’ve gone through 3.15576x10^15 combinations. Seems like a lot.

But just to match the first 10 pixels of the Mona Lisa means brute-forcing through 1.2x10^24 combinations, which is nine orders of magnitude more than we can look through in a year’s time. That works out to 383 million years for a measly 10 pixels.

If you want the whole Mona Lisa, even at the low resolution we’re considering, then you can convert every atom in the universe (~10^70 atoms) to one of our “Deep Image” computers, let it run for the life of the universe (~10^18 years), and well, now here’s an astonishing result!

You can then get the first 42 pixels of the Mona Lisa. Almost 43, but not quite…
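Here’s the whole calculation in Python, using the same estimates for the atom count and the life of the universe:

```python
import math

# Checking the "Deep Image" arithmetic with the thread's own numbers.
rate = 1e8                     # images per second, per machine
year = 3.15576e7               # seconds per year

per_year = rate * year
print(per_year)                # -> 3.15576e15 images per machine-year

# Brute-forcing just the first 10 pixels at 256 colors each:
print(256.0 ** 10 / per_year)  # -> ~3.8e8, i.e. ~383 million years

# Every atom in the universe (~1e70) running for ~1e18 years:
log10_total = 70 + 18 + math.log10(per_year)
print(log10_total / math.log10(256))   # -> ~42.98: 42 pixels, almost 43
```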

Hey, I think I just solved the answer to life, the universe, and everything in it!

Hey, I was thinking the same thing when I googled a little and found your thread.
It’s not only the images: consider a 100x100 8-bit grayscale image; there are 256^10000 possible images. Only a very small percentage of those can be perceived by us as anything meaningful, though some might make beautiful modern art as well.
But the most interesting thing is that we’d have every image of text that fits in 100x100 pixels, too. I mean, we may have information as relevant as “Bill Gates’s bank account no. is XYZ.”
Even the deepest secrets of the universe, the outcome of the (billion-dollar) LHC, what will happen to you in the next 10 years: everything could be revealed in such an image. Doesn’t this have the ring of the infinite?
One could write thousands of billion-dollar ideas, each being one of those 256^10000 possible images. Interesting!
If we were able to determine, before generating them, which images might be relevant and which not, we could produce all the relevant information in the blink of an eye…

In fact, I did an experiment in which I randomly generated images and cross-correlated them against Lena. In four hours I got an image that had a cross-correlation of about 0.1. Not bad, though!! :eek:
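For anyone who wants to reproduce it, here’s a rough sketch of the experiment. The target here is a random stand-in rather than the actual Lena image, and the score you reach depends heavily on how the correlation is normalized:

```python
import numpy as np

# Generate random images and keep the best normalized cross-correlation
# against a target image. `target` is a random placeholder; in practice
# you would load the standard Lena test image instead.

rng = np.random.default_rng()
target = rng.integers(0, 256, (100, 100)).astype(float)

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

best = 0.0
for _ in range(1000):          # scale this up for a four-hour run
    candidate = rng.integers(0, 256, target.shape).astype(float)
    best = max(best, ncc(candidate, target))
print(best)                    # best score found among the random draws
```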

Any such discussion is incomplete without a mention of Jorge Luis Borges’ masterpiece, The Library of Babel.

An old thread with much the same theme as this one (entitled ‘Digital Camera’s can only take X number of photo’s, and no more?’ (sic))

This thread is older now than that thread was when this thread was first posted.