Think about it. Digital cameras take pictures and convert them into grouped ‘pixels’, lined up next to each other. An individual pixel can only be one of 2 million or so different colours, which means 2 million different combinations per pixel. A ROW of pixels would therefore have a LIMIT to the number of different combinations you could have, hence an entire picture can only have X number of combinations. If you had forever to take/make pictures, you would eventually reach a point where NO MATTER WHAT you took a photo of, you would already have a photo of it. You physically could not take a photo that you didn’t already have.
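For the curious, here’s a quick back-of-the-envelope sketch in Python of how fast that combination count explodes. The figures are assumptions for illustration only: ‘2 million or so’ colours per pixel, as above, and a hypothetical 1000 × 1000 image.

[code]
import math

colours_per_pixel = 2_000_000   # "2 million or so", per the post above
width, height = 1000, 1000      # hypothetical 1-megapixel camera

pixels = width * height
# Total pictures = colours_per_pixel ** pixels -- far too big to print,
# so report how many decimal digits that number has instead.
digits = int(pixels * math.log10(colours_per_pixel)) + 1
print(f"Roughly 10^{digits - 1:,} possible pictures")
[/code]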
It’s actually worse than you think, since lossy compression (such as JPEG) greatly reduces the number of different pictures possible. This reduction is proportional to the amount of compression: for example, if you have it set so it typically gets 10-to-1 compression, you’ve just reduced the number of possible pictures you can take by a factor of 10. Of course, that number is still staggeringly large.
By the way, assuming a finite-sized book, there are a fixed number of possible books as well. For example, if you limit yourself to a 1000-page book, with 80 columns of 50 lines per page, and a typical alphabet (say 52 letters plus numbers and punctuation = 90 characters), you only have 90[sup]1000 × 80 × 50[/sup] = 90[sup]4,000,000[/sup] possible different books. That’s still a lot of books, though.
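Spelling out that arithmetic (same assumed dimensions: 1000 pages × 80 columns × 50 lines, 90 characters):

[code]
import math

chars = 1000 * 80 * 50   # characters per book = 4,000,000
alphabet = 90            # letters, numbers, punctuation

# There are 90 ** 4_000_000 possible books; report the size of that number.
digits = int(chars * math.log10(alphabet)) + 1
print(f"90^{chars:,} is a number with about {digits:,} digits")
[/code]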
Why not program your PC to generate random sets of pixels in all possible combinations and see how long it takes to come up with the one that looks like your Mom?
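A minimal Python sketch of that brute-force idea, scaled way down so it actually finishes. The two-pixel ‘image’ and the target are hypothetical stand-ins, not anyone’s Mom:

[code]
import random

WIDTH, HEIGHT = 2, 1   # just two pixels; every extra 8-bit pixel
                       # multiplies the expected wait by 256

def random_image():
    # One random 8-bit grey value per pixel.
    return tuple(random.randrange(256) for _ in range(WIDTH * HEIGHT))

target = random_image()   # hypothetical stand-in for the picture you want

attempts = 0
while True:
    attempts += 1
    if random_image() == target:
        break
print(f"Hit the target after {attempts:,} attempts (expected about 65,536)")
[/code]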
It means that since there are a finite number of pixel combinations, eventually a picture of his mum, dad, sister and everyone else on the planet will pop up. Might take a while, though.
Of course, you could have worked this out yourself, given a little more time, I suppose…
It means that if, as the OP says, a digital camera can only take a limited number of different pictures, then a computer might be able to generate all possible pictures, including the ones of all your, or my, relatives.
I did not mean to cast any disrespect on your or anyone else’s Mom. I just used that as an example of a picture that would be desirable.
Even scarier! One day the SDMB will reach a point at which it will be impossible to make a new, original post. Everything will already have been posted.
It reduces the number of combinations by a lot more than a factor of ten. I don’t know the actual file sizes of typical digital cameras, but let’s say that an uncompressed pic is a megabyte and the compressed version of the same picture is 100 kilobytes. You’ve then reduced the file size by 900 kilobytes, so you’ve reduced the number of possible pictures by a factor of 2[sup]7,200,000[/sup], or approximately 9.3 * 10[sup]2,167,415[/sup]. Of course, this still leaves 2[sup]800,000[/sup] possible pictures, or 9.9 * 10[sup]240,823[/sup]. Looks like you’ll be clicking away for a while, there.
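Those exponents are easy to verify with logarithms. A short check, assuming as above that the picture shrinks from 1,000,000 bytes to 100,000 bytes:

[code]
import math

uncompressed_bits = 1_000_000 * 8   # 1 megabyte picture
compressed_bits = 100_000 * 8       # 100 kilobytes after compression
saved_bits = uncompressed_bits - compressed_bits   # 7,200,000 bits

# Express 2**saved_bits and 2**compressed_bits as powers of ten.
for label, bits in [("Reduction factor", saved_bits),
                    ("Pictures left   ", compressed_bits)]:
    exp = bits * math.log10(2)
    print(f"{label} ~ {10 ** (exp % 1):.1f} * 10^{int(exp)}")
[/code]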
If graphics compression works anything like data compression, it doesn’t remove information; it just stores it in a more efficient format. If PKZIP lost half of an .EXE, the file wouldn’t work upon decompression. For a text file, if there are 50 spaces in a row, the algorithm replaces those 50 bytes with one byte containing the count and one byte for the space character, for a total of two bytes. That’s 25:1 compression, and no data has been lost. That being said, I am aware that the .JPG format does drop every fourth line or so. I just wanted to point out that compression is not necessarily destructive.
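A toy run-length encoder in Python along the lines described (this is just the idea, not PKZIP’s actual algorithm, which is more sophisticated):

[code]
def rle_encode(text):
    # Replace each run of repeated characters with a (count, char) pair.
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append((j - i, text[i]))
        i = j
    return out

def rle_decode(pairs):
    return "".join(ch * count for count, ch in pairs)

original = " " * 50             # fifty spaces, as in the example
encoded = rle_encode(original)  # [(50, ' ')] -- two values instead of 50 bytes
assert rle_decode(encoded) == original   # nothing was lost
print(encoded)
[/code]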
There certainly are forms of lossy compression, but they are usually avoided when compressing things like .exe’s and “data” files. With things like images or sounds, though, we can usually get by with only an approximation of the original. One of the main techniques of the JPEG format is to average the color values of neighboring pixels in order to reduce the number of different colors, so the image can be compressed in the manner you described above for text files. This results in a loss of information about the image that cannot be gotten back. The same thing is done with audio compression: information about the waveform, such as certain frequencies, is simply thrown away, and the sampling rate may be reduced. This also results in an irretrievable loss of information.
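A crude sketch of that averaging step on one row of grey values. This is only an illustration of the information loss, not the real JPEG pipeline (which actually uses a discrete cosine transform on 8×8 blocks):

[code]
def average_pairs(row):
    # Replace each pair of neighboring pixel values with their average.
    return [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]

row = [10, 12, 200, 204, 50, 54, 90, 98]   # one row of grey values
smaller = average_pairs(row)               # [11, 202, 52, 94] -- half the data

# Best possible "decompression": duplicate each average.
restored = [v for v in smaller for _ in range(2)]
print(restored)   # [11, 11, 202, 202, ...] -- close, but the original is gone
[/code]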
One problem with this type of process for producing all possible books, digital images, etc., is that it’s far from clear how you would label the individual books or images that result.
The number of labels required is, of course, as huge as the number of permutations allowed. Even if you could label them, distinguishing among the books or images by checking the labels would take as long as reading each book or scrutinizing each image.
This doesn’t read as clearly as I’d like, but I think you get my point.