What resolution is real life?

As in, what monitor resolution would be needed to replicate the real world? Or I guess another way to say it would be, what is the smallest something can be before we cannot see it? I suppose you could take that measurement and use it as a reference for a monitor’s pixels.

An elephant can be small enough to not be seen… if it is fart enough away. In other words, there is no single resolution value; it would vary with distance.

Lobsang: Eh, have you been watching “A Bridge Too Fart” again? Terrance and Phillip may be floppy-headed, but they aren’t historians. :smiley:

I’m sure the human eye does have a maximum resolution, though. For example, you aren’t going to see a varicella microbe with an unaided eye no matter how good the conditions are or how close you hold it. So, do we know how small is too small for the human eye? How small is just barely big enough, on average?

The closest you could get would be to measure the density of light-absorbing cells on the retina of the eye. The only problem here is that the density varies over the surface of the retina, and might vary from person to person. As I recall, resolving ability is very low at the corners of your eyes, not allowing you to identify much more than motion. It rises quickly towards the center of your eye, peaking at the fovea, and drops to nothing at the blind spot, a small distance off-center, where the optic nerve attaches to the retina.

The Photographic Lens, Sidney F. Ray

There are about 120 million rod cells in the eye. So “real life” is 120 megapixels.

If you wanted to make a wrap-around screen that filled the entire field of vision and had the same resolution as “real life”, you’d need 120 megapixels.

I did an experiment once. I filled up MS Paint with black and put in a single white pixel (I tried it the other way around too, but you’re much better at resolving white on black). I then moved back until I was unable to distinguish the white pixel. As I recall, I was about 9 m back when it disappeared.

Then it’s a simple question of mathematics. My screen has a resolution of 1400x1050 and is 15" (diagonally). Since it’s a 4:3 ratio, that means my screen is 12"x9", so each pixel is 0.0086 in, or about 0.02 cm, across.

I was 900 cm away, so the smallest angle I could resolve is arctan(0.02/900) ≈ 2.2x10^-5 radians, or about 1.3x10^-3 degrees (roughly 4.6 arcseconds). Assuming 130 degrees of vision, that works out to roughly 10^5 pixels across, or on the order of 10^10 pixels for the whole two-dimensional field.
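Here’s the same arithmetic as a quick Python sketch, using my numbers from above (the 130-degree field and the assumption of a square field of evenly spaced pixels are the same back-of-envelope guesses as before):

```python
import math

# Numbers from the MS Paint experiment above.
pixel_size_cm = 12 * 2.54 / 1400   # 12" wide screen, 1400 pixels across
distance_cm = 900                  # how far back the pixel disappeared

# Angle subtended by one pixel (for angles this small, arctan(x) ~ x).
angle_rad = math.atan(pixel_size_cm / distance_cm)
angle_deg = math.degrees(angle_rad)
print(f"one pixel subtends {angle_rad:.1e} rad = {angle_deg:.1e} degrees")

# Pixels needed to cover a 130-degree field at that angular pitch.
pixels_across = 130 / angle_deg
print(f"pixels across a 130-degree field: {pixels_across:.1e}")
print(f"total for a square field: {pixels_across ** 2:.1e}")
```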

However, white on black is the absolute WORST case scenario. With white on black, I barely achieved half as much, and with a chequerboard pattern, it turned solid gray after only ~30 cm. Also, the middle of the eye is far more sensitive than the edges. If you came up with some sort of adaptive optics which only rendered in high resolution where you were looking, you could get away with much less data being pumped through. Doing funky psychological stuff like deliberately blurring edges means you could create an even more convincing image with fewer pixels.

Realistically, I would say that if they managed to cram maybe 2-3 times more pixels into the same area, I wouldn’t be able to tell the difference between a picture and reality, provided they got other things right, like light levels and reflectance.

Oops, I meant to say with black on white, I only managed half the distance.

ticker: That’s interesting. Since my pixel width is 0.2 mm, I whipped out MS Paint again and tried it myself. I got at least a meter of distance before the two lines melded into each other. Interestingly enough, it didn’t matter if it was white on black or black on white; the resolving distance was the same. Does this mean I have twice the resolving power of a normal person?
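Running ticker’s numbers the same way (a quick sketch; the one-arcminute figure for normal 20/20 acuity is just the usual textbook value, so treat the comparison as ballpark):

```python
import math

# ticker's numbers: 0.2 mm pixels, lines merging at about a meter.
pixel_m = 0.2e-3
distance_m = 1.0

angle_rad = math.atan(pixel_m / distance_m)
angle_arcmin = math.degrees(angle_rad) * 60
print(f"one pixel subtends {angle_arcmin:.2f} arcminutes")

# Normal 20/20 vision is usually quoted as resolving about 1 arcminute,
# so ~0.7 arcmin is sharp, but not twice a normal person's resolution.
```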

How many rods are in an eagle’s eye, or a hawk’s? They could make out a mite on the back of a tick on the back of a distant elephant…

A further reading of the book offers additional figures, taken under a variety of experimental conditions, all of which leads me to think that these numbers should be treated as ballpark only, and that we should be wary of comparing figures obtained under different experimental conditions.

The number of pixels needed is very much less than the estimates above, because you only have good resolution in the part of the eye called the fovea, right in the middle (actually just a small distance from the blind spot). The rest of the eye has poor resolution; try reading a newspaper without moving your eyes or the paper.

I think that the limiting factor is often the resolving power of the lens and pupil, which is set primarily by diffraction. You might expect the receptor density to roughly match the lens’s resolving power simply through the normal economy of natural selection; i.e., the number of receptors in a retina is a result, rather than a primary cause, of the resolution of the eye.
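To put a rough number on the diffraction limit (a sketch using the standard Rayleigh criterion, θ ≈ 1.22 λ/D, with an assumed 3 mm daylight pupil and 550 nm green light; both are typical values, not measurements):

```python
import math

wavelength_m = 550e-9  # green light, near the eye's peak sensitivity
pupil_m = 3e-3         # assumed daylight pupil diameter

# Rayleigh criterion for a circular aperture.
theta_rad = 1.22 * wavelength_m / pupil_m
theta_arcmin = math.degrees(theta_rad) * 60
print(f"diffraction limit: {theta_arcmin:.2f} arcminutes")
# ~0.8 arcmin: close to the ~1 arcminute usually quoted for 20/20
# acuity, consistent with receptor spacing roughly matching the optics.
```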

It seems to me as though the experiments you guys are trying are going to give better results than real life. I, for one, can keep distinguishing something for much longer if I watch it as it gets farther away. This is especially noticeable if I see an animal move briefly in some undergrowth and then stop. Admittedly, my eyes aren’t all they could be, but they are decent (I can drive without glasses).
I would guess that if you started farther away and moved towards the screen, you’d find you had to be closer to distinguish the single pixel. Better still would be if you had a friend open and close the Paint window, so you weren’t homing in on it as you walked forward.

Don’t know about anyone else, but my eyes are roughly spherical, so they don’t have any corners. :smiley:

What if the monitor was just a clear pane of glass? Surely that would have the resolution of real life. And instead of “how many polygons would make up a real-life elephant?” you could just have a real live elephant wandering about behind the glass monitor (or window, if you will).
Man, what an awesome trip that would be.
And it can be done, too, for only £4000 per fortnight through Kenyasafari.com. All the Land Rovers come with real-life monitor technology fitted as standard.

“Reality is 50 million polygons per second.”
– Alvy Ray Smith

Foveon recently made a big splash with their imaging sensor technology. However, an article from several years ago, quoting one of that company’s founders, had this to say:

It would seem to me that a “real life” image would have to contain all the information that the fovea can receive in every single sector of the image, because the eye might scan any portion of the image rather than simply look at it dead-center. I think that means a life-quality photo would have to contain many, many times more information than the fovea can possibly take in at any instant.

You could have a sensor keep track of where the fovea is pointed, and do high-quality rendering only in those portions of the display the fovea is scanning. Whether this method makes economic sense would depend on the relative cost of the sensor system versus the display system. I imagine at some point it won’t really be much more expensive to add a few hundred more megapixels and get fovea-quality images over the whole image.
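As a sketch of what that kind of gaze-tracked (foveated) rendering might look like in software (the gaze coordinates and radii below are made-up stand-ins for whatever the eye-tracking sensor would actually report):

```python
from PIL import Image, ImageFilter
import numpy as np

def foveated(image, gaze_xy, sharp_radius=150, blur_px=8):
    """Keep full resolution near the gaze point, blur the periphery.

    Assumes an RGB image; gaze_xy is (x, y) in pixel coordinates.
    """
    blurred = image.filter(ImageFilter.GaussianBlur(blur_px))

    # Mask: 1.0 inside the "fovea", fading to 0.0 in the periphery.
    w, h = image.size
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    mask = np.clip(1.5 - dist / sharp_radius, 0.0, 1.0)

    # Blend the sharp and blurred versions pixel by pixel.
    sharp = np.asarray(image, dtype=float)
    soft = np.asarray(blurred, dtype=float)
    out = sharp * mask[..., None] + soft * (1.0 - mask[..., None])
    return Image.fromarray(out.astype(np.uint8))

# Hypothetical usage: frame = foveated(Image.open("scene.png"), (640, 360))
```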