Pixel depth in human vision?

Setting aside the astounding computational processing at the front end and in the visual cortex, and the fact that it’s really an analog process, all of which makes a kind of mockery of this question [;)]…

Were the human eye a simple digital interface, what would its pixel depth be?

I.e., how would God advertise it using the standard digital camera spec, “The new ‘Eye’: It has x.x megapixels!”?

The number I usually see thrown about is 576 megapixels. Here’s an official-looking cite. Obviously, as you note, it’s not quite that simple given the differences between how the eye works and how a digital sensor works, but that’s about as good a number as you’ll get.
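If I recall how that figure is usually derived (assuming roughly 0.3 arc-minute resolvable detail over a 120° by 120° field of view, which I believe is the assumption behind that cite), the back-of-envelope looks like this:

```python
# Back-of-envelope for the oft-quoted 576 MP figure. The inputs are the
# assumptions commonly used in that derivation, not hard measurements.
field_of_view_deg = 120   # assumed field of view, degrees (per axis)
acuity_arcmin = 0.3       # assumed smallest resolvable detail, arc-minutes

pixels_per_axis = field_of_view_deg * 60 / acuity_arcmin  # 24,000
megapixels = pixels_per_axis ** 2 / 1e6
print(f"{megapixels:.0f} MP")  # -> 576 MP
```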

As the eye is very far indeed from being a “simple digital interface” any such number is essentially meaningless.

I don’t think it’s meaningless at all. This Wikipedia page on cone cells claims the human eye is equipped with 4.5 million color-sensitive cone cells and 90 million black-and-white-sensitive rod cells.
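If you naively treat each photoreceptor as one pixel (a big “if”, for all the reasons already given), the raw counts work out to:

```python
# Naive "one photoreceptor = one pixel" count per eye. A rough upper
# bound at best, since rods and cones don't map 1:1 to output signals.
cones = 4.5e6   # color-sensitive cone cells (figure quoted above)
rods = 90e6     # luminance-sensitive rod cells (figure quoted above)
print(f"{(cones + rods) / 1e6:.1f} MP")  # -> 94.5 MP
```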

As has been noted, the visual system is more complex than just a digital camera’s CCD array. The rod/cone count is a useful first approximation, but it’s probably more meaningful to ask about the smallest level of scene detail the human visual system can detect. Pulykamell provided a link; this page contains the same basic summary, along with far more detail.

Plus, you naturally take in more of a field of view than your eye “natively” covers, since your eyes move around a lot and your brain pieces it all together.

“Pixel depth” is not the right term. You mean “pixel count” (or, as it’s currently used, “resolution”).
Pixel depth refers to how many bits each pixel has, and relates to dynamic range and color-resolving ability. Studies have shown that the eye can distinguish between 1 and 7 million colors, implying a pixel depth of 20 to 23 bits.
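That bit-depth range is just the base-2 log of the color count:

```python
import math

# Bits needed to index N distinguishable colors: log2(N).
# The 1-7 million range is the study figure quoted above.
for colors in (1e6, 7e6):
    print(f"{colors:.0e} colors -> {math.log2(colors):.1f} bits")
# -> 1e+06 colors -> 19.9 bits (~20)
# -> 7e+06 colors -> 22.7 bits (~23)
```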

Cool, I have wondered the same thing and didn’t know how to word the question. This is the reason I read the SDMB. When we can buy tiny 600 megapixel, 3D, 24 bit color streaming video cameras with processors to reconcile dual images, then I may think about giving my eyes a rest.

Well, you have to consider dynamic range, too. The eye’s dynamic range is quite a bit larger than a digital sensor’s, but I don’t have the numbers for it. I’d say we’ve gotten to the point in photography where megapixel count isn’t the most important improvement for sensors; improving dynamic range is (or at least it’s right up there in importance).
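For anyone who does have the numbers: dynamic range is usually quoted in stops, i.e. the base-2 log of the brightest-to-darkest luminance ratio. A quick sketch, with placeholder ratios rather than measured values:

```python
import math

def stops(contrast_ratio: float) -> float:
    """Dynamic range in photographic stops for a given max/min luminance ratio."""
    return math.log2(contrast_ratio)

# Placeholder ratios purely for illustration -- not measured values.
for label, ratio in [("hypothetical sensor", 4096), ("hypothetical eye", 1_000_000)]:
    print(f"{label}: {ratio}:1 -> {stops(ratio):.1f} stops")
```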

Good luck finding an adapter cable, though.

It is if you work in the marketing department. But if you actually want to capture good quality digital photos, maybe the first thing to do is reduce the pixel count a bit from the current state of the art.

I agree when it comes to compact point-and-shoots. Not so with professional dSLRs, though. High-pixel-count files from 18MP+ pro cameras look damned good and detail-rich. The only problem is that they tend to have worse noise characteristics at higher ISOs due to the smaller photosites.
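The photosite tradeoff follows from photon shot noise: SNR goes roughly as the square root of the photons collected, and smaller photosites collect fewer photons. A toy illustration with made-up photon counts:

```python
import math

# Shot-noise-limited SNR ~ sqrt(photons collected). Halving photosite
# area roughly halves the photon count at a given exposure.
# Photon counts below are invented purely for illustration.
for site, photons in [("large photosite", 40_000), ("small photosite", 10_000)]:
    print(f"{site}: SNR ~ {math.sqrt(photons):.0f}")
# Quartering the photon count halves the SNR.
```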

Accurately quantifying a human sense such as sight is nearly impossible. While the biological processes of the eye may be well understood, how the brain processes and compiles the information that is collected is speculative at best. The fact is, there is a huge difference between what you perceive as sight and the information your eyes actually collect. The optical properties of the human eye are actually very poor. What you “see” is an amalgam of newly acquired information and memory; mostly memory.

As far as just the eye itself goes, a technical treatise on the subject can be found at:

http://www.swift.ac.uk/vision.pdf

:smack:

The abstract:
The Limits of Human Vision
Michael F. Deering
Sun Microsystems
ABSTRACT
A model of the perception limits of the human visual system is presented, resulting in an estimate of approximately 15 million variable resolution pixels per eye. Assuming a 60 Hz stereo display with a depth complexity of 6, we make the prediction that a rendering rate of approximately ten billion triangles per second is sufficient to saturate the human visual system.
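Those numbers check out if you read the abstract as one triangle per pixel, per depth layer, per eye, per frame (my reading; the paper may account for it differently):

```python
# Checking the abstract's arithmetic under one reading (my assumption:
# one rendered triangle per pixel, per depth layer, per eye, per frame).
pixels_per_eye = 15e6
eyes = 2              # stereo display
frame_rate_hz = 60
depth_complexity = 6

triangles_per_sec = pixels_per_eye * eyes * frame_rate_hz * depth_complexity
print(f"{triangles_per_sec / 1e9:.1f} billion triangles/sec")  # -> 10.8
```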

One factor that makes this complicated is that the sensor cells aren’t uniformly distributed on the retina. We can see a lot more detail on something near the center of our visual field than near the edges, and our eyes are constantly moving around to sweep that small high-detail region across a scene, to construct a detailed picture of the entire scene.
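If you want to play with why a “variable resolution” pixel count beats a uniform one, a common toy model has acuity falling off roughly as 1/(1 + e/e2) with eccentricity e, where e2 is around 2 degrees. Both the functional form and the constant here are rough assumptions for illustration:

```python
# Toy foveation model: relative acuity ~ 1 / (1 + e / e2), where e is
# eccentricity in degrees. The functional form and e2 ~ 2 deg are rough
# assumptions for illustration, not retinal measurements.
E2_DEG = 2.0

def relative_acuity(eccentricity_deg: float) -> float:
    return 1.0 / (1.0 + eccentricity_deg / E2_DEG)

for e in (0, 2, 10, 30):
    print(f"{e:>2} deg from center: {relative_acuity(e):.2f}x foveal acuity")
# Acuity drops to half at ~2 deg and to a small fraction in the periphery.
```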

Man, you gotta love an apples-to-oranges sorta comparison that’s accurate to 3 significant digits :slight_smile: