Film v Pixels v Eyes: Will cameras ever approach the eye in light sensitivity?

Yes, but you would need to store all of this data inside the image even though you can’t see all of it at once. When you look at a picture, your eyes flick from one point of the picture to another. Each of those points needs to match the resolution of the most accurate part of your eye, otherwise the picture looks grainy.
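For a rough sense of scale, here’s a back-of-the-envelope sketch in Python. The acuity and field-of-view numbers are assumptions for illustration (roughly 1 arcminute of detail per pixel over a 120° square field), not figures from the column:

```python
# Assumed numbers: ~1 arcminute of detail per pixel (foveal acuity),
# spread over a 120 degree by 120 degree field of view.
acuity_arcmin = 1.0   # arcminutes per pixel (assumption)
field_deg = 120.0     # degrees on a side (assumption)

pixels_per_side = field_deg * 60.0 / acuity_arcmin   # 60 arcminutes per degree
total_pixels = pixels_per_side ** 2

print(f"{pixels_per_side:.0f} x {pixels_per_side:.0f} pixels "
      f"= {total_pixels / 1e6:.0f} megapixels")
# -> 7200 x 7200 pixels = 52 megapixels
# Finer assumptions (e.g. 0.3 arcminute per pixel) push the figure
# into the hundreds of megapixels.
```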

Re: the brain as “editor”.

Look straight ahead. Now flick your eyes to the left. Now flick them to the right. Notice anything? Unlike a camera when it pans, you don’t see the “smear” as your eyes traverse quickly from left to right. Your brain “edits out” the “smear”, leaving you with just the starting and ending images.

The pixel comparison is a fuzzy one because we don’t have detail vision over our entire field of view. We can see nearly 180°, but at the edges we detect only motion, not detail. Saying that a comparable camera needs detailed information over that whole span only makes up for the signal processing our brains would otherwise do.

I often get into debates in photography forums over “normal” lenses. People often have the mistaken belief that a 50mm lens on a 35mm camera has the same field of view as human vision. I have to laugh at this because anyone with that narrow a field of view couldn’t get a driver’s license. Cameras do not see as the eye does, and the best we can do is make sure that the angle of view and perspective don’t call attention to themselves if we want a “natural” image.
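To put numbers on that: the angle of view across one dimension of the frame is 2·arctan(frame size / (2·focal length)). A quick sketch, assuming the standard 36 × 24 mm frame:

```python
import math

def field_of_view_deg(frame_mm: float, focal_length_mm: float) -> float:
    """Angle of view across one frame dimension, in degrees."""
    return math.degrees(2 * math.atan(frame_mm / (2 * focal_length_mm)))

# Standard 35mm frame: 36 mm wide, 24 mm tall
print(f"50mm horizontal: {field_of_view_deg(36, 50):.0f} degrees")  # ~40
print(f"50mm vertical:   {field_of_view_deg(24, 50):.0f} degrees")  # ~27
```

Roughly 40° across, against the nearly 180° span of human vision mentioned above; the 50mm is “normal” in how it renders perspective, not in how much it covers.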

Actually, right now don’t the cooled CCDs in the big astronomical telescopes almost register individual photons?

And the signal-to-noise ratio can be improved by integration. That is, if there is a galaxy at a certain location in the field of view it delivers a photon, or several, in every sample. On the other hand, for any particular location sometimes there is a noise signal resembling a photon and sometimes not. If a computer keeps adding the photon count for all locations, photons are added for every sample where the galaxy is, but only in part of the samples in the case of noise. So the signal-to-noise ratio improves by the square root of the number of samples taken. Such a technique used to be complex, but with computers it’s pretty simple. In fact I have seen home astronomers’ pictures of galaxies that are fully equal to those taken by Mt. Wilson in the 1940’s and 50’s.
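A minimal simulation of that stacking idea, assuming a steady source of 1 photon per sample against random noise (the specific numbers are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0        # photons per sample from the galaxy (assumed)
noise_sigma = 3.0   # random noise per sample, standard deviation (assumed)

for n in (1, 100, 10_000):
    samples = signal + rng.normal(0.0, noise_sigma, n)
    stacked = samples.sum()
    # The signal adds up in proportion to n, random noise only as sqrt(n),
    # so the signal-to-noise ratio grows as sqrt(n).
    snr = stacked / (noise_sigma * np.sqrt(n))
    print(f"{n:6d} samples stacked: SNR ~ {snr:.1f}")
```

With a single sample the galaxy is buried in the noise; after ten thousand stacked samples it stands out at roughly a hundred times the single-sample signal-to-noise ratio.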

Another thing not mentioned is that even if we have the equipment to take pictures at that resolution, we don’t yet have the technology to display anything remotely close to the resolution and colour range of a natural scene.

Not quite, but we are getting close. The best CCD detectors have quantum efficiencies above 50%, meaning 50% of incoming photons produce a measurable signal (i.e. an average of 2 photons is needed to produce 1 electron). Noise can be below 2 electrons per pixel. So if you require a signal-to-noise ratio of 3 for a detection, you need 6 electrons, for which you need 12 photons.
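In the same spirit, here is that arithmetic spelled out, using the figures from the post above (50% quantum efficiency, 2 electrons of noise, a detection threshold of 3):

```python
quantum_efficiency = 0.5   # fraction of photons that yield an electron
read_noise = 2.0           # electrons of noise per pixel
required_snr = 3.0         # detection threshold

# Simple arithmetic, as in the post: noise here means detector noise only;
# photon shot noise is ignored.
electrons_needed = required_snr * read_noise            # 6 electrons
photons_needed = electrons_needed / quantum_efficiency  # 12 photons

print(f"{electrons_needed:.0f} electrons -> {photons_needed:.0f} photons")
```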