Film v Pixels v Eyes: Will cameras ever approach the eye in light sensitivity?

I’m not sure if I’m phrasing this correctly at all, but it’s a question that’s come to mind a lot with the switch to HDTV. Because HDTV offers so much more detail, there’s a lot of scrambling to change sets, etc.; I’ve even heard that TV anchors and such are getting more plastic surgery because they won’t be able to hide under makeup as well.

But it seems to me that folks wouldn’t worry so much if cameras didn’t need such bright, unnatural lighting to work well. Hell, put me in that kind of light and I’ll look whiter than Michael Jackson. Is there any hope of getting cameras that work more like the human eye in dealing with lower light settings?

The human eye is a quite capable contraption, but compared to modest still and video digital cameras its raw performance is pretty abysmal. What gives the eyes such good overall performance is massive signal processing done after image capture. Electronic moving images are basically a series of still frames. Human vision is much more complex: it combines current information with recent information, and the eyeball is almost constantly moving, so the final image we perceive looks a lot better than the raw performance of the eyeball would suggest.

You misunderstand why bright lighting is used. It is not used to make anyone look “whiter” but so that image sensors can be operated at lower sensitivity, which improves the signal-to-noise ratio. Lots of light coming in the lens overpowers the inherent noise in the electronics. The final exposure value is correct, so dark things still look dark.

Bright lighting isn’t what reveals details and flaws. I could shine a 25 watt bulb on your face and it would reveal every pore, hair and zit. I could also place a 640 joule studio flash in a softbox and get a soft, low contrast image. A softbox is a doohickey that looks like a dome tent. The “floor” is translucent white fabric that diffuses the light. The single bulb is a point source of light, but the softbox is an area source that softens shadows and detail. The one I use for portraits is 36"x48".

Sure there is. However, based on the development of CCDs and such, and lenses, and the general speed of electronics, I doubt it’ll happen for another decade, at the earliest.

Padeye got it right when he said that your eyes do a lot of, uh, “post-production” work. If you hooked your eye up to a dumb signal processor, you’d find a lot of holes and gaps and smears and crap… BUT your brain is capable of knowing what to expect in an environment, and can judge the (approximate) relationship of objects in space enough to fill those gaps.

Your brain is like a porno magazine… it “airbrushes” everything to make it look better for you. :smiley:

Consumer-level camcorders work at low light levels, but as Padeye has stated, the noise levels are pretty high and the image is full of junk.

Also consider that news anchors are generally shown in head shots, which is like you sitting a foot away from someone. From a foot away, I can definitely see how much makeup a person is wearing, whether that hair is a rug, if there is gunk in the corner of their eye and generally more detail than I really care to see.

Hum. I’m feeling a bit thick here, but from my purely amateur photog’s perspective, if I take a pic of my boyfriend indoors, in which he looks perfectly normal and I can see him just fine, thankyouverymuch, my choices are either light him up like a candle or settle for a long exposure time in which he stays very still. So it seems to me that my eyes are much, much more capable of functioning in lower light than the film/pixels are.

Oxy, you could purchase faster film, so you would not need as much light.

You are talking about a digital camera, right? (CCD detectors are more sensitive than the human retina, but film isn’t.) And you are using higher gain settings (higher ISO setting)?

If it still appears to be less sensitive than your eyes, part of the reason is that consumer digital cameras have really tiny detectors. Even though the detector itself is more sensitive than the human retina, the small sensor size (and the small size of the matching lens) hurts sensitivity.

Also don’t forget that the human eye has a very good tracking system that can compensate for movement. Some cameras compensate for camera shake, but the human eye can also compensate for the movement of the object.

Another reason, I believe, is that we have higher expectations of images than of our own eyes. Our brains are familiar with the limitations of our eyes, so we don’t really notice their poor performance in dim light. For example, in very dim light our eyes cannot see color, but when’s the last time you noticed this? On the other hand, photos are closely examined under good lighting, so it’s easy to notice the reduced sharpness, increased noise, increased grain, etc.

The thing about the eye vs. film is that the eye/brain system is much better at processing contrast than the lens/film system. We automatically compensate for all but the most severe contrast situations. For example, I’m looking out my back door right now. Outside is “properly exposed”. I can still see detail and colour in my dim living room (I don’t have any lights on yet), and I can see details in the shadows of the trees beyond the light-coloured shed out back.

If I were composing a scene for a film (yes, I still shoot 16mm film) I could adjust the exposure for the interior, or I could adjust the exposure for the back yard, or I could expose for the shadows beyond. If I exposed for the back shed, it would (obviously) be properly exposed, but the interior and the shadows in the trees would be black (or nearly so, with very little detail). If I exposed for the interior, the trees would be slightly over-exposed and the shed would be “blown out”.
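
A rough back-of-the-envelope sketch of that scene (in Python, with meter readings that are made up but plausible) shows why you can hold one end of it or the other, not both:

```python
import math

# Rough sketch of why you can expose for one part of the scene or the other.
# Exposure value at ISO 100: EV = log2(N^2 / t), N = f-number, t = shutter (s).
def exposure_value(f_number, shutter_s):
    return math.log2(f_number**2 / shutter_s)

# Hypothetical meter readings for the scene described above:
interior = exposure_value(f_number=2.8, shutter_s=1/15)    # dim living room
sunny_yard = exposure_value(f_number=11, shutter_s=1/250)  # shed in full sun

print(f"interior   EV ~ {interior:.1f}")
print(f"sunny yard EV ~ {sunny_yard:.1f}")
print(f"difference ~ {sunny_yard - interior:.1f} stops")
# Roughly 8 stops apart; most film holds far less usable range than that,
# which is why one end of the scene goes black or blows out.
```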

So why the bright lights in a studio?

There is one plane where the subject is in focus. The depth of field is that range of distances in which things nearer or farther from that plane appear to be in focus. The smaller the aperture, the deeper the depth of field. But when you reduce the aperture, you need to increase the amount of light. By exposing newsies, for example, to large amounts of light, their desks and the wall behind them can also be in focus (i.e., within the depth of field).
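
To put rough numbers on that trade-off, here is a sketch using the standard hyperfocal-distance approximation; the 50mm lens, 3m subject distance and 0.03mm circle of confusion are just illustrative assumptions:

```python
# Rough sketch of aperture vs. depth of field, using the hyperfocal-distance
# approximation. All the numbers are illustrative, not from any real setup.
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    H = (focal_mm**2) / (f_number * coc_mm) / 1000.0  # hyperfocal distance, m
    s = subject_m
    near = H * s / (H + s)
    far = H * s / (H - s) if H > s else float("inf")
    return near, far

for f_number in (2.0, 5.6, 11.0):
    near, far = depth_of_field(focal_mm=50, f_number=f_number, subject_m=3.0)
    print(f"f/{f_number:<4} in focus from {near:.2f} m to {far:.2f} m")
# Stopping down from f/2 to f/11 widens the zone of acceptable focus,
# but each stop halves the light, hence the bright studio lamps.
```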

So how does this relate to the OP?

The trick is to get the film to “know” that dark areas need to be brightened and bright areas need to be dimmed so as to match the exposure of the subject. Our brains do it automatically. In order to do it with film or video, there would have to be some sort of processor that “masks” the brightest areas so that they receive less light, and does the same to a lesser degree to the subject. This would leave the dim areas with relatively more light. You can do this to some degree already when you process still photographs in a darkroom: you can selectively “burn” and “dodge” parts of the image. I can imagine that a sophisticated algorithm could eventually do this with video. It may someday be possible (if anyone cares to do it; serious filmmakers just light the scene) to have some sort of “burning and dodging interface”, similar to variable-density sunglasses, that fits between the lens and the film so that it can approach the contrast sensitivity of the human eye.
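
A toy sketch of that masking idea, in Python with NumPy; the blur radius and strength are arbitrary, and this only illustrates the “automatic dodge and burn” concept, not how any real camera or film process does it:

```python
import numpy as np

# Toy "automatic dodge and burn": build a blurred brightness mask and use it
# to pull highlights down and lift shadows, as a printer would do by hand.
def box_blur(img, radius):
    """Crude separable box blur so the mask follows regions, not single pixels."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def dodge_and_burn(luminance, strength=0.5, radius=8):
    """luminance: 2-D array in [0, 1]. Returns a contrast-compressed version."""
    mask = box_blur(luminance, radius)      # local average brightness
    correction = strength * (0.5 - mask)    # bright regions get a negative correction
    return np.clip(luminance + correction, 0.0, 1.0)

# Example: a frame that is half deep shadow, half nearly blown-out highlight.
frame = np.hstack([np.full((64, 64), 0.05), np.full((64, 64), 0.95)])
out = dodge_and_burn(frame)
print(frame[32, 10], "->", out[32, 10])    # shadow value comes up
print(frame[32, 110], "->", out[32, 110])  # highlight value comes down
```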

I think the “can’t get something for nothing” clause in the laws of thermodynamics applies here. Whenever you raise sensitivity, whether with high ISO film, digital sensors or eyeballs, image quality goes down. When you work with less light there are fewer photons to make the image. The result is less color, higher noise and grain. It’s literally a problem of having too few photons for the distribution to be even, plus the inherent noise in the system.
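
A quick simulation of that photon-counting problem (the photons-per-pixel figures are made-up numbers) shows how the signal-to-noise ratio scales as the square root of the photon count:

```python
import numpy as np

# Back-of-the-envelope shot-noise demo: the SNR of a photon count scales as
# sqrt(N), so fewer photons means relatively more noise.
rng = np.random.default_rng(0)

for mean_photons in (20, 2000):              # dim pixel vs. brightly lit pixel
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"{mean_photons:>5} photons/pixel: SNR ~ {snr:.1f} "
          f"(sqrt(N) = {mean_photons**0.5:.1f})")
# Read noise from the electronics adds on top of this, which is one more
# reason studios just pour light on the set.
```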

One way to get more photons is to just make everything bigger: scale up the sensor or film and scale the optics to match. This works with film and with digital cameras, both still and video. Most consumer digital cameras use a sensor that measures 7.2mm, and usually smaller, on the long side. DSLR cameras have a sensor that measures at least 22.7mm. That means, for the same number of pixels, each photosite on the sensor is bigger. If you have proportionally bigger optics, that means more photons for the same exposure value, and that directly translates into cleaner images, particularly at high ISO values. Having fewer pixels makes images cleaner too. The Nikon D2H has only 4 megapixels but outperforms consumer cameras with tiny 8 megapixel sensors under the same lighting conditions.
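
Rough arithmetic on those two sensor sizes (assuming the same pixel count for both and, purely for illustration, a 3:2 aspect ratio):

```python
# Quick arithmetic on the long-side sensor sizes mentioned above. The shared
# 3:2 aspect ratio is an assumption to keep the comparison simple.
def sensor_area_mm2(long_side_mm, aspect=3 / 2):
    return long_side_mm * (long_side_mm / aspect)

compact = sensor_area_mm2(7.2)   # typical small consumer sensor
dslr = sensor_area_mm2(22.7)     # APS-C-sized DSLR sensor

print(f"compact sensor: {compact:.0f} mm^2")
print(f"DSLR sensor:    {dslr:.0f} mm^2")
print(f"area ratio:     ~{dslr / compact:.0f}x")
# Roughly 10x the area per photosite for the same pixel count, i.e. about 10x
# the photons at a given exposure, which is why the big sensor stays cleaner
# at high ISO.
```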

Another factor is that film needs to match the pixel density of the most acute part of the eye over the whole image, whereas your eyes only have that density over a tiny, tiny part of the retina. You need about 500 megapixels of image to achieve this.

Now I’m understanding this - I kinda suspected it had something to do with how our brains process contrast, and I’m glad for the info on simple size of image, too. Bravo.

I hate to scream “cite” but how did you arrive at the 500 megapixel figure?

Slight hijack here:

I know that ISO means sensitivity when related to film and CCDs, but I have no idea what the letters actually stand for. Any dopers able to fill me in?

Never mind, I found it myself. It means “International Standards Organization”… and makes absolutely no sense in context. I guess the 20 million hits I got for ISO on Google were not a false lead after all. Apparently sometimes an organization’s name can be a film sensitivity scale at the same time.

The organization isn’t named after the film sensitivity scale; the name just distinguishes that scale from other standards for measuring the same thing. Film used to be labeled ASA (American Standards Association, I think) in the US, which used the exact same scale as ISO. The DIN scale, Deutsche Industrie Norm, is logarithmic, with three units to every doubling or halving of film speed. I think DIN 21 was the same as ISO 100.
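
If I have that anchor point right (DIN 21 = ISO 100, three DIN degrees per doubling), the conversion is just:

```python
# Sketch of the DIN <-> ISO relationship described above: logarithmic, three
# degrees per doubling, anchored (per the post) at DIN 21 = ISO 100.
def din_to_iso(din):
    return 100 * 2 ** ((din - 21) / 3)

for din in (18, 21, 24, 27):
    print(f"DIN {din} ~ ISO {din_to_iso(din):.0f}")
# DIN 18 ~ ISO 50, DIN 21 ~ ISO 100, DIN 24 ~ ISO 200, DIN 27 ~ ISO 400
```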

The most sensitive part of your eye can distinguish, I think, 5 arc minutes or something similar. Given a 180 degree viewing field, you arrive at 500 million pixels. I can’t be bothered with the calculations, but it’s around that order of magnitude.

The rule of thumb I’ve heard is that the human eye has a maximum resolution of about 1 arcminute. A 180 degree field of view is about 20,000 square degrees, so you need about 70 million pixels to achieve 1 arcminute resolution.
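
The arithmetic, for anyone who wants to check it (a full hemisphere is assumed for the 180 degree field):

```python
import math

# Pixels needed to cover a hemispherical (180 degree) field of view at a given
# angular resolution.
def pixels_needed(resolution_arcmin, solid_angle_sr=2 * math.pi):
    sq_deg = solid_angle_sr * (180 / math.pi) ** 2     # ~20,600 square degrees
    pixels_per_sq_deg = (60 / resolution_arcmin) ** 2  # pixels per square degree
    return sq_deg * pixels_per_sq_deg

print(f"1 arcmin: {pixels_needed(1) / 1e6:.0f} megapixels")  # ~74 MP
print(f"5 arcmin: {pixels_needed(5) / 1e6:.1f} megapixels")  # ~3 MP
# 500 megapixels would correspond to a resolution of roughly 0.4 arcminutes.
```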

Shalmanese, could you check your math (and mine)? The 5 arcminute number sounds unreasonably large (it’s 1/6 the diameter of the moon!), and at that resolution you only need 3 megapixels to cover 180 degrees. Where did the 500 megapixel number come from?

Thanks for that info. Your examples also point out another reason that cameras can’t “see” as our eyes do: a very wide field of vision. Fisheye lenses are not rectilinear (straight lines are not kept straight except those radial to the center) because they have to project onto a flat surface.

Perhaps it was 1/5th of an arc minute? I believe 1 arc minute is for distinguishing either parallel vertical or horizontal lines, but there are circumstances where the eye can resolve finer. I’ve done a couple of practical experiments (drew a single white pixel on a completely black screen and walked back until it disappeared) and I think I got around 500 megapixels’ worth.
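
For what it’s worth, a quick calculation on that experiment (assuming a typical monitor pixel pitch of about 0.25mm, which is a guess):

```python
import math

# Distance at which a pixel of a given size subtends one arcminute. The 0.25mm
# pixel pitch is an assumed, typical figure for monitors of that era.
def distance_for_arcmin(pixel_mm, arcmin=1.0):
    return pixel_mm / math.tan(math.radians(arcmin / 60))

print(f"0.25 mm pixel subtends 1 arcmin at {distance_for_arcmin(0.25) / 1000:.2f} m")
# ~0.86 m. If the dot is still visible from much farther back than that, the
# eye may be detecting a bright point against black rather than resolving it,
# which would inflate the apparent megapixel figure.
```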

Quick point.

A panoramic camera that created a 180x180 degree image at, say, 1 arcminute resolution would be vastly more capable than a human eye.

The human eye’s field of view is a lot less than 180 degrees.

Secondly, the resolutions you’re claiming only apply to an area of very roughly 5x5 degrees, the “central visual field”. Acuity falls off dramatically as you get away from the center. At the extreme edges of the field, acuity is almost a misnomer; there’s little there but color and motion detection.

The post-processing in your brain covers for a lot of that, not by creating data where none exists, but by leaving you “comfortable” with the idea that you really can’t see out there at the edges; you just think you can.