Is there a term for the degree to which a camera has the ability (or inability) to render all parts of a frame clearly? For example, sometimes the foreground will be rendered clearly while a bright background will be completely washed out. Or the background will be correct, but faces in the foreground will be dark and unrecognizable.
The human eye has the ability to see all of this, but most cameras don’t do an adequate job of capturing what the eye can see.
It sounds to me like you’re talking about “dynamic range.” Dynamic range refers to how wide a range of luminance values a medium (whether digital sensor or film) can record while still retaining useful information (i.e., not rendered completely black or completely white).
The problem is that the human eye is an amazingly complex mechanism that not only has cells specialized for different light levels, but the brain and eye also work together to paint the scene, with the brain filling in the blanks as the eye darts quickly around it.
Because of this, the eye can see many more levels of brightness and darkness together. It’s kind of like comparing a toy piano with two or three octaves to a grand piano with its full keyboard.
We have to resort to tricks to get the photos we want: using special filters to darken only a portion of a scene; using flash to fill in dark shadows and compress the dynamic range; or using multiple frames taken at different exposures to build a single good frame–HDR photography.
Yes, dynamic range. Ilford came out with a black and white film years ago for managing this. It used color film technology, only instead of having different layers for different hues, it had different layers with greatly different sensitivities.
XP2 - I never really warmed to it, although I have not tried any of the newer versions (it is still made). It did do what it claimed and has a very wide dynamic range, but it never had the snap and quality of a conventional B/W film, IMHO.
As above - dynamic range - is the usual term for describing the limit of low to high capabilities. This is used in just about every sensor technology you can think of. It is a fundamental part of living in the real universe. More often you will see “signal to noise” which is an equivalent metric, and arguably a somewhat more complete capturing of the issue. Digital camera sensors may more often quote signal to noise than dynamic range, but there is a direct relationship.
The sensors themselves also have varying levels of dynamic range. While a cheap camera may only record 8 bits per photosite, high-end cameras may record 14 or even 16 bits. And some sensors have high-sensitivity and low-sensitivity regions built right next to each other that can be reconstructed into a single HDR image.
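To put rough numbers on the basic idea, here’s a back-of-the-envelope sketch; the full-well and read-noise figures below are made up for illustration, not the specs of any real sensor:

```python
import math

# Illustrative (made-up) sensor figures, not specs for any real camera:
full_well_electrons = 60000   # brightest signal a photosite can hold
read_noise_electrons = 5      # noise floor with no light at all

# Dynamic range in stops = log2(max usable signal / noise floor)
dr_stops = math.log2(full_well_electrons / read_noise_electrons)

# The same ratio expressed as a signal-to-noise figure in decibels
dr_db = 20 * math.log10(full_well_electrons / read_noise_electrons)

print(f"Dynamic range: {dr_stops:.1f} stops ({dr_db:.0f} dB)")

# For comparison: an n-bit *linear* encoding can distinguish at most n stops,
# which is why an 8-bit file clips such a sensor long before a 14-bit raw does.
for bits in (8, 12, 14):
    print(f"{bits}-bit linear encoding: at most {bits} stops")
```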
And a general note: don’t confuse real HDR imaging with the tacky overprocessed “HDR” you see so often. That look is what you get when images that genuinely have too much dynamic range to display directly are processed down into something a low-dynamic-range computer screen can show. It doesn’t speak to the technology overall.
Learning to recognize and deal with these situations is an important part of becoming a better photographer - maybe you want to avoid harsh midday sun in favor of overcast days or sunrise/sunset light, or use flash to fill in shadows, or use a graduated ND filter to help balance the sky with the foreground, or digitally combine multiple exposures, etc.
Yep, and also learning what your sensor or film can do and how to expose it to capture the maximum amount of dynamic range in your photos. For example, I expose negative differently than slide, and my newer Nikons (D750 and D800) differently than older ones (basically anything before the D800 generation), not to mention how I expose JPEG vs raw (though I haven’t shot JPEG in years besides on my phone), because the way to eke out the maximum dynamic range is slightly different on each of those media.

That’s also why the whole “straight out of camera/no editing” stuff, I feel, is a bit of an arbitrary constraint. The camera doesn’t capture what the human eye sees–it captures a far more dynamically compressed version of it. Editing to bring out shadows and tone down highlights is, in a sense, “truer” to what the human eye sees than a completely untouched straight-from-camera file. That said, creatively, it is often useful to compress the dynamic range to help focus the eye on the important bits in a photo and wash out or bury in deep shadow the unimportant bits.
“Depth of field” is another term that might come to mind here, but it doesn’t sound like what the OP is looking for, given they are talking about how bright or dark parts of an image are compared with what they see.
In some ways things haven’t changed since the days of film. The main difference is that film has a somewhat non-linear response (expressed partly in the contrast - gamma) that can be used to advantage, compared to digital sensors with their more linear response. But the grand master of the technical art of photography - Ansel Adams - and his Zone System still give the most complete way of thinking about the problem. The core point of the system was that, given a calibrated end-to-end system (which for film meant camera, film, processing, enlarger, darkroom, paper, processing), you could derive a known transfer function that maps the light in the scene - via the chosen exposure - to a given grey level on the printed paper. By providing control of the contrast you could map two levels. That was it. But the Zone System was all about giving the photographer complete understanding and maximum control, at the moment the shutter was clicked, over the final “placement” of a lightness level in the scene in the final print. Adams taught that the final picture was decided at the moment the shutter was fired.
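Just to make “placement” concrete: the arithmetic behind it is nothing more than one stop per zone. A minimal sketch, with the zone choices purely illustrative:

```python
# Minimal sketch of Zone System "placement" arithmetic (illustrative only).
# A reflected-light meter always suggests an exposure that renders the metered
# area as middle grey (Zone V). Placing that area on another zone just means
# shifting the exposure by one stop per zone.

def placement_compensation(target_zone: int, metered_zone: int = 5) -> int:
    """Stops of exposure compensation needed to move the metered tone to target_zone."""
    return target_zone - metered_zone

# Example: meter a dark rock face and decide it should print as Zone III
# (dark but with texture) rather than middle grey:
print(placement_compensation(3))   # -2 -> give two stops less exposure
```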
Digital workflows change this a lot in that the transfer function from the sensor’s dynamic range to that of the target (be it screen, projected image or paper) can be manipulated after the fact. However, this doesn’t mean the photographer is freed of responsibility. Mapping the desired part of the scene’s dynamic range into the smaller range of the sensor remains important, as once the information is lost, you can’t get it back. This is why IMHO thinking in terms of signal to noise is more useful now - the noise in a digital sensor is the direct equivalent of film grain (in a very neat manner), and maintaining control and understanding of this is just as critical to a good result.
The human visual system can adapt to different brightnesses, but once the image has been captured on film or digitally, it’s almost too late to depict detail in over-bright or over-dark regions.
Only “almost” too late, because detail information may still be present in the least significant pixel bits. That detail can be made visible with adaptive histogram equalization (AHE). In a previous life I played with image processing and found AHE to be very effective, but I don’t recall seeing it available in software like Photoshop. (Ordinary histogram equalization is available, but it is much less interesting and effective than “adaptive” forms.)
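If you want to try it yourself, OpenCV ships a contrast-limited variant (CLAHE). A minimal sketch, with the file name and parameter values as placeholders to experiment with:

```python
import cv2

# Contrast-limited adaptive histogram equalization (CLAHE), a common AHE variant.
# "photo.jpg" and the parameters below are placeholders.
img = cv2.imread("photo.jpg")

# Work on the lightness channel only so colours aren't shifted.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

result = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("photo_clahe.jpg", result)
```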
I respectfully disagree. HDR photography can be as simple as taking two photos of a sunset, one exposed for the sky and one exposed for the landscape, and combining them in an HDR app (or PS).
Those garish examples you are talking about are misapplication of the tool, IMHO, or intentional artistic choice.
My preference is to do this optically if possible, which is why I keep one of those half-dark, half-clear filters in my bag. But given the right scene, I’d happily set up a tripod and shoot several bracketed shots for later combination into one perfect image. Moving foliage messes this up, but it is possible.
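For the combination step, exposure fusion is about the simplest approach. A sketch with placeholder file names, assuming the frames are already aligned (tripod):

```python
import cv2
import numpy as np

# Exposure fusion of a bracketed series (file names are placeholders).
# Mertens fusion blends the well-exposed parts of each frame directly,
# without building an intermediate HDR image or needing exposure times.
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
images = [cv2.imread(f) for f in files]

fusion = cv2.createMergeMertens().process(images)

# The result is floating point in roughly the 0..1 range; scale to 8 bits.
out = np.clip(fusion * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.jpg", out)
```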
I wonder what percentage of amateur photographers understand this–it’s all but impossible to take a picture of a sunset without ending up with an almost-black foreground, or blown-out highlights.
By the way, I used to shoot JPEG only since I was happy with in-camera processing. Then I did an experiment with some intentionally over/underexposed shots in RAW and was impressed how much can be rescued from RAW that would have been lost in JPEG. For me, RAW is an amazing safety net.
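A toy illustration of why that safety net exists - a deep-shadow gradient quantised at raw-ish and JPEG-ish bit depths (linear encoding assumed for simplicity, so the numbers are only indicative):

```python
import numpy as np

# Toy illustration of why raw files give more room to rescue shadows.
# Simulate a deep-shadow gradient spanning 1/500th of the sensor's range.
scene = np.linspace(0.0, 0.002, 1000)

raw_14bit = np.round(scene * (2**14 - 1))   # raw: dozens of distinct shadow levels
file_8bit = np.round(scene * (2**8 - 1))    # 8-bit (linearised here): essentially two levels

print("distinct shadow values in 14-bit raw:", len(np.unique(raw_14bit)))
print("distinct shadow values in 8-bit file:", len(np.unique(file_8bit)))
# Pushing these shadows up in post, the raw gradient still has gradation;
# the 8-bit version posterises into flat blocks.
```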
One area that only recently became clearer to me is color reproduction. I bought one of those color checker palettes and calibrated my whole workflow, and was amazed how many colors just can’t be represented well. More surprising still was learning how so many classic films get their beautiful look by all of the tweaking that they do to make imperfect color chemistry look good. A case where many compromises make one particular subject matter look great.
I use a set of tools called VSCO Film that can take a properly exposed portrait and make it look like one of many classic films, such as Kodak Portra 160. It does this by applying curves and custom camera profiles to drastically manipulate colors. It makes people look great, but my color calibration palette sure looks weird.
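The curve part is easy to sketch (nothing like VSCO’s actual profiles, which also swap in per-camera colour profiles); the control points and file name below are just made up:

```python
import numpy as np
import cv2

# Simplified sketch of a film-style tone curve applied via a lookup table:
# lifted blacks and a gentle highlight roll-off. "portrait.jpg" is a placeholder.
img = cv2.imread("portrait.jpg")

# Control points: input level -> output level (0..255), purely illustrative.
x = np.array([0, 64, 128, 192, 255])
y = np.array([20, 70, 135, 200, 245])   # raised shadows, compressed highlights

lut = np.interp(np.arange(256), x, y).astype(np.uint8)
curved = cv2.LUT(img, lut)

cv2.imwrite("portrait_curved.jpg", curved)
```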
Sorry, I was unclear. Not all HDR postprocessing is tacky and garish. I just don’t want people to look at those images (some nice examples here) and think that they’re representative of HDR in general.
HDR, primarily, is just a way of getting images into the computer. One can use exposure bracketing or other techniques, but for the most part you want the light levels in this phase to be linear. You then have an image which is an accurate sampling of the scene, but cannot be displayed well on most output devices.
To remap an HDR image to LDR, there are lots of techniques. Some of these just match the optical techniques you mention. Others are more sophisticated, but have the potential for causing problems like haloing.
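As a concrete sketch of those two steps (the file names and exposure times below are placeholders, and Reinhard is just one of many tone-mapping operators):

```python
import cv2
import numpy as np

# Classic two-step HDR workflow (file names and exposure times are placeholders).
files = ["shot_1-30s.jpg", "shot_1-250s.jpg", "shot_1-2000s.jpg"]
times = np.array([1/30, 1/250, 1/2000], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Step 1: merge the brackets into a linear, floating-point radiance map.
hdr = cv2.createMergeDebevec().process(images, times)

# Step 2: tone-map that radiance map down to something a normal screen can show.
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
cv2.imwrite("tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```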
I suppose my point is that if these techniques are applied subtly enough, you don’t notice them, and the photographer has succeeded. HDR works specifically when you don’t know that it’s been applied; when you do notice it, the photographer has screwed things up, and I don’t want people to associate that with HDR done well.
I really hope that HDR displays become commonplace. With them, there’s no need for an error-prone remapping step. They’re coming, but it remains to be seen whether they’ll succeed where things like 3D failed.
Our technology is currently far behind what the eye can see. Take a look at this image. The curved shape is the space of colors that the human eye can see. The black triangle (sRGB) is the most common color space used by images and output devices. It’s easily missing half the color space. Adobe RGB is a little better, but very few monitors support it, and it’s still not great.
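You can see the same thing numerically: take a visible chromaticity outside the sRGB triangle and it comes back with a negative channel value. The matrix below is the standard XYZ-to-linear-sRGB one; the test colour is just an arbitrary saturated green:

```python
import numpy as np

# Rough numerical version of what the chromaticity diagram shows: a colour the
# eye can see but sRGB cannot reproduce comes out with a negative component.
# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xy_to_linear_srgb(x, y, Y=1.0):
    """Convert an xy chromaticity (at luminance Y) to linear sRGB components."""
    X = x / y * Y
    Z = (1 - x - y) / y * Y
    return XYZ_TO_SRGB @ np.array([X, Y, Z])

# An arbitrary saturated green that is visible but well outside the sRGB triangle:
rgb = xy_to_linear_srgb(0.20, 0.70)
print(rgb)                      # one or more components come out negative
print("in sRGB gamut:", bool(np.all((rgb >= 0) & (rgb <= 1))))
```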
There are efforts to widen this color space, but they’re slow going. It requires changes in display technologies (some of which have downsides like increased power consumption). But display makers want to keep selling new displays, so they might be encouraged to stuff in new features.