Are photo IDs racist? (White Balance)

You are quite wrong about this. Without burning and dodging or Photoshop, autoexposure cameras will always have trouble with competing contrasts. It doesn’t matter if it’s landscapes, white people, or black people. Given that the OP is talking about ID-type pictures, it could very well be the backdrop color that’s causing the problem and not the camera or the operator.

As a former scientist, I learnt how personal biases can skew objective observations. The thing about bias is that the bearer is usually unaware of its existence, much less its extent. Hence the need for double-blind studies to eliminate bias from observations.

So to the photography professionals posting to this GQ thread, I request: if most cameras were optimized for whiter skin tones, and most of your photography subjects have been white, how have you eliminated bias when the statement is made: “Darker skin tones are hard to photograph accurately”?

Your question in the second paragraph makes basically no sense.

First, as far as I can tell, only one person in this thread (pulykamell) has actually identified him- or herself as a photography professional. Most of us participating, I would venture, are not professionals.

Also, your use of the passive voice (“when the statement is made”) renders your meaning opaque. Are you asserting that we (the photographers) are claiming to have eliminated unconscious bias? Because I don’t think anyone in this thread has made such a claim. Are you saying that the design of the cameras and our tendency to photograph more white people are, by themselves, an indication of bias?

More generally, the very statement you are using as a reference is, in itself, a rather subjective statement. To the extent that I sometimes find darker skin tones somewhat difficult to reproduce accurately, I understand that this is a product not only of things like camera optimization, but also of my own shortcomings as a photographer and a photo editor.

As I said above, I do my best, in the editing process, to ensure that skin tones are as close as possible to what I saw in real life on the day of the photography, and I think I’m often quite successful. But this is, in some respects, an inherently subjective process. If the subjects of my images looked at themselves in these pictures, some of them might say, “Hey, I don’t think my skin is quite that dark,” or “Hey, you lightened me up too much!”

I’m not really sure what your question is asking, or what sort of implicit accusation you’re making with your post. Maybe you could be a bit more clear.

Let’s theorize about a world in which everyone had dark skin. There is no doubt that cameras would be optimized more for darker skin tones. They would be more willing to overexpose other elements in order to get the skin tones correct.

But we can’t stop there and lay it all at the feet of the camera makers. The photography lighting would be different, and the default photo background color would be different. Everything involved would be slightly different to make photographing darker skin tones better. Can that be done? Yep (as mentioned before, you can if you are a cinematographer who can control every variable). Is it easily or quickly done? Probably not.

However, every little bit helps. For a long time it made sense that camera manufacturers selling cameras to North Americans and Europeans were not going to tweak their cameras to handle darker tones better by default: doing so would mean lighter tones suffer, and since the majority of their customers have lighter skin, those customers would not be happy with the product. BUT…with today’s technology, cameras can often pick out faces and could adjust their exposure to optimize for them. That is not something trivial or easy, and it will often get it wrong: what do you do when you have a mixed group with very light and very dark faces in the same frame? However, it could be an option that they offer.
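To illustrate the idea (and only the idea), here’s a rough Python sketch assuming OpenCV’s bundled Haar face detector; the 18% target, the function name, and the use of 8-bit pixel values as if they were linear are my own simplifications, not any manufacturer’s actual metering logic:

[code]
# Rough sketch: meter on detected faces instead of the whole frame.
# Assumes OpenCV (cv2) and numpy are available.
import cv2
import numpy as np

def face_exposure_compensation(frame_bgr, target=0.18):
    """Return an EV adjustment that nudges detected faces toward `target`."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0  # no faces found: leave the meter's choice alone
    # Average brightness over all detected face boxes, scaled to 0.0-1.0.
    lum = np.mean([gray[y:y + h, x:x + w].mean() / 255.0 for (x, y, w, h) in faces])
    # Positive = open up (faces darker than target), negative = stop down.
    return float(np.log2(target / lum)) if lum > 0 else 0.0
[/code]

The hard part this glosses over is exactly the mixed-group case above: with one very light and one very dark face in the frame, a single compensation number can’t make both come out right, and a smarter system would also need some estimate of how reflective each face actually is rather than pushing everything toward one fixed target.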

Every little bit helps.

I think most of this is probably due to the fact that nationwide, the country is something like 70% white even today, and as recently as the 1980s, was 80% white. Black people have never really exceeded about 15-16% of the population. And that’s not evenly distributed geographically. So Kodak probably made their Shirley cards to reflect the vast majority of photos that were developed.

Nowadays, the easiest thing would be to just have the camera set up with spot metering and some sort of software that would be able to extrapolate the correct exposure based on the skin tone. It probably wouldn’t be the standard 18% gray, but something a bit more sophisticated.
Basically, the DMV person would line up the subject, look at the camera screen, tap on the person’s forehead on screen to show it where to actually meter, and let the camera adjust the exposure based on the skin tone. Or they could do some basic post-processing similar to what DxO PhotoLab does with exposure and hope for the best.
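As a rough illustration of that tap-to-meter idea, here’s a Python sketch; the function name, the spot radius, and the 0.35 reference reflectance are placeholders I made up for the example, not values any DMV camera actually uses:

[code]
# Sketch of "tap the forehead to meter": take a small spot reading where the
# operator tapped and compare it to a chosen skin-tone reference instead of a
# blanket 18% gray. Assumes `linear_luma` is a 2-D array of linear luminance
# values in the 0.0-1.0 range.
import numpy as np

def spot_compensation(linear_luma, tap_x, tap_y, reference=0.35, radius=20):
    """EV shift needed so the tapped spot lands at `reference` reflectance."""
    h, w = linear_luma.shape
    y0, y1 = max(0, tap_y - radius), min(h, tap_y + radius)
    x0, x1 = max(0, tap_x - radius), min(w, tap_x + radius)
    spot = float(linear_luma[y0:y1, x0:x1].mean())
    return float(np.log2(reference / spot)) if spot > 0 else 0.0
[/code]

In practice the reference would have to come from somewhere (a preset the operator picks, or software that estimates it), which is the “something a bit more sophisticated than 18% gray” part.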

I’m not a professional photographer but I did a bit of photography for others in pre-digital times.

Why don’t you start by checking the veracity of your conditional? Or is this another ignorant rant about supposed racism?

Most of the reason skin tones photograph differently has to do with how a camera determines exposure. This has been mentioned before. Metering systems keep getting more complex and can and do take more information into account, but the basic nature of how exposure works is that the camera does not know whether it is taking a picture of a black subject or a white subject.

Let’s start with a piece of white paper and a piece of black paper. Photograph each in autoexposure mode such that the entire sheet of paper fills the frame. The result will be both pictures looking exactly the same shade of neutral gray, even using the most sophisticated, advanced AE algorithm.

The camera simply doesn’t know what it’s photographing, and it will try to expose everything to 18% gray.
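To put rough numbers on that (ballpark reflectance figures, not measurements):

[code]
# Back-of-the-envelope version of the white/black paper experiment: the meter
# sees the average reflectance and shifts exposure until it lands at 18% gray,
# so both sheets come out looking the same.
from math import log2

for name, reflectance in [("white paper", 0.90), ("black paper", 0.04)]:
    shift = log2(0.18 / reflectance)  # EV change the meter applies
    print(f"{name}: meter shifts exposure by {shift:+.1f} stops")
# white paper: about -2.3 stops (darkened), black paper: about +2.2 stops (brightened)
[/code]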

So, if you have a subject with dark skin tones, especially against a dark background, the dumber the AE system, the more likely it is to overexpose the shadow tones, because it is trying to bring out shadow detail and achieve a balance where the brightness values average out to around 18% in the area of the frame it is evaluating, or across several “zones” in the frame. It varies by AE type and manufacturer.
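For the curious, here’s a toy Python version of that zone-averaging behavior; real evaluative metering is proprietary and far more elaborate, and the 4x4 grid and uniform weighting are my own simplifications:

[code]
# Toy zone metering: chop the frame into a grid, average each zone, and aim
# the overall mean at 18% gray. Assumes `linear_luma` is a 2-D array of
# linear luminance values in the 0.0-1.0 range.
import numpy as np

def evaluative_ev_shift(linear_luma, grid=(4, 4), target=0.18):
    h, w = linear_luma.shape
    zh, zw = h // grid[0], w // grid[1]
    zone_means = [linear_luma[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw].mean()
                  for r in range(grid[0]) for c in range(grid[1])]
    avg = float(np.mean(zone_means))
    # Positive = brighten the exposure, negative = darken it.
    return float(np.log2(target / avg)) if avg > 0 else 0.0
[/code]

A dark subject against a dark backdrop drags that average well below 18%, so the camera opens up and the skin ends up rendered lighter than it looked in person.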

I would not say the camera is biased towards white skin tones in this case. It’s just biased towards getting an image that balances to 18% gray across the evaluative parts of the frame. The camera doesn’t know if it’s photographing a Caucasian or a dark-skinned person.

Now, on the issue of color balance: is it possible that systems can be tweaked to bias RGB values towards more common Caucasian skin tones versus other ethnicities? Yes, it is possible, but I do not know whether any manufacturers take skin tones into account in their auto settings. I know that when I do my photography, I usually have to tweak Caucasian skin tones out of the box, because I don’t like them with the default factory settings.

And culturally, it can be strange. I’ve had clients from cultures that prefer lighter skin ask me to lighten them up (it’s not super common with my clientele, but I have had an Indian bride or two ask me to lighten their skin tone). So there is an amount of subjectivity as to what the “proper” skin tone is for a subject.

Of course, within a studio it’s easy to standardize: set a target white balance that stays the same for every subject, the same lighting for every subject, the same lens, the same camera, and the same settings, and thus create a situation where, if there is some biasing in the exposure or skin tones, it is at least equal across all subjects. And then you have to standardize the printing process as well. But that seems like an unnecessary amount of standardization for something like an ID photo, which will look different depending on what lighting conditions you’re viewing it under, what angle you’re looking at it from, etc.
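If you did want to standardize it, the settings themselves are the easy part; something like this (field names and values are purely illustrative, not a real camera API):

[code]
# The "lock every variable" approach: whatever bias exists is at least
# identical for every subject photographed with these settings.
from dataclasses import dataclass

@dataclass(frozen=True)
class StudioIDSetup:
    white_balance_kelvin: int = 5500   # fixed, never auto
    iso: int = 200
    shutter: str = "1/125"
    aperture: str = "f/8"
    flash_power: str = "1/4"           # same strobes, same power, same position
    backdrop: str = "neutral gray"     # predictable metering for every subject

SETUP = StudioIDSetup()  # applied unchanged to every subject
[/code]

The standardization that’s actually hard is everything downstream of the settings: the lighting, the printing, and the viewing conditions mentioned above.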

From the comments in the article I linked in post #28: [quoted comments omitted]

On further review:

I thought I remembered something like this recently. Indeed, you produced almost the exact same threadshit a couple of weeks ago. Since a note from engineer_comp_geek at the time didn’t get through to you, I’m upgrading this to an official Warning.

Yes, auto-stuff does things wrong for people with dark skin. That’s the whole freakin’ point!

Things are by default not set up to properly handle a large section of the population!

The solution is to fix things in the tech.

Right.

It seems that, at least in terms of the overall question about optimizing the camera, one of the OP’s biggest failings is an apparent misunderstanding of how exposure and lighting are actually handled by the camera’s metering system and expressed in the resulting exposure. There seems to be an assumption that you can just make some corrections to how the camera deals with skin tones and everything else will remain the same, but that’s not how exposure works.

While cameras are getting increasingly sophisticated in their ability to deal with varying light conditions, especially contrast, if there’s one thing that I have come to appreciate as a photographer, it’s not the technology in the cameras; it’s the incredible ability of the human eye/brain combination to deal with a massive variety of lighting conditions. We can look at a scene with bright sunlight, deep shadows, different colors, and different lighting sources, and as we look around the scene our eyes and our brains make tiny, almost instantaneous adjustments (some of them mere inferences based on past experience) in order to take in and comprehend as much of the scene as possible. This is especially true for lighting-related issues such as the color temperature of the light sources, and the dynamic range of the scene. We can “see” the warmer light of a tungsten bulb, but we correct for it automatically. We can see that the deep shadow is much much darker than the bright sunlight, but when we look at the shadow we automatically adjust to take in the details.

Cameras don’t quite work like this, or at least not with such incredible speed and detail. For all of the massive improvements in camera technology, the fact is that, when you press the shutter, you have to reduce the whole scene to some sort of average or ideal value. You aren’t really able to make adjustments within the scene, at least not at the moment you take the picture; you are recording it with a single aperture and a single shutter speed, which means that many areas of the scene will not be rendered in a way that resembles what you see with the naked eye.

This doesn’t mean that everything looks bad. One area where digital photography has improved immensely over the last decade or so is dynamic range: the range of light, from bright highlights to dark shadows, that the camera’s sensor can capture while still preserving enough detail for later adjustment. For example, Nikon’s D850 has a dynamic range of about 14 EV (Exposure Value), or 14 “f-stops,” of light. Each EV or f-stop doubles (or halves, depending on the direction) the amount of light. So if your scene has a range of 14 EV, it means that the lightest part of the scene is 2[sup]14[/sup] times as bright as the dimmest part. That’s a range of about 16,000 to 1.
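To spell out that arithmetic (plain Python, nothing camera-specific):

[code]
# Each stop doubles the light, so the contrast ratio across N stops is 2**N.
stops = 14
ratio = 2 ** stops
print(f"{stops} EV of dynamic range is a contrast ratio of about {ratio:,}:1")  # 16,384:1
[/code]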

But while a modern camera like the D850 might be able to photograph a scene with a dynamic range of 14 EV, it won’t render all parts of that scene as the eye sees them. What the excellent dynamic range means is that, when the photographer puts the digital picture file onto the computer, he or she can then use software like Lightroom to process the picture, bringing up detail in the shadows and bringing down the highlights, or whatever is necessary to render the scene in the desired way. It’s worth noting that this works far, far better if you use the RAW files from the camera rather than the JPEGs. The RAW files record the data as it hits the sensor, preserving as much of it as possible and allowing a much greater ability to process the image in a wide variety of ways.
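As a crude illustration of what that post-processing latitude buys you, here’s a toy Python example on a linear-light image (as you would get from a decoded RAW file); the thresholds and the EV amounts are arbitrary, and this is nothing like what Lightroom actually does internally:

[code]
# Toy shadow lift / highlight pull on a linear-light image in the 0.0-1.0
# range. Real RAW editors use smooth tone curves; this hard threshold is only
# to show the principle that the data is there to be pushed around.
import numpy as np

def lift_shadows_pull_highlights(linear, shadow_ev=1.0, highlight_ev=-0.5):
    out = np.clip(linear, 0.0, None).copy()
    shadows = out < 0.18        # everything darker than middle gray
    highlights = out > 0.5      # bright regions to pull back
    out[shadows] *= 2.0 ** shadow_ev        # +1 EV in the shadows
    out[highlights] *= 2.0 ** highlight_ev  # -0.5 EV in the highlights
    return np.clip(out, 0.0, 1.0)
[/code]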

Of course, the OP was asking about cameras, but was also asking specifically about ID photos. This changes things in a couple of ways: on the one hand, the cameras used for ID photos are often much less sophisticated than the D850; on the other hand, the specific photographic conditions for ID photos are generally much more consistent and straightforward than a typical scene. As bump notes, above, it would probably be relatively easy to set up an ID camera system to expose various types of skin tones pretty consistently and accurately.

The ID/license photo cameras are likely lowest-bid, contractor-quality hardware, with poor lighting and an almost entirely untrained employee taking the photo as quickly and indifferently as possible. And (something not mentioned yet) the printer for the photo is probably built with speed and throughput prioritized over quality, too. Nobody in a “stand the person against a nearby wall, click a button and say ‘next!’” setting is going to have the skill, equipment, or time of a professional studio photographer.

I should note, by the way, that ID photos don’t just get dark-skinned people wrong.

For quite a while, until we had new pictures taken for new licenses, my wife and I (both of us pasty Anglos) had our skin tones very poorly reproduced on our California drivers licenses. My wife’s face had an odd green-tinted cast to it, and my face had an orange hue just a bit lighter than one of Willy Wonka’s Oompa-Loompas.

Which film was that? I never saw any color film that claimed better results for different races. Since two of the biggest film makers were in Japan, did they have export versions of their films? I found that film bought in Japan was fine for all faces of all races, subject to getting the exposure right.

Regarding black faces, I have seen reports that the face-recognition software in general use is not as accurate with them. Is it the skin tone, or the facial geometry? AFAIK, the programs work on the latter. And the Chinese have systems that work just fine in the PRC, so perhaps there is some tweaking for a particular race.

I’m not pasty by any means; my normal skin tone is a bit like that of a white person who tans a little and almost got sunburnt yesterday.

Yet in most of my IDs I’m either so completely washed out that there’s no face there, or I look like George Hamilton.

I think that ultimately VERY few people get a good-looking drivers’ licence or ID picture. Most are either underexposed or overexposed.

I think a lot of it is that the cameras are cheap and idiot-proof, and intended to be fast rather than accurate. It’s been most of a decade since my last driver’s license photo, but I remember it being basically “Go stand on that line, look here (points at some target on the camera), and smile!” I ended up kind of underexposed, and I’m super-white.

That said, when I got my latest work badge, they used what looks like a webcam to take my photo, and it looks fine. And so do all my black co-workers’ badge pictures. I suspect that the Texas DMV camera back in 2009 was probably the model they bought when they first switched to digital photos and was just old and crappy.