Photography help needed. Why do the colors from emitted light come out wrong in my photos?

This weekend a family member got married and I was the unofficial photographer at his wedding. For the most part the pictures came out as expected, certainly not professional quality, but the couple were happy with them. The main problem is that the colors from the stained glass windows at the church where the ceremony was held all seem different from what they actually look like: less brilliant than in person. None of the other colors seem to be off, including the white of the bride's wedding dress. I've had the same experience in the past taking pictures of other colored light sources such as Christmas lights or neon lights. Is there a reason these colors always seem to come out wrong? Is there a way to fix this issue? I was using a Canon 5D camera. Thank you all for any answers you may have.

Photographing light sources involves considerations that are different from photographing reflected light. Were the windows the subject of the photo, or the background? If they were the subject, and if you were on Program mode, they were probably underexposed. When photographing light sources, try overexposing, or bracketing with the lowest exposure set to what the camera thinks is the correct exposure.

The good news is that you can correct for most of this in post-processing especially if you are shooting in RAW mode.

We used to worry about reciprocity failure on film when photographing light sources, but I don’t know if that’s an issue on modern sensors.

It’s hard to know what the problem is without more details. Color can be affected by the white balance, aperture, exposure time, ISO settings, any filters you may have been using… if you were shooting a wedding, you probably got a large number of photos. Maybe go through the EXIF data for a photo that you liked the colors on and one that you didn’t like the colors on and see where the settings were different? Preferably if the photos were shot in the same location - it would be really difficult to compare a photo shot indoors from one shot outdoors.

We rely on different light sources throughout our days, including sunlight, incandescent lights, LEDs, and fluorescent lights. They all have different color temperatures and they all tend to add a tint to the things we see. We don't notice much because our eyes automatically adjust to the color temperature of the light.

If you take a picture with film, the lab that does the printing will do color correction with appropriate filters. Nowadays, digital pictures don't rely on film labs, so digital cameras automatically apply color correction algorithms to try to make your recorded image look the way you remember it. In most lighting conditions the cameras do well, but in some tricky lighting conditions they fail. Stained glass windows are a case where the camera's color correction algorithm will struggle particularly hard. As CookingwithGas noted, shooting in RAW format will allow you to correct the image as you choose with photo editing software.
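As a rough illustration of what those in-camera algorithms are doing, here's a minimal white-balance sketch using the classic gray-world assumption. This is my own textbook-style illustration, not Canon's actual (proprietary) algorithm:

```python
import numpy as np

def gray_world_balance(img):
    """Scale each channel so the scene's average color is neutral gray.

    img: float array of shape (H, W, 3), values in [0, 1].
    This is the textbook gray-world assumption -- real cameras use
    far more sophisticated, proprietary methods.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel average
    gray = means.mean()                       # target neutral level
    balanced = img * (gray / means)           # per-channel gain
    return np.clip(balanced, 0.0, 1.0)

# A scene lit by warm (reddish) light: the red channel runs hot.
scene = np.full((4, 4, 3), [0.6, 0.5, 0.4])
corrected = gray_world_balance(scene)
```

The catch is visible right in the assumption: a frame dominated by a stained glass window is nowhere near gray on average, so a correction like this pulls the colors the wrong way.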

Then, there may be differences in the way that your output medium (whether printed or on screen) presents different colors from the recorded image.

Here is a good explainer.

The core problem you have is that there is no extant technology that can get a printed page to attain anything like the colour gamut the human eye is capable of perceiving.
The eye sees a range of colours and the eye-brain perceives colours extending right across the visual spectrum. It also perceives non-spectral colours made by mixing light from the spectral colours. The trouble is that there is no easy way of reversing this process. To get the eye to see the original colour you need to be able to stimulate each of the three sensors independently, so as to get the same RGB value the camera saw into the brain. But the eye’s sensors overlap, and you can’t perform the sensing trick twice. Either you do it in the eye - when it sees the scene for real, or you do it in the camera. But there are no inks or computer screens that are able to give you the ability to deliver light to the eye in such a way that it perceives the colour in the way it was in the first sensing.

Print out a set of pure red, green, and blue patches. Limited by the composition of the inks, the paper, and the viewing conditions, they are as red, as green, and as blue as the print process can ever get. You can't ever print a deeper red, green, or blue.

But someone making a coloured glass is not so limited. Ruby glass filters light to a very deep red, with a peak wavelength much longer than the puny red dyes your printer uses. No matter what, when you print red on the paper, it reflects some light at shorter wavelengths, and that limits the gamut of colours you can represent. Same problem with any spectrally pure colour. The inks are leaky, and the spectrum of light reflected back to your eye is a mess of wavelengths, which makes it impossible to get vivid colours.
You can get close to the eye’s colour gamut by using monochromatic light sources that are carefully chosen to have minimal leakage between the sensors - you sometimes see laser TV technology talked about. These can provide a much wider gamut than any other technology, but still short of what the eye natively can perceive.

You can probably find a CIE colour gamut diagram for your printer. It will have a depressingly small triangle of colours that the printer can reproduce inside a much larger enclosing curve of the colours you can see natively. Realise that this little triangle is all you ever have to work with.
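You can even check numerically whether a colour falls inside that triangle. Here's a rough sketch for the sRGB gamut: the primary chromaticities are the standard sRGB values, but the "ruby glass" test point is an illustrative deep spectral red, not a measurement of any actual glass:

```python
# Check whether a CIE xy chromaticity lies inside the sRGB gamut
# triangle, using a standard point-in-triangle sign test.

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # sRGB primaries (CIE xy)

def sign(p, a, b):
    """Which side of edge a->b the point p falls on."""
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

def in_srgb_gamut(p):
    s1, s2, s3 = sign(p, R, G), sign(p, G, B), sign(p, B, R)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

print(in_srgb_gamut((0.3127, 0.3290)))  # D65 white point -- True (inside)
print(in_srgb_gamut((0.7347, 0.2653)))  # ~700 nm spectral red -- False (outside)
```

A deep ruby red sits out near the spectral locus, beyond the reach of the sRGB triangle, which is exactly why the window looks less brilliant in the photo.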

In a word - no.

Your camera will have done as good a job as you could hope for. The information will be encoded in the sensor data it created. But as above, getting it back again is extremely difficult.

You can improve the color by editing the photo in Lightroom (paid) or a free photo service like Google Photos. Play around with the saturation, contrast, white balance, etc.

You can try to fix things in post from the raw files, but sometimes you just have to embrace the look.
Even at best, photography is largely about optimizing an inadequate sensor or film in a way that creates pleasing photographs. For an egregious example of this, take a look at how film worked in the old days, when Shirley Cards were used to tune processing equipment: the chemistry was all nicely aligned to reproduce white skin in beautiful tones, but failed miserably on African American skin.

Modern sensors are subject to limitations as well (though a bit more subtle), and the manufacturers do their best to account for these in post processing of the raw image. Look at a raw image in an editing tool and it will seem disappointingly bland–that’s because none of the tweaks have been added.

It has been my experience that certain fabric dyes seem to reproduce terribly, possibly as a result of differing sensitivities to near UV light. I remember photographing one woman who was wearing a nice purple dress that appeared blue on my camera. We both puzzled at that for a few minutes, and I tried some additional shots, with the same results. I can easily imagine that direct light through a stained glass window would suffer from a similar effect.

One cool result of the approach used to optimize film sensitivities back in the day is that the same kinds of optimizations (and desensitizations) can be applied to a raw photo. I use Lightroom presets from a company called VSCO that allow me to emulate the look of many classic films. I like the various portrait films, but one of my favorite looks is Kodak Gold 100: it's pretty neat how a quick click of a preset makes a photo jump out as "1980s".

Unsolicited tip for anyone called for such duty:
Hold the camera low, especially if you are a tall guy. Portraiture of women is best done with the camera held at chest or waist height (waist height is typical of model shoots). Photos of children are way better when the camera is at kid level.

My 5D does an excellent job of picking the right white balance automatically, and I’m pretty experienced adjusting white balance in Lightroom if needed. But a co-worker’s wife had trouble photographing a mauve-ish hair clasp (she’s a glassworker) and getting the color right, so I said “no problem”. I put it in a white product-photography tent and illuminated it with daylight-temp strobes, and…had the same problem.

I spent two hours in LR with the piece next to me, and I still couldn’t get the resulting image to match the item to my satisfaction. Some colors are just weird, even in carefully controlled circumstances. An illuminated stained-glass window sounds like a nightmare for photography.

WB is a challenge in event photography because there are often multiple sources of light in varying degrees across the photograph.

In a church you can easily have sunlight in the same frame as light from overhead incandescent lights, as well as a smattering of LED lights backlighting the cross on the wall, and your own flash in the mix. That photograph will be a sweet pain in the backside to balance, probably requiring careful masking and tweaking of different areas in order to get it to look reasonable.

Intriguingly, there is an article on getting better color in digital cinema in the just-released issue of Optics and Photonics News, which you can access without a subscription here:

https://www.osa-opn.org/home/articles/volume_29/may_2018/features/cutting-edge_cinema/

As others have mentioned, part of the problem is the gamut of colors available. As the article points out, it’s getting better.
But the entire issue is very complex. I was surprised to learn that there really is no standardization in the spectra of the RGB separation filters, and when you reproduce the image, either on a monitor or a print, the output isn't standardized, either. There's an awful lot of latitude in what constitutes the "green" signal you receive and the "green" output that is produced. Our eyes and brains are rather forgiving in reconstructing the color, but there's still a lot of variation*. If you want an accurately reproduced color image, take a Lippmann photograph** and be certain that your reconstructing source closely matches the one you took the image with.

*Look up the Land Effect sometime. – The Land Effect

**Lippmann plate - Wikipedia

Without seeing the photo, I wouldn't be able to tell you. What I'm guessing you're seeing is that the stained glass colors look very washed out compared with what your eyes see. The reason is that there's a huge exposure difference between the stained glass and the bride & groom. If you expose properly for the bride and groom, the stained glass is going to be way overexposed. If you expose for the stained glass so the colors come out rich, the bride and groom will be a shadowy blob. (You can deal with this by using flash and knowing how to expose to balance foreground and background.)

I'm assuming that's all you're seeing and it's not some post-production or calibration issue; it seems to be the most probable explanation. There will also be white balancing issues involved (if the bride's dress is coming out white in your pictures in an indoor environment with incandescent lighting, then the stained glass will be registering as much bluer than it actually is), but I think that's a side issue. That said, without seeing the photo, I can't say for certain.

pulykamell, wedding photographer for, oh, 13 years now, and editorial photographer for 7 or so years before that.
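To put that exposure gap in numbers, here's a back-of-the-envelope sketch. The luminance values are made-up illustrations of a typical case, not measurements from the OP's church:

```python
import math

# Illustrative luminances: a sunlit stained-glass window can easily be
# thousands of cd/m^2, while an indoor subject might sit around 50.
L_window, L_subject = 3200.0, 50.0

# Each photographic "stop" is a doubling of light, so the gap in stops
# is the base-2 log of the luminance ratio.
stops = math.log2(L_window / L_subject)
print(f"The window is {stops:.1f} stops brighter than the subject")
```

Six or more stops between window and subject is more than a single exposure can comfortably hold, which is why one or the other ends up wrong.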

Former and sometimes current pro photographer here:

OP needs to go into Photoshop and select all the bright areas in the stained glass windows (the whole area, preferably); go into the selection menu and choose Modify -> Feather, with a large enough pixel radius to make the exposure adjustment look soft and natural; and then pull highlights. That is, go into Adjustments -> Shadows/Highlights, make sure the shadows are dialed back to 0, and then move the highlights slider until a sufficient amount of color in the stained glass windows is revealed. If you overexposed, you may have blown out the highlights too much for this to work.

Don’t let your camera’s meter get fooled into overexposing or underexposing by large spaces of darkness or light in an open area.

It’s almost certain that the highlights are blown; there may be some recovery if the OP goes into the CR2 (raw) file, but even that is unlikely. If the bride and groom are correctly exposed, and I am imagining this picture correctly, the stained glass window will be well outside the camera’s dynamic range.

Pretty sure pulykamell is right. I’ve shot stained glass before and it generally needs to be way underexposed to prevent the lighter glass from being blown-out white. Even in RAW it’s difficult to get it right because there’s such a wide range between light and dark. Plus getting the white balance right is tricky because all the light coming in is colored. The place to start would be the color of the sun at that time of the day and go from there.

Anyway, Photoshop can’t pull highlights out of a JPEG image that’s already hit 100% white. Once it’s white it’s gone. It can tone down anything that’s close, but that doesn’t help much. Bringing up dark areas is much easier, but you get more grain. Even with a RAW image and Adobe Camera Raw, Lightroom, Capture One Pro, or any number of other RAW converters, they can only bring back highlights by a certain amount, like about 1.0 EV. Excessive highlight recovery can also lead to loss of color in those areas, leaving a kind of a gray muddy appearance that’s no good either. I see that with people wearing fluorescent high-viz yellow or green jackets which are always over-exposed due to the nature of fluorescent colors in general. If I try to recover the highlights too much those areas become sort of a desaturated piss yellow and would then need some manual editing.

This is no easy problem. Fixing after the fact can only go so far. I agree with others that a strong fill flash (preferably remote, or otherwise a separate speedlight that’s mounted high above the camera for a better angle) is the best option for getting it right from the beginning. That takes some practice, manual tweaking for every different scene, and it significantly delays any burst shooting, so it might not really be worth the trouble without a crew of other people to fill in and get the rest of the shots.

Using a flash or multiple flashes with a wireless trigger (such as PocketWizards) will help in these situations. It will allow you to light the subjects so that the amount of light coming through the window is close to the amount of light hitting the subjects.

One trick I sometimes use in these situations is to simply take two successive photos, one exposed “properly” and another for the highlights and combine them in post. This is good to know when you’re in a situation where you can’t use flash (like during most church wedding ceremonies) or simply don’t want to use flash (even when I have the option, I personally almost never use flash during a church ceremony–only during processional, recessional and any altar portraits afterwards.)
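For anyone curious what that combining step amounts to, here's a minimal sketch of one way to do the blend. This is my own illustration (assuming two aligned frames as float arrays), not how Lightroom or Photoshop actually implement it:

```python
import numpy as np

def blend_exposures(normal, dark, threshold=0.9, softness=0.1):
    """Blend a normally exposed frame with a darker frame exposed for
    the highlights. Where the normal frame approaches clipping, fade
    toward the dark frame. Arrays are float (H, W, 3) in [0, 1];
    frames are assumed to be aligned (tripod, or registered in post).
    """
    lum = normal.mean(axis=2, keepdims=True)   # crude luminance estimate
    # Weight ramps from 0 (below threshold - softness) to 1 (at threshold),
    # so the transition between the two frames stays soft.
    w = np.clip((lum - (threshold - softness)) / softness, 0.0, 1.0)
    return (1 - w) * normal + w * dark

normal = np.full((2, 2, 3), 1.0)   # blown-out window area
dark = np.full((2, 2, 3), 0.6)     # same area, exposed for the highlights
out = blend_exposures(normal, dark)
```

In the blown-out region the weight saturates and the output comes entirely from the darker frame; in properly exposed regions the normal frame passes through untouched.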

Also, I’ve discovered that the generation of Nikons starting with the D800 onward (I use D750s mostly now) have incredible dynamic range, such that you can expose for the highlights and recover shadows maybe 4 stops or so under without crazy amounts of noise. I have no idea how they do it. The way a RAW file is structured, it should be noisy as hell that far left in the histogram, but even shadows that look like pitch black on the back of my camera I can recover with a +3 to +4 exposure adjustment. It’s nuts.

I do want to point out that the inability to recover color data doesn't inherently mean the images can't be fixed. In theory, anything can be fixed as long as it is within the color gamut, and you can always modify things to fit in the gamut. In practice, though, this is usually not possible.

Still, the OP seems to indicate that they have some access to what the colors are actually supposed to look like. As such, they might be able to recover enough data that colorizing would "work" (i.e., give acceptable results). If they can get access to another photo that doesn't have the problem (or can go take one themselves, with the adjustments mentioned here), they can really go to town fixing it: cut the window out, use perspective tools to warp it over the bad one, and then adjust until it fits the image.

As for the mauve hair clasp: I would guess it is actually violet. Camera sensors don't record violet properly. That said, it should definitely be fixable in post: add enough red to compensate. If that's not the problem, I don't know.

Yes, but “blown-out white” is an extreme and obvious case. The problem is with brightly colored subjects - it can cause one of the color channels to saturate (over-expose) even when the other channels don’t. If the red channel is saturated while the green and blue channels aren’t, the result is a less intense red color - because the R value is clipped while the G & B values are not.

This is why many cameras have RGB histogram displays. It makes it obvious when one of the color channels is saturating.
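A quick sketch of what that per-channel check amounts to, on a made-up 8-bit image of a red pane (my own illustration, not any camera's actual firmware logic):

```python
import numpy as np

def clipped_fraction(img, max_val=255):
    """Fraction of pixels saturated in each channel of an 8-bit image."""
    return (img >= max_val).reshape(-1, img.shape[2]).mean(axis=0)

# A red stained-glass pane: the red channel clips while G and B don't,
# so the recorded color comes out as a flatter, less intense red.
pane = np.zeros((10, 10, 3), dtype=np.uint8)
pane[..., 0] = 255   # red channel fully saturated
pane[..., 1] = 40
pane[..., 2] = 30

print(clipped_fraction(pane))   # [1. 0. 0.]
```

If the first number is near 1 while the others stay low, the red channel blew out on its own, even though a plain luminance histogram would show nothing wrong.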

Indeed. However, if you're shooting raw, you do have to be careful about those RGB displays, as you may have a bit more leeway than the histogram is leading you to believe. The histogram is based on the JPEG preview, so it will show channels clipping before they actually do in the raw file. I find that with my cameras, I generally have about 2/3 to a full stop of highlight headroom before the raw file's channels are saturated. There were rumors a few years ago of the D800 and then the D810 getting a raw histogram in a firmware update, but they never came to fruition. In the meantime, for Nikons, the best way to approximate this is to use the "flat" picture profile. It makes pictures on the back of your camera look, well, flat and yucky, but gives you a better idea of what information is in the file.

Flowers too. There are a lot of flowers (especially blue flowers, like bluet – Houstonia caerulea) that are also difficult to photograph – the sensor in the camera just doesn’t seem to capture the blue in the same way the eye does. But I often have trouble getting reds correct as well.