Okay, yeah, it’s purple, not violet. But it looks pretty much the same as violet. Why? Violet is a shorter wavelength of light. Why does combining blue light, which is a bit longer, and red light, which is longer still, somehow appear to be the shorter wavelength?
Or does it? Is it that violet light can’t be seen very well, but for some reason activates our red and blue receptors?
I saw this thread, and it reminded me that I’ve always wondered about this.
Take some waves here with a certain frequency, and some other waves over there with a slightly different frequency, mash 'em together, and what you get is a wave with a higher frequency (shorter wavelength) because the peaks of each don’t line up. This holds true whenever you add colored light sources: green + blue = cyan; green + red = yellow; red + blue = magenta (or violet or purple, whatever); red + green + blue = white.
Not really. What you’ll get is a wave at the average of the two frequencies, with a slow amplitude modulation on it. The frequency of that amplitude modulation, called the “beat frequency”, will be the difference of the two input frequencies.
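If you want to check this numerically, here’s a quick sketch (the frequencies are arbitrary example values):

```python
import numpy as np

# Two sine waves at nearby frequencies (440 Hz and 444 Hz, arbitrary
# example values) sampled for one second at 48 kHz.
f1, f2 = 440.0, 444.0
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
summed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: sin(a) + sin(b) = 2 cos((a - b)/2) sin((a + b)/2).
# The sum is a 442 Hz carrier (the average frequency) whose amplitude
# swells and fades; the loudness peaks twice per envelope cycle, i.e.
# at the 4 Hz difference ("beat") frequency.
carrier = np.sin(2 * np.pi * (f1 + f2) / 2.0 * t)
envelope = 2.0 * np.cos(2 * np.pi * (f1 - f2) / 2.0 * t)

assert np.allclose(summed, envelope * carrier)
```

No new frequency higher than either input appears – just the average frequency, pulsing at the difference rate.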
Hoo boy. This is hard to explain without pictures.
Color is a product of three different receptors in the human eye. You can think of them roughly as red, green and blue receptors, but actually they each respond to a range of different wavelengths in a non-linear way.
Every color that exists can be thought of as a ratio of RGB stimulus values. If the red and green receptors are firing a lot relative to the blue receptors, that’s yellow, for example. You can represent these ratios with a triangular graph with maxed out RGB at each of the corners.
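As a toy sketch of what “ratio of RGB stimulus values” means (the numbers here are arbitrary examples I made up, not measurements):

```python
# Toy sketch: a colour as a point in the triangle, given by the *ratio*
# of the three channel stimulations (values are arbitrary examples).
def chromaticity(r, g, b):
    total = r + g + b
    return (r / total, g / total, b / total)  # components sum to 1

# Red and green firing a lot relative to blue, i.e. a yellow-ish colour:
r, g, b = chromaticity(0.8, 0.7, 0.1)
assert abs(r + g + b - 1.0) < 1e-9
```

Because the three components always sum to 1, every colour lands somewhere on that triangle.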
If you plot out pure wavelengths of light on this triangular graph, they’ll trace out a U-shaped arc. The arc will be contained INSIDE the triangle. Because the color receptors in the eye have overlapping response curves, you can’t stimulate one channel without stimulating the others at least a little. The reddest red light is not as red as the red you would see if your red receptors could be made to fire all by themselves. But even the reddest red light makes the green receptors fire just a little.
So, to answer your question. Pure violet light makes your blue receptors fire. But it actually triggers your red receptors a bit as well. Which means it looks very much like a mixture of pure blue and pure red light.
Google “tristimulus curves” and you can see how the different receptors respond to different wavelengths. Particularly note the overlapping responses between the red and blue receptors.
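Here’s a toy version of the overlap argument, with made-up Gaussian curves standing in for the real (non-Gaussian) response curves:

```python
import numpy as np

# Made-up Gaussian stand-ins for the real response curves (illustration
# only; actual cone sensitivities are not Gaussian).
def response(wl, center, width):
    return np.exp(-((wl - center) / width) ** 2)

wl = 650.0  # a deep red wavelength, in nm
R = response(wl, 570.0, 50.0)  # "red" channel, peaking in yellow-green
G = response(wl, 540.0, 45.0)  # "green" channel
B = response(wl, 445.0, 40.0)  # "blue" channel

# Because the curves overlap, even this very red light stimulates the
# green channel a little, so its point sits inside the triangle rather
# than at the pure-red corner.
total = R + G + B
assert G > 0.0 and R / total < 1.0
```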
So, if I play two tones (on a cello, say), I’ll get a faint third tone as well, right? But unlike blending colors, I can still hear the first two tones – indeed, they’re almost all I hear.
Maybe that’s because even a single musical note is already really a blend of several tones (that’s what gives each instrument its timbre)? So, to get the equivalent of “red + blue = purple”, I’d have to use pure tones – from a synthesizer, or oscilloscope, or tuning fork?
If so, what is it about things which do blend as colors to make a third color – paint, crayon I think, not sure what else – which is analogous to these rare pure tones?
Right. That’s what I don’t get. The difference between them can’t result in a higher frequency than blue, right?
I can understand how, say, cyan and blue+green can be the same in our eyes. Cyan activates both the blue and green receptors because it’s between both. But violet is not between red and blue. It’s on the other end. And, yet, as long as you use more blue than red, you can get something that looks exactly like violet.
I’m not even sure it’s something to do with our eyes. A digital camera can see the violet of a rainbow, which is pure violet light, I believe. And yet the way it detects this is that the blue and red sensors activate (blue more than red).
Now that I’ve read Hamster King’s post (thanks!), I think part of the answers to my questions will be about similarities and differences between “rods and cones in the eye”, and “those little hairs in the inner ear”.
Acoustic frequency is a very poor metaphor for trying to understand how color vision and color mixing works. Our eyes and ears use very different sampling mechanisms. The ear uses a large number of receptors each tuned to a narrow frequency band, and the eye uses three types of receptors each tuned to a wide frequency band.
We perceive color based in part on how our cone cells react to the wavelengths of light and on how those reactions are combined with each other as they are propagated to the brain. The three cone cells are commonly called “red”, “green”, and “blue”, but this gives an incorrect impression, because the “red” cell actually reacts most strongly to yellow light. It’s better to call them L, M, and S, for the long, medium, and short wavelengths that they respond to. Their reactions to light can then be thought of (I’m simplifying) as if they were multiplexed into three signals: luminance, green versus yellow, and blue versus yellow.
For violet light (see here) we see a low luminance signal, a neutral green versus yellow signal, and a positive blue versus yellow signal. For blue, we see a moderate luminance signal, a positive green versus yellow signal, and a positive blue versus yellow signal. For red, we see a high luminance signal, a negative green versus yellow signal, and a negative blue versus yellow signal. Finally, for white we see a high luminance signal, a neutral green versus yellow signal, and a neutral blue versus yellow signal.
If you work out the math, it turns out that the signals sent for violet + dark grey = dark red + blue. Since dark red + blue = purple, and since we consider dark grey to be color-neutral, we equate violet with purple.
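Here’s a toy version of that arithmetic, with signal values I invented to match the qualitative description above (not measured data):

```python
import numpy as np

# Toy numbers only (invented to match the signs described above, not
# measurements): each color as a vector of
# (luminance, green-vs-yellow, blue-vs-yellow).
violet    = np.array([0.2,  0.0,  0.8])  # low lum, neutral, positive blue
dark_grey = np.array([0.2,  0.0,  0.0])  # some lum, chroma-neutral
dark_red  = np.array([0.1, -0.3, -0.1])  # dim, negative, negative
blue      = np.array([0.3,  0.3,  0.9])  # moderate lum, positive, positive

# The two stimuli send identical signals, so they look the same:
assert np.allclose(violet + dark_grey, dark_red + blue)
```

With these (made-up) numbers, violet-plus-grey and red-plus-blue are literally the same signal vector, which is the sense in which we equate violet with purple.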
The key is realizing that the “red” receptor actually responds to a range of wavelengths besides red. It has a second, smaller hump in its response curve up in the higher wavelengths.
I believe that you are confusing the CIE standard observer color-matching function x-bar with the L cone cell response function. And I believe you mean higher frequencies, not higher wavelengths.
The ear-versus-eye comparison simply doesn’t work. The ear/brain can hear more than one frequency at a time; the eye/brain can’t. (The eye makes up for this by being able to spatially separate things – we can, of course, see more than one thing at a time.)
With only three kinds of receptors the eye can do a good job of working out an equivalent perceived colour when stimulated by a single wavelength. It fails utterly if stimulated with two wavelengths. There is no frequency mixing (heterodyning) as with sound. The eye just gets the overlap of the two stimulus sets, and is required to make sense of them. This means that the eye will perceive colours that don’t exist as pure spectral colours. Purple is one such colour. Purple isn’t violet. There is no single wavelength of light you can find that is purple.
Worth noting that the way the eye works is even messier than the function graphs Punoqllads links to above. Both of those are derived functions that describe notional intermediate processing steps that can be used to understand the eye/brain system. The brain doesn’t see RGB, and the cone cells’ responses aren’t a set of neat peaks but a set of widely overlapping functions with rather subtle differences that create the notional colour responses. The odd glitch in the red reception in the blue wavelengths isn’t a peak in the actual cone’s response; it is an artefact of the differences in response between the different cones.
The sensors in a digital camera are not monochromatic. Like the eye, they have some spread. And those spreads are probably designed to mimic the spreads of the eye’s receptors, so as to produce pictures with colors that look right to us. A digital camera designed by aliens would probably produce pictures that look quite weird to us.
I’m not certain that a digital camera can see violet. An RGB display cannot display violet. I suspect that digital cameras have a similar limitation. An RGB display can only display colors within the gamut defined by the positions of its red, green, and blue primaries, and that gamut can only include violet at the expense of blue. I suspect that digital cameras operate similarly, but now I’m not certain. I think the CCD array is simply arranged along a diffraction grating to split the wavelengths. I’m not sure how the color translation to RGB would go from there. There is no reason a digital camera couldn’t pick up violet.
It’s amazing how complicated something as apparently simple as color can get.
A digital camera can detect colours that can’t be easily displayed. This is one of the really evil issues in colour management. The sensors in a digital camera have bandpass filters with enough overlap to approximate the response of the eye, so they can get pretty close to an equivalent colour sensitivity (if such a term has meaning).

But when you come to display the colour, it is here that you run into trouble. Almost every display technology we have – be it a computer display, printed on paper, projected film, almost everything – displays a wide slew of wavelengths for each of red, green, and blue. This wide set of wavelengths means that the eye does not see the same mix of RGB that the original sensor did.

This probably needs a moment’s thought to realise. If I see a spectrally pure single wavelength of light, the eye or film responds with a particular mix of red, green, and blue. When I go to display that same colour, I emit light of every visible wavelength in some mixture of proportions, and when the eye sees this mix it does not produce exactly the same response as it did to the spectrally pure light. There are a whole set of colours that simply can’t be reproduced. Indeed, essentially all spectrally pure colours cannot be reproduced, only approximated. The art of making a display provide a colour that is an acceptable substitute is difficult, and an art as much as a science. This is why you get choices in managing colour spaces.
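Here’s a numerical toy of that last point, using invented Gaussian channel sensitivities and broadband primaries; it shows a spectrally pure colour demanding a negative amount of one primary, i.e. falling outside the display’s gamut:

```python
import numpy as np

# Illustrative sketch only: made-up Gaussian channel sensitivities and
# broadband display primaries, to show that a spectrally pure colour can
# need a *negative* amount of one primary, i.e. it lies outside the gamut.
wl = np.linspace(380.0, 700.0, 321)  # wavelengths in nm, 1 nm steps

def band(center, width):
    return np.exp(-((wl - center) / width) ** 2)

# Overlapping "red"/"green"/"blue" channel sensitivities (invented numbers):
sens = np.stack([band(570, 50), band(540, 45), band(445, 40)])

# Broadband primaries like a display would emit (also invented):
primaries = np.stack([band(610, 30), band(540, 30), band(460, 30)])

# M[i, j] = response of channel i to one unit of primary j.
M = sens @ primaries.T

# A spectrally pure line at 490 nm (cyan-ish):
pure = np.zeros_like(wl)
pure[np.argmin(np.abs(wl - 490))] = 1.0
target = sens @ pure

# Primary intensities needed to produce the same channel responses:
weights = np.linalg.solve(M, target)
assert weights[0] < 0.0  # needs "negative red": outside the gamut
```

No physical display can emit negative red, so the best it can do is an approximation of that pure cyan – which is exactly the gamut problem described above.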
There are technologies that can get close to the entire visual colour gamut. If you use monochromatic light sources chosen so that each one overlaps the eye’s receptor responses as little as possible, and you mix these, the available gamut widens significantly. This usually means laser sources. Three laser sources gets you a very wide gamut, and four almost the entire human gamut. But the difficulties in efficiency make this technology still a long way away from consumer devices. There is a movement in the industry to establish wide-gamut standards, but don’t hold your breath. Currently the available gamut is based on the response of the available CRT phosphors. Sharp’s four-colour displays are an attempt to create a slightly wider gamut, but since digital content is already balanced for the more limited gamut it isn’t clear that they have much to offer.
There are three pigments in your eyes. One responds most strongly to a yellow-green color (thus activating your retina and sending signals into your optic nerve), while also responding to some extent to red and to green. Another pigment responds most strongly to a blue-greenish color, while still responding to some extent to yellow and blue light. The third responds mostly to blue and violet light, but, due to its molecular structure, it also responds somewhat to red light (while not responding to the colors in between them).
Your eye’s nerve cells do some complex math (in their inimitably chemical way) to figure out what colors of light these pigments were activated by, and they communicate the result to the brain in such a way that the color is divided between a red/green signal and a blue/yellow signal. The fact that one of the eye pigments responds both to the lowest and the highest frequencies of visible light is probably just a coincidence; nevertheless, the color wheel depends on this fact. It just happens to be the case that one of our eye pigments is activated by both ends of the visible spectrum while not being heavily activated by the middle of it.