Computers and Colours

Whenever you have a selection of colours you can choose from, they’re always red, yellow, green, blue, and purple, with tons of different shades for each. But orange and brown are never an option. If you want those colours, you have to make them yourself by adjusting the amounts of red, green, and blue, along with hue, saturation, and lightness.

I’ve always thought that orange and brown were popular enough to warrant their own line of shades. Maybe this is just a case of colour prejudice… and, if so, what did orange and brown ever do to deserve it? I’ve always found them to be warm and likeable colours, not at all like that hateful red.

But all kidding aside… why is it difficult for computers to generate orange and brown colours?

Monitors can only generate the colors contained within the triangle formed by the three primary colors they use. Search for “CIE Chromaticity Diagram”:
http://hyperphysics.phy-astr.gsu.edu/hbase/vision/cie.html
http://www.biyee.net/v/honest_charts.htm
http://www.siggraph.org/education/materials/HyperGraph/color/colorcie.htm

Our color space is three-dimensional: we always need three independent values to describe a color. The CIE diagram has only two dimensions; it shows ‘colorness’, but says nothing about luminance. There is a third dimension, with the same colors extending along it into increasing brightness. The gamut triangle for a monitor is a cross-section cutting diagonally through its RGB color cube. The monitor can’t display colors with chromaticity coordinates outside of this triangle, but it isn’t limited to that one plane.
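To make that luminance point concrete, here’s a rough Python sketch that projects an RGB color onto the xy plane of the CIE diagram. I’m assuming sRGB-like primaries here (the matrix is the textbook linear-RGB-to-XYZ one); a real monitor’s numbers would differ a bit:

```python
# Sketch: project a linear RGB color onto the CIE xy chromaticity plane.
# Assumes sRGB-like primaries; the matrix is the textbook linear RGB -> XYZ one.

def rgb_to_xy(r, g, b):
    """r, g, b are linear intensities in the range 0..1."""
    x_big = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y_big = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Y carries the luminance
    z_big = 0.0193 * r + 0.1192 * g + 0.9505 * b
    total = x_big + y_big + z_big
    return x_big / total, y_big / total  # chromaticity only, luminance dropped

# Orange and a darker orange land on the SAME point of the diagram:
print(rgb_to_xy(1.0, 0.5, 0.0))   # orange
print(rgb_to_xy(0.5, 0.25, 0.0))  # half as bright, same (x, y)
```

Note how dividing by the total throws the brightness away: that’s exactly why a bright orange and a dark orange (brown) are indistinguishable on the chart.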

Every color monitor uses red, green, and blue light, because this gets closest to feeding the receptors in our eyes with three independent values. If it can mix white from them (or something your eyes are willing to accept as the white point), it can make you perceive all the colors in the three-dimensional volume of a complete color cube. It doesn’t matter that light from other sources could make you see a bluer blue, for instance. Your eyes will adapt to the new boundaries and accept them.

When software lets you choose from a continuous area of colors, it has to map these three dimensions onto a plane you can click on with your mouse. Many different mappings can be used, but one value always has to be fixed (or set from another control elsewhere).

The usual color wheel only lets you choose from colors with a fixed brightness. You won’t find a brown in that rainbow, because brown is actually just a kind of dark orange. You have to choose the orange and select a lower brightness elsewhere. Let’s face it, that likeable brown isn’t really a color. :( :wink:
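To see that in numbers, here’s a quick sketch using Python’s colorsys module: keep orange’s hue and saturation, turn down only the value, and out comes brown. (The exact hue of 30 degrees and the 0.45 value are just illustrative picks.)

```python
import colorsys

# Same hue (about 30 degrees) and full saturation; only the brightness differs.
orange = colorsys.hsv_to_rgb(30 / 360, 1.0, 1.0)   # full brightness
brown  = colorsys.hsv_to_rgb(30 / 360, 1.0, 0.45)  # same hue, dimmed

print([round(c * 255) for c in orange])  # [255, 128, 0] -> orange
print([round(c * 255) for c in brown])   # [115, 57, 0]  -> brown
```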

But why, then, don’t such common colors appear in the discrete lists of preset colors, where it wouldn’t matter which parameters have to be changed?

Tradition. The early color-enabled PC hardware wasn’t even able to display orange. It could only switch each of red, green, and blue individually on or off. The text-blinking attribute (of course, we’re still talking about the text-only-terminal era) was optionally reinterpreted to double the total brightness, leaving us with the standard 16-color PC palette: black, white, red, green, blue, yellow, cyan, magenta, and their brighter counterparts.
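For illustration, here’s roughly how those 16 colors fall out of one bit each for red, green, and blue plus a brightness bit. The 0xAA and 0x55 levels are common RGBI monitor values, not a spec for any particular card:

```python
# Sketch: derive the classic 16-color PC palette from three on/off bits
# plus an intensity bit. Levels 0xAA and 0x55 are typical RGBI values;
# the exact levels varied between hardware.

palette = []
for intensity in (0, 1):
    for color in range(8):                    # bit 2 = red, 1 = green, 0 = blue
        r = 0xAA * ((color >> 2) & 1) + 0x55 * intensity
        g = 0xAA * ((color >> 1) & 1) + 0x55 * intensity
        b = 0xAA * (color & 1) + 0x55 * intensity
        palette.append((r, g, b))

for rgb in palette:
    print("#%02X%02X%02X" % rgb)              # no orange or brown in sight
```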

Also, these first 8 colors are the corners of the color cube, and every color selection (even a non-PC-related one) at least has to be based on them. You wouldn’t gain that much from having a line of orange shades, because you could vary only one parameter, and there would still be so many colors you’d miss. Still, why don’t they add a reasonable and useful selection of single colors?

Most importantly, instead of putting work into that, it’s much easier for the programmer to give the software some corner values and have it automatically generate an insane number of intermediate steps between them.
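Something like this hypothetical helper, just for illustration:

```python
# Sketch: the lazy way to fill a palette - hand the code two corner
# colors and let it generate every intermediate step automatically.

def gradient(start, end, steps):
    """Linearly interpolate between two RGB tuples, inclusive (steps >= 2)."""
    return [
        tuple(round(s + (e - s) * i / (steps - 1)) for s, e in zip(start, end))
        for i in range(steps)
    ]

# 256 shades between black and full red, and no human picked any of them:
for r, g, b in gradient((0, 0, 0), (255, 0, 0), 256):
    print("#%02X%02X%02X" % (r, g, b))
```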

(“text-blinking attribute” … oh boy do I feel old again :()

You would have thought there would have been some pressure from the terminal era to consider orange an important choice (assuming you weren’t stuck with a 16-color display). The reason I say this is that monochrome terminals typically came in white, green, or amber on black, and I know that a lot of people preferred the amber terminals over white on black (and most people hated green, yet some manufacturers pushed it). For a long time I used to go out of my way to configure my text windows to be amber on black, with a cornflower blue background for reverse video, an arrangement I liked from using an old color terminal. And I always had to futz about with the color mixing choices to get anything close to the orange “amber” text.

Tradition, my foot. The fact is that the three RGB colors generated by the monitor’s phosphors enclose a certain area of hues on the CIE chart and leave others out. Luminance has nothing to do with it: you can go from 0 to 100%, but you are limited to the colors that can be generated within the triangle formed by your three corners.

The RGB triangle includes the orange shades, as evidenced by the fact that you CAN achieve them with the RGB or HLS mixer controls supplied by your software. Most of what the RGB triangle leaves out are shades of green and blue. What we seem to be discussing is why typical application designers chose the particular set they did for preset color choices from within the RGB space.

For reference, I just brought up my command window, which was white on black. The 16 presets available there are constructed from various combinations of the levels 0, 128, 192, and 255 for RGB. The 192 is only used to produce a light gray tone as one of the choices; the rest are all mixtures of 0, 128, and 255. I would guess that most designers simply give you choices arrived at as a few full- and half-bright combinations, whatever colors those happen to come out to be, as suggested by femtosecond.
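Out of curiosity, I sketched how a preset list like that could be built from the values I observed. The two gray special cases are my guess at the construction, not anything documented:

```python
# Sketch: build 16 presets as half-bright (128) and full-bright (255)
# mixtures of the R/G/B bits, with 192 appearing once for light gray.
# This reproduces the values seen in my command window's presets.

presets = []
for bright in (0, 1):
    for color in range(8):                    # bit 2 = red, 1 = green, 0 = blue
        level = 255 if bright else 128
        presets.append(tuple(level * ((color >> shift) & 1) for shift in (2, 1, 0)))

presets[7] = (192, 192, 192)   # light gray: the only place 192 shows up
presets[8] = (128, 128, 128)   # dark gray instead of a second black

for rgb in presets:
    print(rgb)
```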

>> What we seem to be discussing is why typical application designers chose the particular set they did for preset color choices from within the RGB space.

In that case ignore all my posts. Obviously I thought we were talking about the limitations of monitors.

Sailor, I’m missing something here. True, you can’t generate every possible color from just R, G, and B, but it’s my understanding that the eye basically sees in combinations of those 3 colors. Aside from the fact that computers can only vary the amount of each color in discrete steps, it would seem to me that the computer and the eye are faced with exactly the same limitations, so the eye shouldn’t be able to see any more colors than what the computer can generate.

Anyway, even if my understanding of human vision is wrong, the video card of a computer is certainly capable of generating numerous shades of brown. Just look at the games Doom and Quake, for example. They use a lot of dark colors and earth tones, and with variable lighting and shading you end up with a very wide range of all of those colors. You can’t possibly say the computer can’t generate them because they aren’t in the color triangle or whatever, because you can see them on the screen.

I don’t know exactly where the default Windows palette came from, but my vote is on tradition too. The first colors that absolutely have to be in there are the ones from the old 16-color and monochrome video cards. In the early days a lot of programs had to run on both types, and you wouldn’t want your program to look weird on some video cards and OK on others. I’m not sure how the rest of the palette got filled in, though. If all else fails, blame it on Microsoft.

Did you check the links I posted? It’s explained there.

The CIE 1931 Yxy color space is absolute and device independent. Of course the absolute color range is limited for every display. For instance, it’s simply impossible for a color monitor to exactly copy the look of an old amber display. You just can’t mix that pure color from the light of its phosphors, the chromaticity coordinates lie outside of its gamut. Sorry, yabob. :slight_smile:
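To put a rough number on that: here’s a point-in-triangle test using approximate sRGB primary chromaticities and a guessed chromaticity for a 589 nm spectral amber. Both sets of numbers are ballpark figures, not measurements of any real tube:

```python
# Sketch: test whether a chromaticity point lies inside a monitor's
# gamut triangle. Primaries are approximate sRGB values; the "amber"
# point is a rough guess for a 589 nm spectral color.

RED, GREEN, BLUE = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)

def inside_gamut(p, a=RED, b=GREEN, c=BLUE):
    """Point-in-triangle test via the signs of three cross products."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 >= 0) == (d2 >= 0) == (d3 >= 0)

print(inside_gamut((0.313, 0.329)))  # white point: True
print(inside_gamut((0.57, 0.43)))    # spectral amber: False, out of gamut
```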

When you look at a monitor your eyes do a good job adapting to its black point and white point, and therefore also to the boundaries of its color volume. There are a lot of absolute colors left outside, but the relative volume is accepted as being the new absolute one by the eye.

The internal RGB color space of the computer is relative only to itself. Where it gets mapped in the absolute Yxy space depends on the display device, but internally it is a complete color cube, with three values ranging from 0 to 255, and this is the color space we move around in when talking about computer colors.

If all else fails? I think that’s what you always should start with. :slight_smile:

Seriously, the Microsoft Paint(brush) program set the standard for color palettes in the Windows world; it was the first, and other applications had to follow it. The early EGA monitors were controlled with digital on/off RGB signals (plus the brightness bit), so there was no point in having more than the fixed 16-color palette, because that’s all the hardware was able to display.

I’m glad my memory is fading about which came first: software simulating additional colors with dithering patterns, or VGA cards being able to display them. At least the color dithering would explain why, when the standard Windows palette finally got extended, all the color values were multiples of 64. On an older card, a color with values not divisible by 64 would have to be translated into an ugly pattern of dots.
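For the curious, that kind of dithering amounts to something like the following sketch with a 2x2 Bayer pattern. The pattern and the 64-step quantization are my guesses at how such a driver might have worked, not a description of any actual one:

```python
# Sketch: simulate an in-between shade on a limited palette by dithering
# with a 2x2 Bayer matrix. The step of 64 mirrors the guess above that
# older cards could only show color values divisible by 64.

BAYER_2X2 = [[0, 2],
             [3, 1]]   # classic 2x2 threshold order

def dither_channel(value, x, y, step=64):
    """Round one 0..255 channel up or down to a multiple of `step`,
    choosing per pixel so the dot pattern averages out to `value`."""
    low = (value // step) * step
    threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 * step
    return min(low + step, 255) if (value - low) > threshold else low

# A 4x4 patch of the "impossible" in-between value 160 becomes a
# checkerboard of 128 and 192 that averages back to 160:
for y in range(4):
    print([dither_channel(160, x, y) for x in range(4)])
```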

They don’t still expect their software to run on EGA cards, I hope. But maybe $A08040 really is just easier to type than $EAE6D1. And you can make most people happy with simple and pretty colors, the more the better. Not enough people seem to complain that most of them are useless.