24 versus 32 (bits per pixel) Color (the other 8?)

Yes, I know there is no difference between 24-bit and 32-bit color on our modern computers, and that the two are different terms for the same digital colorspace (known to Windows users as “True Color” and to Macintosh users as “Millions of Colors”).

I know that only 24 bits of the 32 are used for RGB values.

Whenever I’ve come across a passage explaining this, it reads something like “…the other eight bits are used for encoding other characteristics such as transparency”.

OK. “Such as transparency” = “transparency and a few other things”? What other things?

And about transparency itself, for that matter: Photoshop can make a layer or a selection transparent to a percentage set by the user, and Aqua under Mac OS X uses transparency to create translucent menus, dialogs, and whatnot. But are Photoshop and Mac OS X using the extra bits inherent in 32-bit color? (Actually, I assume not, at least for Mac OS X, since many folks run Mac OS X in 16-bit color for performance reasons and they still get translucency/transparency effects.)

I’ve never seen a JPEG, 32-bit program icon, or other 32-bit color file that exhibited characteristics not present in 16- or even 8-bit equivalents, aside from the wider range of colors and the things that come with that (smooth gradations, natural-looking imagery, etc.).

Would someone in the know please do a brief tutorial on these other 8 bits, preferably with links to example image files? I’m really curious about these mystery bits and what they are good for…

There is a difference. Download a quality picture, I mean a really high-resolution one with lots of colors. Then view it in 32-bit, and after that, switch it to 24-bit. You will see color loss.

I’m not aware of any 32-bit RGB formats. The RAW format from my Dimage 7 camera is 48-bit, but that’s still RGB. There are 32-bit formats for CMYK, but that is a printing format. In theory the subtractive primaries should be able to produce any color, but it doesn’t work in the real world with inks, so that’s why offset printing is done with cyan, magenta, yellow, and black.

Not true. That said, there are systems that can distinguish between 33 billion colors. True Color simply refers to the optimal number of shades required for an average human eye to think the colors are equivalent to realistic shading.

To the OP:

The extra 8 bits are indeed for transparency.

Those 8 bits are also called the Alpha Channel.

You can store an Alpha Mask of the image there.

What’s the Alpha Mask?

Well, just read this.

I’ve worked with Alpha Masks for 12 years now, but I can’t explain it as succinctly as this.
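To make the layout concrete, here’s a minimal sketch of the common 8/8/8/8 ARGB packing. This is a convention, not any particular format’s spec, and the function names are mine:

```python
# Pack four 8-bit channels into one 32-bit pixel; the top byte is the
# alpha channel, the remaining 24 bits are RGB.
def pack_argb(a, r, g, b):
    return (a << 24) | (r << 16) | (g << 8) | b

# Split a 32-bit pixel back into its (a, r, g, b) channels.
def unpack_argb(pixel):
    return ((pixel >> 24) & 0xFF,
            (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF,
            pixel & 0xFF)

p = pack_argb(128, 255, 0, 0)   # 50%-alpha pure red
print(hex(p))                   # 0x80ff0000
print(unpack_argb(p))           # (128, 255, 0, 0)
```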

I don’t think I’m sold on the alpha channel thing. Your link is talking about the .psp format, and if I’m writing my own picture format, I can add support for text, music, and whatever else I want. Certainly I can see the advantage of transparency, but I think the question is what the OS is doing when you set Windows/X-Win/Battlefield 1942 to 32-bit. Sure, my icons get transparent at the edges, but is this the work of 32-bittiness?

You misunderstand. I don’t know how the icons are rendered transparent but I wouldn’t be surprised.

Just that I’ve worked in CG animation/compositing for 12 years now and that when we save a picture (say .TGA) as 32-bit, we also save a transparency mask with it.

The simplest alpha mask is a B&W image embedded within the file. White shapes indicate transparency at the same location within the actual picture.
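To illustrate how such a mask gets used, here’s a toy per-pixel composite in Python, following the convention just described (mask 255 = fully transparent). Note that real formats and tools differ on which end of the scale means “solid”:

```python
# Blend one RGB foreground pixel over a background pixel using a
# grayscale mask value (0 = opaque foreground, 255 = fully transparent).
def composite(fg, bg, mask):
    t = mask / 255.0   # fraction of the background that shows through
    return tuple(round((1 - t) * f + t * b) for f, b in zip(fg, bg))

print(composite((255, 0, 0), (0, 0, 255), 0))    # (255, 0, 0): solid red
print(composite((255, 0, 0), (0, 0, 255), 255))  # (0, 0, 255): background shows
print(composite((255, 0, 0), (0, 0, 255), 128))  # roughly half-and-half
```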

Check this one as well.

I can’t comment on Gyan9’s answer, because his seems to contradict my understanding of how computers do RGB, but I do know that there is a difference between 24-bit and 32-bit colour. In 24-bit, each colour gets 8 bits (naturally). In 32-bit, red and green get 12 bits, and blue (to which the eye is less sensitive) gets 8 bits. Any programmer who’s coded 32-bit colour software has to deal with this, because the gray-scale values are different (in 24-bit colour mapping, the “grays” are the values that have the same number in each colour; so Red1+Green1+Blue1 and Red200+Green200+Blue200 are both grays).

Alright, I actually bothered to follow Gyan9’s links, and I can comment on them. Nanoda is correct; what he is describing is more than just the RGB colour representation of a graphic image. A 24-bit bitmap has an 8/8/8 split for RGB, and a 32-bit bitmap has 12/12/8. The alpha mask is something else entirely.

For the final time…

Not all 32 bit images are saved the same way. The 32 bit images that I work with have 8 bits reserved for the Alpha Channel.

And like I mentioned elsewhere, true color is an approximation. I doubt the average human eye can perceptibly differentiate between 24-bit color and 32-bit color. To that end, you can represent colors in however many bits you want. You could theoretically have 96-bit color. But it would be a waste of space.

A 32-bit image isn’t independent of its file format. So it’s pointless to talk about a 32-bit bitmap without referring to the file format containing the image. Once you know that, it’s not a matter of debate anymore; just look up the format spec. FTR, the images I work with and the s/w I work with (Max, XSI, PS7) save 32-bit images as 24 color/8 alpha.

From the .TGA format spec, which is among the more popular image formats in CG for non-film work (especially TV/broadcast):

Each color map entry is stored using an integral number of bytes. The RGB specification for each color map entry is stored in successive bit-fields in the multi-byte entries. Each color bit-field is assumed to be MIN(Field4.3/3, 8) bits in length. If Field 4.3 contains 24, then each color specification is 8 bits in length; if Field 4.3 contains 32, then each color specification is also 8 bits (32/3 gives 10, but 8 is smaller). Unused bit(s) in the multi-byte entries are assumed to specify attribute bits. The attribute bit field is often called the Alpha Channel, Overlay Bit(s) or Interrupt Bit(s).
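The spec’s sizing rule is easy to restate in code. This is just a paraphrase, with my own shorthand names rather than the official TGA field identifiers:

```python
# Each RGB channel in a color-map entry gets MIN(entry_size/3, 8) bits;
# whatever is left over becomes attribute (alpha) bits.
def color_bits_per_channel(entry_size_bits):
    return min(entry_size_bits // 3, 8)

for size in (24, 32):
    c = color_bits_per_channel(size)
    attr = size - 3 * c
    print(f"{size}-bit entry: {c} bits per color, {attr} attribute bit(s)")
# 24-bit entry: 8 bits per color, 0 attribute bit(s)
# 32-bit entry: 8 bits per color, 8 attribute bit(s)
```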

I get the impression that the OP knows what transparency is. Certainly we’ve established that an image can be saved either with an alpha channel or without. Presumably a 32-bit image is either 8/8/8/8 or 12/12/8 as people here are saying. The question I’m reading is, why would the explanation the OP read say ‘such as transparency’? Is there other information that could or would be stored in that last 8 bits if the RGB channels are only taking up the first 24 bits? ‘Such as’ implies that there is.

Even if the OP isn’t asking, I am.

Well, it’s certainly one of the things I’m curious about.

In light of what Gyan9, Cerowyn, and Nanoda have said, perhaps it would be useful to start with the Windows and Macintosh operating systems themselves: what they are prepared to draw on-screen given their various native screen bit-depth settings and a standard 32-bit color image in a common format (TIFF, JPEG); what those image formats are prepared to store within those 32 bits’ worth (especially the mysterious extra 8 bits); and how the operating systems will display those images when the display is set to 32-bit color mode as opposed to 16-bit color mode. After all, the descriptions of color bit depth I was reminiscing about in the OP pertained to OS-level settings, not scanners or digital cameras, which often record 48 bits or more (and, IIRC, throw the less useful info away when the image is saved to disk as TIFF?)

If the extra bits are (optionally and/or occasionally) used for 12/12/8, is my Sony Trinitron gonna know what to do with 12 bits’ worth of red info? Will my MacOS or my friend’s Windows operating system know how to tell the Sony Trin about them in the first place?

If the extra bits are (optionally and/or occasionally) used as transparency/alpha mask info, how is THAT handled? (I don’t recall ever seeing a partially transparent JPEG or TIFF. I’ve seen partially transparent GIFs, but GIFs are 8-bit images!)

but are Photoshop and MacOS X using the extra bits inherent in 32-bit color?

Except that the color isn’t 32-bit. The pixel is represented using 32 bits, of which 24 represent color.

An alpha mask is the main thing stored in the extra 8 bits of 32-bit image file formats. “Other uses” simply means that you could use those bits to control the application of certain raster image effects.
Transparency happens to be the most common use. For example, suppose you wanted to apply a certain glow to the silhouettes of certain moving objects in an animation. You could conceivably have a plugin that writes a number into the alpha field of each pixel that is part of the border of an object. Different numbers would completely describe the borders of all objects (<256) in that picture. You could then load the pictures into an image-editing program with a parser that reads the alpha information and applies the effects only to those pixels, or to pixels within a certain threshold of the marked pixels.
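A hypothetical sketch of that object-ID trick, with invented function names. The point is just that the alpha byte can carry any per-pixel tag, not only opacity:

```python
# pixels: list of (a, r, g, b) tuples where the alpha byte holds an
# object ID written by the renderer. Apply `effect` to the RGB of
# every pixel tagged with `object_id`.
def apply_to_object(pixels, object_id, effect):
    out = []
    for a, r, g, b in pixels:
        if a == object_id:
            r, g, b = effect(r, g, b)
        out.append((a, r, g, b))
    return out

# A stand-in "glow": brighten each channel, clamped at 255.
brighten = lambda r, g, b: (min(r + 40, 255), min(g + 40, 255), min(b + 40, 255))

frame = [(0, 10, 10, 10), (3, 200, 50, 50), (3, 210, 60, 60)]
print(apply_to_object(frame, 3, brighten))
# only the two pixels tagged with ID 3 are brightened
```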

Presumably a 32-bit image is either 8/8/8/8 or 12/12/8 as people here are saying.

Except I haven’t seen cites to that effect. Look here. That is the logged output of a 3DMark test. Scroll down and check the Display Modes section.

On each display mode line, there is a three-digit number listed in brackets. That is the bit allocation for RGB, e.g.:

2048x1536x16-bit RGB [565]

1280x960x32-bit RGB [888]
2048x1536x32-bit RGB [888]

Further down, there’s

TEXTURE FORMATS
32-bit ARGB [8888]
32-bit RGB [888]
16-bit RGB [565]
16-bit RGB [555]
16-bit ARGB [1555]
16-bit ARGB [4444]
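Those bracketed numbers map directly onto bit layouts. Here’s a sketch decoding two of them, with the bit positions assumed from the [565] and [8888] labels:

```python
# 16-bit RGB [565]: 5 bits red, 6 bits green, 5 bits blue.
def unpack_565(p):
    return ((p >> 11) & 0x1F, (p >> 5) & 0x3F, p & 0x1F)

# 32-bit ARGB [8888]: 8 bits each for alpha, red, green, blue.
def unpack_8888(p):
    return ((p >> 24) & 0xFF, (p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF)

print(unpack_565(0xFFFF))       # (31, 63, 31): white
print(unpack_8888(0x80FF0000))  # (128, 255, 0, 0): half-alpha red
```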

And I strongly doubt that any image format intended for monitor display expresses R, G, or B using more than 256 levels of intensity (hence 24-bit color), for two reasons:

  1. The average human eye can’t perceive more than that many levels of intensity. Take-it-with-a-pinch-of-salt cites here and here. Furthermore, any refined intensity perception is at the lower end of the scale, meaning that you can distinguish between a 20-watt and a 50-watt bulb, but not between a 5000-watt and a 5030-watt bulb. In other words, intensity perception is logarithmic, not linear. Which brings me to my second point.

  2. Even if, for some reason, you used a 12/12/8 split, how could you convey that on a computer monitor? The DAC on a video card has to convert all digital data to analog signals for the guns of a monitor. A 12-bit red signal signifies 4096 intensities, but that is a linear scale, and it’s wasted space, since more intensities only make sense at the lower end. Your typical DAC on a standard video card is an 8-bit D/A converter for each color stream. So your video driver would have to resample your 12-bit R back to 8 bits before it passes it on to the DAC (a sketch of that step follows this list). And that defeats the whole purpose of creating a 12/12/8 format, especially for storing icons and texture maps for video games to be played by end users.
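What that resampling step might look like, as a sketch (not actual driver code):

```python
# Collapse a 0..4095 channel value to 0..255 for an 8-bit DAC by
# keeping only the top 8 of the 12 bits.
def resample_12_to_8(value_12bit):
    return value_12bit >> 4

print(resample_12_to_8(4095))  # 255: full intensity survives
print(resample_12_to_8(16))    # 1
print(resample_12_to_8(17))    # 1: sixteen 12-bit steps collapse into one,
                               # so the extra precision is simply thrown away
```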

First, like I said, if there are extra bits (>24), they are always used for extra info and never for the colormap itself.

About the handling, it depends on the program reading the file.

Many times, I’ve saved 32-bit BMPs and have found Photoshop unable to open them. Then, I open them in ACDSee, which does understand the 32-bit BMP, but can’t write it. If I then resave the BMP in ACDSee, I’ll be able to open it in PS, since while rewriting, ACDSee discards the alpha bits.

In simpler words, depending on how the file-format parser is written, the alpha will either be ignored, discarded, or used. It’s up to the program/reader.
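As a toy illustration of the “discarded” case, here’s roughly what a rewriter that doesn’t keep alpha would do to a row of 32-bit BMP pixels (assuming 4-byte BGRA pixels; the function name is mine):

```python
# Rewrite a 32-bit row as 24-bit by dropping every 4th (alpha) byte.
def strip_alpha(row_bgra: bytes) -> bytes:
    return bytes(v for i, v in enumerate(row_bgra) if i % 4 != 3)

row = bytes([10, 20, 30, 255,   40, 50, 60, 128])  # two BGRA pixels
print(list(strip_alpha(row)))  # [10, 20, 30, 40, 50, 60]: alpha is gone
```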

Thanks! Great tutorial!

What’ll really confuse you is whether, for an image, you’re talking about a single pixel being defined as 32 bits of R/G/B/alpha or as an index into a color table. For example, a GIF image has a maximum of 256 colors, but they’re indexed. That is, “color 1” is really defined in terms of a single RGB value, and so on. So you have a GIF capable of either 65,536 or 16.7 million possible colors (16- or 24-bit color map), of which you can only use 256.
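Indexed color in miniature (palette entries invented for illustration):

```python
# The palette maps small indices to full RGB values; the image itself
# stores only the indices, one byte per pixel instead of three.
palette = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 0, 255)}  # up to 256 entries
image = [0, 1, 1, 2, 0]

rgb_pixels = [palette[i] for i in image]
print(rgb_pixels)
# [(0, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 0)]
```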

Also, while it’s nice having 16.7 million colors available, remember that only 786,432 can ever possibly fit on your screen at the same time (assuming 1024x768).

So 12 bit color is pointless because they use an 8-bit DAC? Why wouldn’t they just use a 12-bit DAC?

A search on Google found this.

Basically, if a file format is 32-bit color, then it has 24-bit RGB plus an alpha channel. If an OS like Windows has 32-bit color, then it’s just 24-bit RGB aligned at 32 bits to allow faster memory access, with the extra 8 bits not used.
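The alignment point is easy to see with byte offsets. A sketch, comparing packed 3-byte pixels against padded 4-byte pixels:

```python
# With 3-byte pixels, locating pixel n needs a multiply and pixels
# straddle word boundaries; with 4-byte pixels it's a cheap shift and
# every pixel starts on a word boundary.
def offset_24bpp(n):
    return n * 3        # packed: pixel 1 starts at byte 3 (mid-word)

def offset_32bpp(n):
    return n << 2       # padded: pixel 1 starts at byte 4 (word boundary)

for n in range(4):
    print(n, offset_24bpp(n), offset_32bpp(n))
```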

Because there’s no need for one.

8/8/8 can adequately represent enough shades to satisfy the average human eye.