68 Billion Colours!!!11one. Designed for ET?

I just saw an ad for a TV (Sony Viera HD) claiming to be capable of displaying up to 68 billion colours. Assuming anything ever delivers a signal to it with more than, say, a million colours, what would be the point if humans can only see 16 million?

In other words… is it true that human eyes can only distinguish about 16 million colours? If not, how many, and what's the GQ on it?
On a similar note, how high does a 'frame rate' have to get before it becomes too fast for the viewer to perceive any difference?
P.S. It seems unlikely when you consider that 16 million happens to be the product of 256x256x256 (32-bit 'True Colour')

I think the reasoning behind that is that even if you can't see some of the colors, they may still change the way you perceive other nearby visible colors. So as long as the camera can pick them up, they're displaying them to keep everything true to the original.

IIRC, there were some studies that suggested that, contrary to previous studies, 24 bits (16.7 million colours) aren't enough shades to encompass the range of human vision.
The idea being that if you’re in a very red environment, say, the way you perceive red changes and you’re able to resolve more reds than if you were in a more multicoloured environment.

As I said, this is IIRC.

Then of course there's the dynamic range of a screen, which indicates the difference between the darkest black and the lightest white. Traditional CRT screens can only display a small fraction of the range of intensities that we can perceive; LCDs go further but are still nowhere near the range we can see. Whether or not this should count towards the number of colours… I'm pretty confused about that.
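If it helps to put the two ranges on the same scale, here's a quick back-of-the-envelope sketch in Python. The contrast ratios in it are made-up placeholders, not measurements of any actual screen or of the eye; the log-to-stops arithmetic is the only point.

[code]
import math

def stops(contrast_ratio):
    # Dynamic range in photographic "stops": each stop is a doubling of intensity.
    return math.log2(contrast_ratio)

# Illustrative, made-up contrast ratios -- not measurements of any real device or of the eye.
for name, ratio in [("old CRT", 200), ("typical LCD", 1000), ("eye, single scene", 10000)]:
    print(f"{name}: {ratio}:1 is about {stops(ratio):.1f} stops")
[/code]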

They’re made for the Comcast turtles …

(turtles are among the animals that have 4 color receptors, rather than three)

In computer games, at least, IIRC they say anything over 60 FPS is unnoticeable to the person playing. But that's because there is no motion blur in games. We get away with far fewer frames per second (24) in film because there is motion blur. Television runs at 30 FPS and needs to, because the environment is different from a movie theater (generally brighter) and because of the way TVs generally refresh the screen.

According to this guy, humans have four colour-receptors as well, and our retinas are sensitive to ultraviolet. It’s our lenses that block the ultraviolet. There do seem to be lots of assertions on this site that can be tested.

I remain strangely-intrigued by this. I want to see more colours, dammit!

Seems like an easy solution. They make a blade, similar to a carpenter's plane, that takes your cornea off.

http://www.snydereyedoc.com/Advanced%20LVC/LASIK/Procedure1.png

Oh dear Og. :: shudder ::

I think it’s the lens that does most of the UV blocking, anyways.

It’s not even enough to encompass the range of a cheap computer monitor. 24-bit color means 8 bits per channel, which means it can display 2[sup]8[/sup]=256 shades each in red, green and blue. That gives you 256^3=16 million combinations. But gray is made up of equal amount of each color, so it can only display 256 shades of gray. This image is a 16x16 grid showing 256 shades of gray. You can see the vertical bands as well as horizontal, which means 256 shades isn’t enough to make a gradation that looks smooth to the human eye.
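If anyone wants to reproduce that kind of test image rather than trust my link, here's a rough Python/Pillow sketch of my own (not the actual image linked above) that lays out all 256 8-bit gray levels as a 16x16 grid:

[code]
# Recreation of a 16x16 grid of all 256 8-bit gray levels (not the image linked above).
# Requires Pillow: pip install Pillow
from PIL import Image

CELL = 32                                      # each gray level drawn as a 32x32 square
img = Image.new("L", (16 * CELL, 16 * CELL))   # "L" = 8-bit grayscale

for level in range(256):
    row, col = divmod(level, 16)
    square = Image.new("L", (CELL, CELL), color=level)
    img.paste(square, (col * CELL, row * CELL))

img.save("gray_grid.png")
[/code]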

68 billion would be 12 bits per channel, capable of displaying 2[sup]12[/sup]=4096 shades. I’d consider that adequate. Most digital cameras use 10 or 12 bit, I believe.
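Spelling out the arithmetic (the rounding down to "68 billion" is my guess at how the marketing number was reached):

[code]
# Colour counts for various bit depths per channel (three channels: R, G, B).
for bits in (8, 10, 12):
    shades = 2 ** bits        # shades per channel
    colours = shades ** 3     # total RGB combinations
    print(f"{bits} bits/channel: {shades} shades, {colours:,} colours")

# 8 bits/channel:   256 shades, 16,777,216 colours (the usual "16.7 million")
# 12 bits/channel: 4096 shades, 68,719,476,736 colours (the "68 billion" in the ad)
[/code]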

Back in the day when I wrote photo-retouching software I could easily spot a 1 bit difference in 24 bit color.

Instead of using 8 bits per channel, it sounds like the Viera is using 12. That's about what you have to do if you really want 1-bit differences to be below the threshold of human perception, particularly if you're using an RGB color space.

(RGB is not the most efficient way to encode images because the human eye is more sensitive to luminosity than hue and the green channel in an RGB image contains significantly more luminosity information than the red and blue channels. So some of the bits in the red and blue signals are wasted. However it’s simpler from an engineering perspective to be able to process all three channels the same way.)
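To put rough numbers on "green carries most of the luminosity": the standard-definition luma formula (ITU-R BT.601) weights the channels like this. These are the standard coefficients, nothing specific to the Viera.

[code]
# ITU-R BT.601 luma weights: green contributes over half of perceived brightness.
def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_601(255, 0, 0))   # pure red   -> about 76
print(luma_601(0, 255, 0))   # pure green -> about 150
print(luma_601(0, 0, 255))   # pure blue  -> about 29
[/code]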

The linked grid does not look smooth because each square is 8 bits different from the one above. Here is a gradient with 256 shades of gray. Here is one that is 8 bits different, left to right. The bar in the middle is solid gray, no gradient.

Here is a red picture (255,0,0) with letters in the middle (255,1,1). I can’t see them. Can anyone else, without taking them into an image editing program?
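For anyone who'd rather regenerate that test image than trust my link, a Pillow sketch along these lines should do it (the size, position and wording of the text are just my stand-ins):

[code]
# Solid (255,0,0) background with text drawn in (255,1,1) -- a 1-bit difference
# in the green and blue channels. Requires Pillow.
from PIL import Image, ImageDraw

img = Image.new("RGB", (400, 200), color=(255, 0, 0))
draw = ImageDraw.Draw(img)
draw.text((150, 95), "CAN YOU SEE ME?", fill=(255, 1, 1))  # uses Pillow's default font
img.save("red_test.png")
[/code]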

I personally think 16.8 million is plenty.

I was referring to the vertical banding, i.e. the discernible transition between each square and the one to its right or left. It's especially noticeable among the darker squares.

Of course dithering can make the banding much less obvious, but nevertheless, there are many situations where 256 shades is insufficient. And the problem is worse for cameras, because the real world has more contrast than even the best computer display. Which means you need more bits to cover the wider range of brightness.
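To illustrate what dithering buys you (this is a generic noise-dither sketch of my own, not a claim about how any particular display does it): quantizing a smooth ramp straight down to a few levels gives visible bands, while adding a little noise first breaks them up.

[code]
# Generic noise-dithering sketch: quantize a smooth horizontal ramp to 32 levels,
# with and without dither, and save both so the banding can be compared.
# Requires NumPy and Pillow.
import numpy as np
from PIL import Image

ramp = np.tile(np.linspace(0.0, 1.0, 1024), (256, 1))    # smooth 0..1 gradient

levels = 32
banded = np.round(ramp * (levels - 1)) / (levels - 1)     # hard quantization -> visible bands

noise = (np.random.rand(*ramp.shape) - 0.5) / (levels - 1)
dithered = np.round((ramp + noise) * (levels - 1)) / (levels - 1)
dithered = np.clip(dithered, 0.0, 1.0)                    # keep values in range after adding noise

for name, data in (("banded", banded), ("dithered", dithered)):
    Image.fromarray((data * 255).astype(np.uint8)).save(f"{name}.png")
[/code]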

On my monitor, the banding is fairly obvious.

What’s SMDB?

I didn't necessarily mean R8G8B8 (although if you set Windows to 24-bit, that's what it uses; note also that the "32-bit" setting is just 24-bit colour plus 8 bits of padding).
I just meant that, supposedly, 16.7 million comfortably accommodates the palette of the human eye, but a lot of the manufacturers who claimed that (way back when TrueColor was becoming the norm) don't say it any more.
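To make the "padding" point concrete, here's a sketch of one common way a 32-bit XRGB pixel gets packed (byte order varies by platform; this layout is just an example):

[code]
# One common 32-bit pixel layout: 8 unused padding bits, then 8 bits each of R, G, B.
def pack_xrgb8888(r, g, b):
    return (r << 16) | (g << 8) | b    # top 8 bits stay zero -- pure padding

pixel = pack_xrgb8888(255, 128, 0)
print(f"{pixel:#010x}")                # 0x00ff8000 -- only 24 of the 32 bits carry colour
[/code]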

I don't quite get what you mean by "not even enough to encompass the range of a cheap computer monitor". AFAIK, video cards and Windows use R8G8B8, so why would monitors be made with this redundancy? Is it just future-proofing?