Image sensor ratio

I’m translating a doc, which has this term in reference to its digital camera image sensor: “our own 1:1:4” structure.

It doesn’t explain what these numbers mean. Can anyone help me out here? Thank you!

NM, it is the ratio of green:red:blue. Figured it out.

Really?
More blue sites than red or green?
Generally, the green channel has the most pixels, since the eye is most sensitive to green.

Even if it really were green, 1:1:4 is a weird ratio. According to this site, a typical image sensor has RGB ratios of 1:2:1.

It’s the Sigma Foveon sensor, which has three planes in silicon, one each for blue, green, red. The blue is apparently the top plane, then green, then red. It records color info for each pixel instead of having a horizontal array of filters in one plane (hence it’s better, they say).

Let me know if this provides any further insights, thanks!

Odd coincidence that Foveon is exactly the website I just linked to in my previous post. Sigma is one camera maker incorporating Foveon’s sensor.

But the sensor site brags about capturing complete RGB information at each pixel location, so I’m still not clear on what the 1:1:4 ratio is referring to. Is it something unique to Sigma’s camera, or is it specifically about the Foveon sensor they are using?

I believe that 1:1:4 is the ratio of the thicknesses of the blue:green:red sensor layers of the OP’s particular Foveon sensor.

For the top two sensor layers (blue and green), one wants each to absorb its target color while letting the other colors pass through to the sensor layer below it. So, in the ideal case:
[ul]
[li]the top layer absorbs all of the blue and passes the green and red;[/li]
[li]the middle layer absorbs the green and passes the red;[/li]
[li]the bottom layer absorbs the red.[/li]
[/ul]
However, for the top two sensor layers there is a tradeoff: each should absorb as much of its target color as possible without also absorbing the other colors, which need to pass through to the next layer down.

So, for example, if the blue-detecting layer is too thin, it doesn’t absorb enough of the blue; if it is too thick, it absorbs the blue but also too much of the green and the red.

The bottom (red) layer, however, can be thicker, because there is no layer below it, and thus no need to let any photons get through. The greater thickness also increases its sensitivity, which is necessary because some of the red photons will already have been absorbed in the blue and green layers.

The Wikipedia page on the Foveon sensor has a diagram showing a layer stack of thickness ratio blue<green<red. However, it sounds as though in the OP’s case the particular characteristics of the blue and green layers are such that the stack works best when they are the same thickness. The bottom (red) layer can still be much thicker than the top two, since it is designed to absorb all of the photons that get down that far.

Hence, a blue:green:red sensor layer thickness of 1:1:4.
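The absorption tradeoff described above can be sketched with a toy Beer–Lambert model. The absorption lengths below are made-up illustrative numbers (not real silicon values), chosen only to reflect that silicon absorbs blue light much more readily than red, which is the effect a stacked sensor relies on:

```python
import math

# Toy model of a stacked (Foveon-style) sensor. Absorption lengths are
# in arbitrary units and are ASSUMED for illustration only: blue is
# absorbed in a short distance, red penetrates much deeper.
ABSORPTION_LENGTH = {"blue": 0.5, "green": 1.5, "red": 5.0}

def absorbed_fraction(color, depth_top, depth_bottom):
    """Fraction of incoming photons of `color` absorbed between two depths,
    per the Beer-Lambert law: transmitted fraction at depth d is exp(-d/L)."""
    L = ABSORPTION_LENGTH[color]
    return math.exp(-depth_top / L) - math.exp(-depth_bottom / L)

# Layer thicknesses in the 1:1:4 blue:green:red ratio discussed above.
layers = [("blue", 1.0), ("green", 1.0), ("red", 4.0)]

top = 0.0
for name, thickness in layers:
    bottom = top + thickness
    captured = {c: round(absorbed_fraction(c, top, bottom), 2)
                for c in ABSORPTION_LENGTH}
    print(f"{name:5s} layer (depth {top:.0f}-{bottom:.0f}): {captured}")
    top = bottom
```

In this toy model each layer absorbs more of its own target color than of the other two, but there is noticeable crosstalk (e.g. the top layer also soaks up some green and red), which is exactly why the layer thicknesses have to be tuned.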

Nice, Antonius, but in the diagram I have, blue is shown as “4” and green and red as “1.” Each of the colors is shown as occupying an equal plane in the stack, but the blue plane on top is divided into four parts.

Any further thoughts based on that? I don’t know if the “4” represents thickness, number of sensors, or what…

It is all quite complicated and depends on what each layer does… Why is it 4 times for blue?

Is that to make it particularly useful for taking photos of swimming pools and tropical paradises?
Or does it mean that the top two layers really block out most of the blue light, making things hard for the blue sensor, so that the blue signal remains garbage even with “4 times”?

No, because apparently blue is the top layer. I’m trying to find out myself what it all means…