Is there a physical limit on camera sensor ISO?

ISO seems to be increasing like the new megapixels. I think there are cameras with >20,000 ISO now. Just wondering - is there a limit, e.g. when individual photons are detected? Around what ISO would that be?

The answer is messy, because effective ISO is not the whole story about sensitivity, usefulness, or photon detection.

There are a number of ways of working out sensitivity and then expressing it as an ASA/ISO rating, and they don't all yield the same answer. Even for film it isn't clear cut.

Both film and digital sensors have a limited dynamic range, and this matters. There is a level of black beneath which there are no deeper blacks, and a level of white beyond which there are no more intense whites. For film this range varies with the particular emulsion, and is complicated because film does not have a linear response: it naturally compresses highlights and shadows, giving it a greater dynamic range in terms of sensitivity than the range of the resultant image suggests. Whereas you might get a range of 7 or 8 stops (doublings) in the final image from film, under very controlled use you can treat film as if it has 10 stops of dynamic range; capturing that range linearly needs 10 bits. Digital systems are close to linear, and their actual dynamic range depends on the range of the analog-to-digital converter and the noise in the system.
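The stops-to-bits relationship above can be sketched in a few lines (the 1024:1 contrast ratio is just the 10-stop example from the text, not a property of any particular film or sensor):

```python
import math

def stops(contrast_ratio):
    """Dynamic range in stops: each stop is a doubling of light."""
    return math.log2(contrast_ratio)

def linear_bits_needed(n_stops):
    """A linear encoding needs roughly one bit per stop."""
    return math.ceil(n_stops)

# 10 usable stops correspond to a 1024:1 contrast ratio,
# which a linear digital encoding needs 10 bits to capture.
print(stops(1024))             # 10.0
print(linear_bits_needed(10))  # 10
```

This is why raw files from linear sensors are commonly 10 bits or more per channel, while an 8-bit output format already implies some tone-curve compression.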

Nonetheless, both film and digital sensors saturate at both ends of the scale. A useful definition of sensitivity (ASA is one) measures how the sensor or film responds to light such that you retain a usable range of both highlight and shadow detail. One definition assumes that the light incident from the subject averages to 18% of the maximum luminance your film/sensor can record. However, this has all sorts of problems.

One issue is noise. All sensors have noise. If very few photons arrive, the quantisation inherent in small photon counts means the image shows noticeable jumps in level, and lacks the smoothness of an image made from more photons. The electronics also have internal noise of their own, which becomes a bigger problem at very low signal levels.

Digital sensors vary their sensitivity by changing the gain on the amplifiers that take the voltage created in the sensor by the incident light and feed it to the analog-to-digital converter. Increasing the gain gives higher sensitivity, but also more noise. If part of the image is very bright, the voltage delivered to the converter may exceed its range, saturating those values. If the gain is low, or parts of the image are very dark, they drop below the least significant level of the converter and all appear as pure black. (Within the low levels there is also random noise; the higher the gain, the more noise.) This is why sensitivity is variable. You don't always just use the lowest sensitivity possible, either: creative choices of shutter speed (to manage motion blur) and aperture (to manage depth of field) often require adjusting the sensor sensitivity.
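A toy model of that gain-before-ADC chain makes the clipping behaviour concrete. All the numbers here (an 8-bit converter, 2-electron read noise) are made up for illustration, not taken from any real camera:

```python
import random

random.seed(0)
ADC_MAX = 255  # pretend 8-bit converter, for simplicity

def digitise(signal_electrons, gain, read_noise=2.0):
    """Amplify a signal (plus Gaussian read noise), then quantise
    and clip it to the converter's fixed range."""
    noisy = signal_electrons + random.gauss(0.0, read_noise)
    code = round(noisy * gain)
    return max(0, min(ADC_MAX, code))

bright = 200
print(digitise(bright, gain=1.0))  # fits within the converter's range
print(digitise(bright, gain=4.0))  # 255: the same patch clips at high gain

# A very dark patch at low gain lands in the bottom few codes and
# can round all the way down to pure black.
print(digitise(0.3, gain=1.0))
```

Raising the gain rescues the shadows but pushes the highlights past the converter's ceiling, which is exactly the tradeoff the paragraph describes.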

Very high ASA ratings push the amplification of the sensor output to extremes, and there will inevitably be lots of electrical noise. In the extreme the camera's output becomes so noisy that it ceases to be much use as a creative tool, and is more akin to a surveillance device.

The quantum conversion efficiency of a CCD or CMOS sensor is actually pretty good. With effort it is possible to make sensors that approach near-perfect conversion (about 90%), so there isn't a lot of room left for improvement; around 30% isn't unreasonable for a typical sensor. The problem with camera sensors is that you can't get close to the ideal for mundane reasons. The entire area of the sensor isn't devoted to collecting light; there are dead areas, and this drops the efficiency. Next, the use of a Bayer pattern of colour filters means that any individual sensor element has a colour filter in front of it, so it loses roughly 2/3 of its photons in order to give you colour. There is no easy way past these issues (Foveon sensors tackle the Bayer pattern, but don't expect any radical change in sensitivity; things are already closing in on being as good as it can get).
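A back-of-envelope multiplication shows how those losses stack up. The specific figures (90% photodiode efficiency, 80% fill factor) are illustrative assumptions, not measurements of any particular sensor:

```python
def effective_qe(diode_qe, fill_factor, bayer_pass=1/3):
    """Combine photodiode efficiency, the fraction of the pixel area
    that actually collects light, and a Bayer filter passing roughly
    one third of the incident photons."""
    return diode_qe * fill_factor * bayer_pass

# Even a 90%-efficient photodiode with 80% fill factor ends up
# detecting only about a quarter of the incident photons.
print(round(effective_qe(0.90, 0.80), 2))  # 0.24
```

The point is that the dominant losses are geometric and chromatic, not in the photodiode itself, which is why better silicon alone can't buy much more sensitivity.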

There is an inherent tradeoff between sensor noise and the size of the sensor element. The bigger the element, the more photons it captures: the quantisation steps from low photon counts are less noticeable, and there is generally more signal to dominate the inherent electrical noise in the system. But bigger sensor elements mean fewer pixels for the same sized sensor, so you get more spatial quantisation, and thus a perceptual equivalent of noise anyway.
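The "more photons, more signal" part follows from photon arrival being a Poisson process, so the shot-noise-limited SNR grows as the square root of the photon count. A minimal sketch with made-up photon counts:

```python
import math

def shot_noise_snr(n_photons):
    """Poisson shot noise has standard deviation sqrt(N),
    so SNR = N / sqrt(N) = sqrt(N)."""
    return n_photons / math.sqrt(n_photons)

small = shot_noise_snr(100)  # 10.0
big = shot_noise_snr(400)    # 20.0: 4x the photons, only 2x the SNR
print(small, big)
```

Quadrupling the element area (and thus the photons collected) gains one stop of SNR, which is why large-pixel sensors look cleaner at high ISO even before electronics are considered.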

ASA does not take the spatial component of this into account. Indeed, it does not take the dynamic range issues into account either. In principle you could have a sensor with a dynamic range of 6 dB (one bit) and a single pixel; you could calculate an ASA rating for it, and it would be insanely high, but otherwise rather meaningless. A measure that does take spatial and noise issues into account is the Detective Quantum Efficiency (DQE).
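DQE compares the squared SNR a detector actually delivers against the squared SNR an ideal, purely shot-noise-limited detector would achieve from the same photons. A short sketch, with illustrative numbers rather than data from any real sensor:

```python
import math

def dqe(snr_out, n_photons):
    """DQE = SNR_out^2 / SNR_in^2, where the ideal input SNR
    for N photons is sqrt(N) (shot-noise limit)."""
    snr_in = math.sqrt(n_photons)
    return (snr_out / snr_in) ** 2

# A detector delivering SNR 15 from 900 photons (ideal SNR 30)
# has a DQE of 0.25: it behaves as if only a quarter of the
# photons had been perfectly detected.
print(dqe(15.0, 900))  # 0.25
```

Unlike an ASA/ISO number, this folds noise (and, in its spatial-frequency form, resolution) into a single efficiency figure, which is why it answers the "how close to detecting individual photons are we" question better than ISO does.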