Can a camera's image sensor image itself through a mirror?

Can the image sensor in a camera photograph itself? I’m sure plenty of us have taken photos of ourselves in mirrors. Sometimes with camera phones, sometimes with real cameras (even film!). But if you boil it down, can the image sensor actually image itself? Or, theoretically, should it come out in the photo as completely black, as it’s supposed to be consuming the photons in order to produce the image in the first place?

Yes, absolutely. Image sensors don’t absorb 100% of the light. Some of it is scattered, some of it is reflected.

Of course manufacturers try to minimize the scattering & reflection. An ideal sensor absorbs 100% of the light that hits it. But in real life, nothing is perfect. Just open up any mirrorless camera and you’ll see that the sensor isn’t perfectly black. And it doesn’t get any blacker when it’s in the process of taking an image.

The actual sensor array isn’t anything close to an array of elements each trying to absorb 100 percent of the light. What’s typically used is a repeating pattern of four elements arranged in a grid: two green sensors, one red, and one blue in each square of four. This pattern repeats over the surface of the sensor.

It’s called a Bayer pattern (named after Bryce Bayer, the Eastman Kodak engineer who created it). Since each pixel in the pattern can only sense one color, all sorts of fancy interpolation algorithms are used to create the final image.
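For anyone who wants to see that layout concretely, here’s a tiny sketch (Python with NumPy; the function name and dimensions are mine, and which corner the pattern starts in varies between sensors) that just tiles the 2×2 red/green/green/blue block over a sensor-sized grid:

```python
import numpy as np

def bayer_mosaic(height, width):
    """Tile the 2x2 RGGB block over a height x width grid (even dimensions assumed)."""
    tile = np.array([["R", "G"],
                     ["G", "B"]])    # two greens, one red, one blue per 2x2 block
    return np.tile(tile, (height // 2, width // 2))

print(bayer_mosaic(4, 6))
# [['R' 'G' 'R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B' 'G' 'B']]
```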

This quickly gets to be a lot more complicated than can be explained in a simple post, so if you want more info, start here:

The long and short of it is that each pixel is going to reject a majority of the light that hits it, since it will only absorb the color that the pixel is designed to capture. The rejected light will be scattered or reflected, and a small amount of it will be absorbed and turned into heat.

So yeah, plenty of rejected light is available for the sensor to take a picture of itself in the mirror.

I think the Bayer filter absorbs most of the “rejected” light, rather than reflecting or scattering it. There are filters that transmit certain wavelengths while reflecting the rest (called dichroic filters), but those are more complicated and expensive than simple absorption filters. Also, if the filter reflects light, some of it will be reflected back towards the sensor by the lens, creating unwanted artifacts, so you want to minimize that as much as possible.

With some exceptions.

Don’t forget that there is a lens in front of the image sensor. If you are trying to take a picture of the sensor, the light is passing through the lens twice…

For a given camera lens, there might be some specific distance at which you have to place the camera from the mirror in order to get an in-focus image of the sensor. Photons from the sensor surface will have to travel through the lens assembly twice, and arrive back properly focused.
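To make that concrete, here’s a toy thin-lens model (purely a sketch: it treats the whole lens assembly as a single ideal thin lens, and every number in it is hypothetical) that traces the sensor → lens → flat mirror → lens → sensor round trip. In this simple model the light only lands back in focus on the sensor when the mirror sits in the plane the lens is focused on:

```python
# Toy model: sensor -> lens -> flat mirror -> lens -> sensor, with the whole camera
# lens treated as one ideal thin lens. All numbers are hypothetical.

def image_distance(f, obj_dist):
    """Thin-lens equation, 1/f = 1/obj + 1/img, solved for the image distance."""
    if obj_dist == f:
        return float("inf")                        # collimated: image at infinity
    return f * obj_dist / (obj_dist - f)

def round_trip_focus(f, sensor_dist, mirror_dist):
    """Where light leaving the sensor refocuses after bouncing off a flat mirror."""
    first_image = image_distance(f, sensor_dist)   # the lens images the sensor out front
    folded_obj = 2 * mirror_dist - first_image     # a flat mirror folds that image back
    return image_distance(f, folded_obj)           # second pass through the lens

f, sensor_dist = 50.0, 55.0                        # 50 mm lens, sensor 55 mm behind it
focus_plane = image_distance(f, sensor_dist)       # this lens is focused 550 mm out
for mirror_dist in (400.0, focus_plane, 800.0):
    print(mirror_dist, round(round_trip_focus(f, sensor_dist, mirror_dist), 1))
# 400.0 62.5   -> misses the sensor plane
# 550.0 55.0   -> back in focus exactly at the sensor
# 800.0 52.5   -> misses again
```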

If you really wanted to see the sensor, there’s some arrangement of lenses you could put in front of the mirror to (mostly) compensate for the camera’s own lens.

Stand inside a large camera obscura such as any of these. You are the sensor. The outside is imaged through a relative pinhole. Where outside do you place a mirror to view yourself?

While reading this thread, I tried something similar: photographing the sensor in one phone using another phone. I didn’t spend a lot of time on it, but I had no success. The sensor is way too tiny, and distortion from the lens (and glass cover) prevented me from being able to discern anything.

Pointing a camera hooked onto a 4" telescope at the sensor in question from a distance of 20 feet is far more likely to get you a decent image of a photo sensor on a second camera. Focus will be a major issue for camera sensor selfies. There likely is a solution, but it’ll be hard to find, and quite likely low resolution to boot. There’s too much optics involved for an easy answer.

Correct me if I’m wrong, but I think each individual four-sensor array is going to record one pixel. A pixel is one colored dot. So an individual sensor array can’t be resolved as four different sensors.

So what resolution do you consider to be “imaging itself”?

In terms of raw data from the imager, any given pixel will only contain a value for red, green, or blue. This raw data is converted to a usable image through a process called demosaicing. In a very simplified explanation: assume the pixel at location (x,y) is beneath a red filter. Its raw value is the red component of that pixel. The blue and green values are interpolated from the neighboring blue and green pixels. Similarly, raw green pixels interpolate red and blue from the neighboring red and blue pixels, and blue pixels interpolate red and green from the neighboring red and green pixels.
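As a toy illustration of that interpolation, here’s a crude bilinear sketch in Python (function names are mine, and it’s far simpler than real demosaicing algorithms, which do much smarter things around edges and colour boundaries):

```python
import numpy as np

def color_of(r, c):
    """Filter colour at pixel (r, c), assuming an RGGB layout starting at the top-left."""
    if r % 2 == 0:
        return "R" if c % 2 == 0 else "G"
    return "G" if c % 2 == 0 else "B"

def demosaic_bilinear(raw):
    """Crude bilinear demosaic of a 2-D Bayer mosaic into an (h, w, 3) RGB array."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    planes = {"R": 0, "G": 1, "B": 2}
    for r in range(h):
        for c in range(w):
            # Each pixel keeps its own measured value for its own colour...
            rgb[r, c, planes[color_of(r, c)]] = raw[r, c]
            # ...and borrows the average of same-coloured neighbours for the other two.
            for colour, plane in planes.items():
                if colour == color_of(r, c):
                    continue
                neighbours = [raw[rr, cc]
                              for rr in range(max(r - 1, 0), min(r + 2, h))
                              for cc in range(max(c - 1, 0), min(c + 2, w))
                              if color_of(rr, cc) == colour]
                rgb[r, c, plane] = sum(neighbours) / len(neighbours)
    return rgb

mosaic = np.arange(16, dtype=float).reshape(4, 4)   # stand-in raw data
print(demosaic_bilinear(mosaic).shape)              # (4, 4, 3)
```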

To get back to the OP: describing the sensor as “consuming” photons is a little bit misleading. There is no process going on which actively sucks in photons to make the image; the photons are absorbed at the same rate by the image sensor whether the sensor is running or not. Photons absorbed by silicon form electron-hole pairs via the photoelectric effect. When the sensor is actively operating, those electrons are collected, and the number collected over a fixed period of time is converted to a digital signal proportional to the intensity of the incident light. If the sensor is not operating (or even if you have a plain piece of silicon), the electrons and holes eventually recombine.

In an ideal sensor every photon which hits the sensor will be converted to an electron. In real sensors this is not the case. The percentage of photons which are successfully converted to collected electrons is called the quantum efficiency (qe). What happens to those other photons? They may be absorbed in areas between pixels so the generated electrons are lost; they may be absorbed or reflected by other circuit features built into the same silicon; they may be reflected by the protective oxide layer that coats the silicon surface; or in the case of a color sensor they may be absorbed or reflected by the colored microlenses which cover the actual sensors.
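To put rough numbers on it, here’s a back-of-the-envelope sketch of the photon-to-signal chain; every value in it is an invented placeholder, not a spec for any real sensor:

```python
# Back-of-the-envelope photon-to-signal sketch. Every number here is an
# illustrative assumption, not a specification for any real sensor.

photons_incident   = 10_000   # photons hitting one pixel during the exposure
quantum_efficiency = 0.55     # fraction converted to *collected* electrons
full_well          = 30_000   # electrons a pixel can hold before saturating
gain_e_per_dn      = 2.0      # electrons represented by one digital count

electrons = photons_incident * quantum_efficiency            # 5,500 e- collected
digital_number = min(electrons, full_well) / gain_e_per_dn   # 2,750 counts

# The other ~4,500 photons are the "lost" ones described above: absorbed between
# pixels, absorbed or reflected by on-chip circuitry and coatings, or (on a
# colour sensor) blocked by the filter over that pixel.
print(electrons, digital_number)
```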

What do you see when you look at a sensor? For the sort of sensor in your phone, you will see that array of colored microlenses. Since the whole point of the colored filter array is to not absorb selected wavelengths of light (a red pixel should, by definition, have terrible qe at blue and green wavelengths), a substantial amount of light is reflected, so the array should be clearly visible.

What about a very high qe monochrome (no color filter) sensor? I have one sitting on the desk in front of me. To the naked eye it looks like a featureless black square. With the correct instruments I could determine that it is a very dark grey. With the correct optics I could arrange for it to take a picture of itself, and I would see a featureless very dark grey square. Because that’s what it looks like…

And in case anyone’s wondering, black and white CCDs typically have very high quantum efficiencies, in the range of 90% or more even for the cheap off-the-shelf ones. Which is why Marvin’s sensor looks very close to black.

I was thinking if you could see in through the lens, past the open shutter, past the anti-aliasing filters, it might be like staring into the abyss :smiley:

You’ll smudge nose oil on the film.

From a theory standpoint, why would or wouldn’t the camera’s sensor change in appearance when it was actively taking an image, as opposed to powered off? Or a regular photovoltaic panel when it is under load vs. no load?

I’m working here with the mental analogy of a hand-cranked generator that is easy to turn when there is no load, but becomes harder to turn when you put a light bulb into the circuit. Is the sensor or solar panel absorbing the same amount of light whether the circuit is open or closed? Does it just emit less of the absorbed light as IR if it is under load?

Please educate me…

Sensors and photovoltaic panels work the same way - a photon is absorbed by the semiconductor, giving it enough energy to generate an electron-hole pair. The rate of absorption is constant, regardless of what happens to the electrons and holes generated. I can’t think of a good explanation of why it’s constant; I just never came across any mention of it being a variable (and I’ve worked with various types of semiconductor detectors throughout my career).

This isn’t adding much new, but to summarize.

Last question first: no, which is why image sensors are perfectly photographable with a different camera. I don’t think anyone has bothered imaging one while it’s actively registering an image, because it wouldn’t make a noticeable difference.

Main question: Probably not, because as has been mentioned already, you need the optics of the camera. And also, to get a clear image the sensor needs to be enclosed in a way that eliminates ambient light.

So we get a closed loop. If we pick the point to be imaged as the starting point:

  1. light has to leave it
  2. go backwards through the optics
  3. reflect off the mirror
  4. go back through the optics
  5. get registered

And there has to be enough light in step one to compensate for all the loss in steps 2-5.
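One way to get a feel for whether step 1 can win against the losses in steps 2-5 is a crude throughput budget. All of the factors below are invented placeholders, not measured values:

```python
# Crude round-trip light budget for the "sensor photographs itself" loop.
# Every factor is an invented placeholder for illustration.

sensor_reflectance = 0.30   # step 1: fraction of incident light the sensor sends back out
lens_transmission  = 0.90   # steps 2 & 4: per-pass transmission of the lens assembly
mirror_reflectance = 0.95   # step 3: an ordinary household mirror
sensor_qe          = 0.50   # step 5: fraction of returning photons actually registered

round_trip = (sensor_reflectance        # light leaves the sensor
              * lens_transmission       # out through the optics
              * mirror_reflectance      # off the mirror
              * lens_transmission       # back through the optics
              * sensor_qe)              # registered by the sensor

print(f"{round_trip:.3f}")   # ~0.115: with these made-up numbers, roughly a tenth of
                             # the light that lit the sensor comes back as signal
```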

Is that possible with current technologies? Or would the sensor, even if step one were flashing it with an ultrashort pulse, still be “charged” by that pulse by the time it returned from the mirror?
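On the timing part of that question, a quick back-of-the-envelope (the mirror distance is picked arbitrarily) suggests the round trip takes nanoseconds, which is tiny compared with even a fast shutter speed:

```python
# Round-trip time of light to a mirror 2 m away (distance chosen arbitrarily).
SPEED_OF_LIGHT = 3.0e8            # m/s, rounded
mirror_distance = 2.0             # metres, illustrative

round_trip_time = 2 * mirror_distance / SPEED_OF_LIGHT
print(f"{round_trip_time * 1e9:.1f} ns")   # ~13.3 ns

# Even a fast 1/8000 s shutter keeps the sensor integrating for 125,000 ns,
# so the reflected pulse arrives while the exposure is still in progress.
print((1 / 8000) / round_trip_time)        # ~9,375 round trips per exposure
```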