Vision and wavelength/frequency

What, if anything, is the relationship between visual resolution (be it human eye or electronic sensor) and the wavelength and/or frequency of a given illumination source?

Asked another way, why is it that we can see images in the visible, IR (with sensors), X-ray, etc. parts of the EM spectrum, but not in the ultraviolet or microwave ranges?

UV photography is less common than IR, but it can be done.

That’s really two distinct questions.

  1. Resolution: The resolution of an optical imaging system like the human eye is specified by the minimum angular separation below which two point sources merge into one. Physics places a limit on this that is determined by the ratio of the wavelength to the diameter of the aperture (the pupil of the eye, for example): the smaller that ratio, the finer the detail you can resolve. Thus, it pays to have a big aperture (like a large telescope or a telephoto lens) and a small wavelength (a rough numerical sketch follows this list). Under normal circumstances, you are also limited to a spatial resolution not much smaller than the wavelength. The spacing of the sensing elements (pixels in a digital camera, or rod and cone cells in the eye, for example) places another limit on resolution. In principle, you should be able to get the best resolution using X-rays or gamma rays. The problem with those is the difficulty of making lenses or curved mirrors.

  2. We can make imagers pretty much anywhere we like in the electromagnetic spectrum, including the ultraviolet and microwave ranges. If you’ve been through an airport recently, someone may have looked at a low-resolution image of your body made with microwave radiation, which passes easily through clothing but bounces off your skin. Human vision evolved in the visible part of the spectrum for several reasons, including: the peak of the Sun’s output is in the visible; the energy needed to split a water molecule is comparable to the energy of a visible-light photon; and biological materials can be transparent or opaque and have the right optical properties to form lenses. Other organisms (snakes, for example) can detect infrared and even form crude IR images.
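
To put rough numbers on the resolution argument in point 1, here is a minimal Python sketch of the Rayleigh diffraction limit, θ ≈ 1.22 λ/D. The pupil and aperture diameters are ballpark assumptions for illustration, not measured values:

```python
import math

def rayleigh_limit_rad(wavelength_m: float, aperture_m: float) -> float:
    """Smallest angular separation (in radians) a circular aperture can resolve."""
    return 1.22 * wavelength_m / aperture_m

# Ballpark figures (assumptions for illustration only).
cases = {
    "human eye, green light (3 mm pupil)": (550e-9, 3e-3),
    "human eye, near-UV (3 mm pupil)":     (350e-9, 3e-3),
    "10 cm camera lens, green light":      (550e-9, 0.10),
    "2.4 m telescope, green light":        (550e-9, 2.4),
}

for name, (wavelength, diameter) in cases.items():
    arcsec = math.degrees(rayleigh_limit_rad(wavelength, diameter)) * 3600
    print(f"{name:38s} {arcsec:10.3f} arcsec")
```

Bigger apertures and shorter wavelengths both shrink the limit, which is the point made in item 1.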

Imaging can be done at pretty much any wavelength, but some are more difficult than others. At the high-energy end, the biggest problem is usually focusing the image: it’s easy to find a material that will outright absorb, say, a gamma ray, or one that a gamma ray will pass straight through as if it weren’t there, but it’s hard to find a material that will refract a gamma ray (to make a lens) or reflect it (to make a mirror). So you have to either use a pinhole camera (and either have a really bright source or a really long exposure time), or use a carefully designed system of baffles and reconstruct the image mathematically. The RHESSI satellite uses the latter method to produce images of the Sun (example image). In the X-ray range, it’s possible to make mirrors, but they only work at very glancing (grazing-incidence) angles, so the mirrors have to be designed very differently.
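
As a toy illustration of the "baffles plus mathematical reconstruction" idea (this is not RHESSI’s actual rotating-collimator scheme, just a simplified 1-D coded-aperture sketch with made-up numbers): each open element of a random mask casts a shifted copy of the scene onto the detector, and correlating the recorded pattern with a balanced copy of the mask approximately collapses each source back into a sharp peak.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Toy 1-D "scene": three bright point sources on a dark background.
scene = np.zeros(N)
scene[40], scene[140], scene[200] = 3.0, 2.0, 2.5

# Random mask of open (1) and opaque (0) elements, roughly half open.
mask = (rng.random(N) < 0.5).astype(float)

# Each open element projects a shifted copy of the scene onto the detector,
# so the recorded pattern is (cyclically) the scene convolved with the mask.
recorded = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

# Decode by correlating with a balanced mask (+1 open, -1 opaque); for a
# random half-open mask this approximately restores each point source.
decoder = 2.0 * mask - 1.0
recon = np.real(np.fft.ifft(np.fft.fft(recorded) * np.conj(np.fft.fft(decoder))))

print("true source pixels    :", np.flatnonzero(scene))
print("brightest recon pixels:", np.sort(np.argsort(recon)[-3:]))
```

Real instruments solve a much harder version of this inverse problem, but the principle is the same: the detector never records a focused image, only a coded shadow that is undone in software.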

For the low-energy end of the range, it’s still usually difficult to make lenses, but “mirrors” are no problem: even a loose mesh of metal, with very forgiving tolerances, makes an effective radio mirror. There, though, the problem is the Rayleigh criterion: your antenna needs to be very large relative to the wavelength in order to make a sharp image. Of course, there’s no theoretical limit to how big an antenna can be, and via interferometry you can combine multiple radio telescopes separated by some distance to get the same effect as a single telescope as large as the separation, and so get images that way too (sample image from the VLA).
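
To see why sheer size matters at radio wavelengths, here is the same kind of back-of-the-envelope estimate (θ ≈ λ/D, with the interferometer baseline standing in for D). The 21 cm hydrogen-line wavelength and the roughly 36 km maximum VLA baseline are real figures; the dish diameters are just round numbers for illustration:

```python
import math

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rough diffraction-limited resolution, theta ~ wavelength / aperture, in arcseconds."""
    return math.degrees(wavelength_m / aperture_m) * 3600

wavelength = 0.21  # 21 cm hydrogen line

print("25 m dish          :", round(resolution_arcsec(wavelength, 25.0)), "arcsec")
print("100 m dish         :", round(resolution_arcsec(wavelength, 100.0)), "arcsec")
print("36 km VLA baseline :", round(resolution_arcsec(wavelength, 36_000.0), 1), "arcsec")

# Dish diameter needed to match the eye's roughly 1-arcminute resolution at 21 cm:
one_arcmin = math.radians(1.0 / 60.0)
print("dish for ~1 arcmin :", round(wavelength / one_arcmin), "m across")
```

A single dish would have to be the better part of a kilometer wide to match the angular resolution of the unaided eye at 21 cm, which is why interferometry is so attractive.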

Oops, I read the responses a few days ago and was too fascinated to respond.

So, belatedly, thank you, especially JWT and Chronos.