A sort of Ultimate Zoom Lens that would allow you to examine in minute detail small objects at a great distance? If so, what arrangement of lenses would accomplish this?
I am not sure how that would differ from a powerful telescope?
I think the problem is that with the scattering of light over a long distance, there’s no way to get enough resolution to make out tiny items. I’m sure someone better informed than I can give a more detailed (and more correct?) answer.
The limiting factor is going to be light. That’s why big telescopes are, well, big. They need to gather as much light as possible to see small, dim objects. The more magnification you want, the more light gathering capability you need, so the front end of your lens is going to be gigantic.
Secondly, when you want to magnify something that much, any imperfection or instability is exaggerated. The mount has to be completely solid with no vibrations or your image will be a blur. If you want a zoom lens, the mechanism that moves the individual elements has to work flawlessly or you'll end up with an out-of-focus image. Generally, the bigger the zoom range of a lens, the softer it is, which is why fixed focal length lenses are usually much sharper than zooms.
You can build a lens arbitrarily large, but the technical details will make it useless until the state of the art improves.
[QUOTE=Telemark]
The limiting factor is going to be light. That’s why big telescopes are, well, big. They need to gather as much light as possible to see small, dim objects. The more magnification you want, the more light gathering capability you need, so the front end of your lens is going to be gigantic.
[/QUOTE]
The size of the initial aperture also affects the resolution of the telescope, due to the wave nature of light and a phenomenon called diffraction. Basically, to resolve two close objects a certain distance away, you must have
(distance between objects)/(distance from observer to objects) > (wavelength of light used)/(aperture of telescope).
This means, for example, that if you wanted to view two objects the width of a red blood cell apart from a distance of a mile away in visible light (wavelength of about 500 nm), you would need a telescope with an aperture of 100 meters across — about 330 feet. Obviously this is impractical for a hand-held device.
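For the curious, here's a quick sanity check of that arithmetic in Python. The 8 micron cell width and the 1609 m mile are assumed values chosen to reproduce the numbers above:
[CODE]
# Required aperture from the small-angle diffraction criterion:
#   separation / distance > wavelength / aperture
#   =>  aperture > wavelength * distance / separation
wavelength = 500e-9   # m, middle of the visible band
distance   = 1609.0   # m, one mile
separation = 8e-6     # m, roughly the width of a red blood cell

aperture = wavelength * distance / separation
print(f"Required aperture: {aperture:.0f} m ({aperture * 3.281:.0f} ft)")
# Prints: Required aperture: 101 m (330 ft)
[/CODE]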
Other effects, such as distortion from the atmosphere, the dimness of the image, and the instability of the image, can in principle be worked around. But the diffraction limit is as good as you can do for a given wavelength of light.
Actually, you can get around the diffraction limit, too. Integrate over a long enough period of time, and your signal-to-noise ratio will eventually get to a point where you can tell the difference between one object and two. It’ll take an extremely long time if you’re below the diffraction limit, but there are no absolute cutoffs.
There are ways to “beat the diffraction limit”, but simply averaging over long periods of time isn’t one of them. If it was, they would have been using it in astronomy for decades. And in the absence of structured illumination or other such dodges, the diffraction limit cutoff frequency IS an absolute cutoff.
Or have you heard something I haven’t?
Highly unlikely, though it may be that we have different notions of what it means to beat the limit. I would say that an angle is resolvable if you can distinguish between a single point source and a pair of point sources separated by that angle. A single point source, once you pass it through your optical chain, will appear to be a circularly-symmetric fuzzy blob. Two point sources separated by more than the diffraction limit will appear to be two circularly-symmetric fuzzy blobs. Two point sources separated by less than the diffraction limit will appear to be a single elongated fuzzy blob. Now, with low SNR and low elongation, you might not be able to tell the difference between the circular blob and the elongated one, but for any given separation, there is some amount of SNR where you can detect the elongation.
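To make that concrete, here is a minimal 1-D sketch of the idea, with Gaussian blobs standing in for Airy discs; the PSF width, separation, noise level, and frame counts are all assumed for illustration:
[CODE]
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 201)   # detector coordinate, arbitrary units
sigma = 1.0                   # PSF width (Gaussian stand-in for an Airy core)
sep = 0.6                     # pair separation, well below the PSF width

def frame(centers, noise=0.5):
    """One noisy exposure of point sources at the given centers."""
    signal = sum(np.exp(-(x - c)**2 / (2 * sigma**2)) for c in centers)
    return signal / len(centers) + rng.normal(0, noise, x.size)

def rms_width(image):
    """Second moment of the image: elongation shows up as extra width."""
    w = np.clip(image, 0, None)
    mu = (x * w).sum() / w.sum()
    return np.sqrt((((x - mu)**2) * w).sum() / w.sum())

for n in (1, 100, 10000):
    one = np.mean([frame([0.0]) for _ in range(n)], axis=0)
    two = np.mean([frame([-sep / 2, sep / 2]) for _ in range(n)], axis=0)
    print(f"{n:5d} frames: single {rms_width(one):.3f}, pair {rms_width(two):.3f}")
[/CODE]
With one frame the two widths are indistinguishable; after enough frames the pair consistently measures wider (roughly sqrt(sigma^2 + sep^2/4) versus sigma), which is the elongation described above.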
If your goals are more modest, there are telephoto macro lenses out there. Something like the Canon 180mm 3.5 L with a teleconverter would let you take macro photos of insects from a few feet away rather than the extreme close ranges usually associated with macro photography.
You mean mirror, not lens. Lenses have a maximum useful size of about a meter or so. As lenses get larger, their centers get thicker and heavier. But they are only supported around the edge. So they will sag and not hold their figure. This is why all the big astronomical telescopes are reflectors. Mirrors can be supported across their entire diameter.
And as for the state of the art, the largest current telescopes are in the 10 meter range. Several much bigger scopes are being planned; the largest, the European Extremely Large Telescope, will be almost 40 meters.
You seem to think you can beat the diffraction limit by simply improving your signal to noise until you can use math to untangle the diffraction patterns of the two stars. Laughing at the Rayleigh criterion, in which the maximum of one star coincides with the first minimum of the other, and even at the Sparrow criterion, at which the dip between the two disappears. If there is any elongation, you say, then knowing the intensity pattern of your telescope (ideally, an Airy pattern for a diffraction-limited system), you ought to be able to deconvolve the two patterns with good enough software.
But it's a chimera, like Harrison Ford's character pulling infinite detail out of that photograph in Blade Runner. Like him, you can't pull out information that isn't encoded in your picture. And even if your recording medium has infinite resolution, the light transmitted by your telescope won't. The Modulation Transfer Function of the telescope is a measure of the spatial frequency response of the optical system. It may look as if it dies away asymptotically, but it has a hard cutoff at a frequency inversely proportional to your pupil size (which is why telescopes have big apertures). It simply will not transmit spatial frequencies higher than that limit. This means that when your overlapping Airy discs ought to register their offset as an infinitesimal shift in intensity at some close separation, they won't, because the telescope can't transmit spatial information that fine. So even if you take data forever and get a perfectly clean signal to deconvolve, you still couldn't separate the two, because that fine detail of the shape simply won't get through your telescope.
So, no, there is a hard and fast limit to how finely you can resolve two sources, angularly or linearly, and it’s set by the size of your aperture.
This is why they go to such lengths to make the apertures as large as possible, and create synthetic apertures with wide separations between the components, instead of putting their efforts into long exposures to eliminate noise.
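For anyone who wants to see that hard cutoff numerically rather than take it on faith, here is a small 1-D sketch (the rectangular pupil and the grid are my assumptions): the incoherent OTF is the autocorrelation of the pupil function, and the autocorrelation of anything of finite width is identically zero past twice that width.
[CODE]
import numpy as np

# 1-D pupil: transmits over |x| <= a, blocks everything else.
n = 2001
x = np.linspace(-10, 10, n)
a = 1.0                                  # pupil half-width, arbitrary units
pupil = (np.abs(x) <= a).astype(float)

# Incoherent OTF = autocorrelation of the pupil (up to a frequency
# scale factor involving wavelength and focal length).
otf = np.correlate(pupil, pupil, mode="full")
otf /= otf.max()
lag = np.linspace(-20, 20, otf.size)     # autocorrelation lag axis

# Identically zero beyond |lag| = 2a: a hard cutoff, not a tail.
print(otf[np.abs(lag) > 2 * a + 0.05].max())   # prints 0.0
[/CODE]
Past that point the telescope's response really is zero, so there is no signal there for any amount of averaging to dig out of the noise.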
CalMeacham, I agree with Chronos, even though for most applications the point is largely academic. The modulation transfer function doesn't abruptly fall to zero. If you know that the image is two point sources, that your imaging system is well characterized, and that you have plenty of signal to noise, you can resolve the spacing to a small fraction of the Rayleigh criterion.
In fact there is a whole family of super resolution microscopy techniques based on localizing point sources (single molecule emitters) with resolution an order of magnitude or more better than the Rayleigh limit. The more photons you count, the better the resolution.
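A quick sketch of that photon-counting point (the Gaussian PSF and the specific counts are assumptions): each detected photon lands at a PSF-blurred position, so the centroid of a single emitter can be pinned down to roughly the PSF width divided by the square root of the photon count.
[CODE]
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0         # PSF width, in units of the Rayleigh-scale blur
true_pos = 0.1234   # where the single emitter actually sits

for n_photons in (100, 10_000, 1_000_000):
    # Estimate the emitter position 200 times and look at the scatter.
    estimates = [rng.normal(true_pos, sigma, n_photons).mean()
                 for _ in range(200)]
    print(f"{n_photons:>9} photons: error ~{np.std(estimates):.4f} "
          f"(theory {sigma / np.sqrt(n_photons):.4f})")
[/CODE]
Note that this localizes a single emitter you already know is there; it doesn't separate two overlapping ones, which is why localization microscopy arranges for only one emitter per diffraction-limited spot to be "on" at a time.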
I was skeptical that the OTF would go to (and stay) exactly zero outside a finite bound, but CalMeacham’s cite, and another I found, both say the OTF is the autocorrelation of the pupil function, which would be of finite extent. Is this not true? Are there approximations made in the derivation somewhere when they derive that result?
I'm sorry, but you're incorrect. The MTF is indeed zero beyond the cutoff, and counting photons past that point won't help your resolution. You can, by techniques such as structured illumination (which has been used in microscopy), extend your frequency range, but even that will only gain you a factor of less than two. As I say, even if you think you know what your source really looks like (and in astronomy, for any average-sized telescope, or even Mt. Palomar's, a star is effectively a point source), any reconstruction algorithm is going to rely on differences between a single Airy pattern and the actual mixture-of-two pattern, and those differences, even though your algorithm could deconvolve them, are exactly what your telescope cannot deliver.
This is correct.
Even a lens such as this won’t do it:
http://leicarumors.com/2008/12/11/the-most-expensive-lens-in-the-world-its-a-leica-of-course.aspx/
To be clear, I wasn’t saying that you could deconvolve the two sources, but simply that you could determine that what you were seeing was not a single point source. You might still not be able to distinguish between a pair of point sources and a one-dimensional line segment source, for instance (but fortunately sources of that sort are uncommon in astronomy).
But they have. Many long-exposure photos have been taken with telescopes on clock drives. Of course, the resolution of the photo is limited by the smoothness and accuracy of the clock drive.
Furthermore, it’s a common practice even now to take many, many, many photos of the same thing, and combine them to provide better resolution.
Sorry, no cite, but I’m pretty sure these are both very common, just from reading as someone interested at the hobby level in telescopes. (I haven’t had my own telescope since the 80’s, but I’m still interested.)
They've been using clock drives to take long exposures and see faint sources, but that doesn't do a thing to give better-than-diffraction-limited resolution. If it did, as I say, they'd take much, much longer exposures rather than trying to build larger apertures.
And this is what I’m saying – beyond the diffraction limit you can’t tell whether it’s one source or two.
And if you’re not trying to tell by some sort of deconvolution, how would you be doing it?
You’re right, my bad. I carelessly mistook “diffraction limit” to be atmospheric disturbance. Ignorance fought.