My question is prompted by the Digital SLR question. Lenses basically bend light. Can their action be emulated by a computer? If so, why make lenses anymore? They are expensive and involve many critical grinding and polishing steps, which are easy to screw up. Can a suitably fast microprocessor emulate conventional lenses and do away with them?
How are you proposing to bend light using a microprocessor? I’m not sure I understand how that will replace a lens. Computers are definitely used to simulate the bending of light while designing lenses, but how would they actually do it in a camera?
That is a really good question, and I look forward to the experts' answers. I know of software that basically acts like a lens, such as photo-stitching software and software that corrects barrel distortion and other aberrations. I don't think you are going to be able to get away from needing a lens for magnification; otherwise you wouldn't need telescopes.
I don’t understand what you’re driving at. How do you expose an image to the camera without a focusing agent? Without a lens, all you’ve got is random photons bouncing around. The image must be focused onto a two-dimensional plane in order for it to be recorded, either on film or by a digital sensor.
You can simulate lenses in raytracing programs - but that’s not really relevant here - it’s hard to get an image of a real-world object without using something to focus the incoming light on an image sensor. Without the lens, light enters the sensor from all directions to every pixel, so you get a fairly uniform grey image.
Many digital cameras offer a combination digital/optical zoom feature, where you can “zoom” in or out either by using the zoom lens, or the camera’s computer. The digital zoom is much less desirable, since it is really the equivalent of cropping the picture and then enlarging it to the original size. This gives more noticeable pixellation than the optical zoom.
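To make that concrete, here is a minimal sketch of what a digital zoom amounts to, assuming Python with the Pillow library and a hypothetical file name; the camera's firmware does essentially the same crop-and-resample.

from PIL import Image

# Fake a 2x "digital zoom": crop the central quarter of the frame and
# resample it back up to the original size. "photo.jpg" is a made-up file name.
img = Image.open("photo.jpg")
w, h = img.size
zoom = 2

left, top = (w - w // zoom) // 2, (h - h // zoom) // 2
crop = img.crop((left, top, left + w // zoom, top + h // zoom))

# Resampling invents no new detail, which is why digital zoom looks pixellated
# next to an optical zoom that actually gathers finer detail through the lens.
zoomed = crop.resize((w, h))
zoomed.save("photo_digital_zoom.jpg")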
Actually, I suppose it would be possible to have a pinhole camera with a very sensitive CCD, or to construct a sensor array that had a grid of pinholes over it, one for each pixel. Alignment of these would be a bit of a bugger, but I'm sure it could be done. Still, that's not using a computer to simulate a lens (I don't understand where the OP is going with that).
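For what it's worth, a pinhole needs no computation at all; here is a minimal sketch of the standard pinhole projection, with made-up numbers, just to show the geometry doing the work.

# Standard pinhole-camera geometry: a point (X, Y, Z) in front of the hole
# lands at (-f*X/Z, -f*Y/Z) on a sensor a distance f behind it.
# The focal distance and sample points below are made up.
def pinhole_project(point, f=0.01):   # f = 10 mm hole-to-sensor distance
    X, Y, Z = point
    return (-f * X / Z, -f * Y / Z)   # the minus signs give the inverted image

for p in [(0.5, 0.2, 2.0), (0.5, 0.2, 4.0)]:
    print(p, "->", pinhole_project(p))
# Moving the point twice as far away halves its image size; no lens and no
# processor involved, just a very dim image unless the sensor is sensitive.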
Your computer needs something to work with. If you just put a sensor in place, it will gather the signals from all the light waves impinging on it: the incoming light, multiplied by the response function for each wavelength (and polarization, and other factors), integrated over the sampling time, giving you ... a number (of volts, or whatever). One data point like that doesn't have the information to generate a picture. If you put up a whole bunch of sensors, you'll get a bunch of such numbers, but most of them will be pretty close to the same number, and that won't get you anywhere. The way a camera (or your eye) works is to produce an image that coincides with the locations of those detectors, so that when you have a bunch of them you can build up an image out of the mosaic.
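A toy numerical version of that argument, with a made-up scene, shows why a bare array gets you nowhere.

import numpy as np

# A made-up 2-D "scene" of radiances we would like to photograph.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

# Bare sensor array, no lens: in the ideal limit every element collects light
# from every scene point, so every element reports (roughly) the same number.
bare = np.full((64, 64), scene.sum())

# With a lens (or pinhole) forming an image on the array, each element sees
# essentially one scene point, so the mosaic of numbers IS the picture.
imaged = scene.copy()

print(bare.std() / bare.mean())      # 0.0: a featureless grey frame
print(imaged.std() / imaged.mean())  # about 0.58: real contrast to work with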
There are other ways of gathering the information needed for a scene – you can get the info from the Fourier transform plane, or you can record an interference pattern, as a hologram does. But it seems to me that you have to have something else present, in addition to your array of sensors or sheet of photographic film, in order to gather the information you need to create a picture: a lens for straight imaging or Fourier transforming, or a reference beam for holography (not to mention a pretty well-controlled environment).
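A quick toy calculation, with a made-up test pattern, shows what that "something else" is buying you: the full complex field at the Fourier transform plane gets the picture back, but the intensity a bare detector would record there does not.

import numpy as np

img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0                  # a simple bar as the "object"

F = np.fft.fft2(img)                     # field at the Fourier transform plane

from_full_field = np.fft.ifft2(F).real            # amplitude AND phase kept
from_magnitude = np.fft.ifft2(np.abs(F)).real     # phase thrown away

print(np.allclose(from_full_field, img))   # True: the image comes back
print(np.allclose(from_magnitude, img))    # False: the picture is gone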
In fiction, people usually gloss over this part. Larry Niven was pretty vague about what went into those "nets" sprayed onto surfaces in The Ringworld Throne that made them capable of imaging, but I'll bet they'd have to be much more than just an array of detectors. I can't for the life of me see how the swarm in Michael Crichton's "Prey" saw – they were a set of uncoordinated individual sensors.
Allow me to join the chorus of people not understanding the OP. You’ll need to focus the incoming light somehow, so you need to pass it through some physical medium. Right now, good glass lenses aren’t really that expensive or difficult to make. They’re extremely fast and don’t hinder the passing of available light. Any other scheme would have to be an improvement over this, and not consume a huge amount of space or power in the process.
Oddly enough, in a recent thread about Sudoku, Sleel posted a link to an article about using a mathematical method to decode the information generated by X-ray diffraction, giving a viewable image. Apparently the missing ingredient CalMeacham was thinking of is the phase of the incoming light waves.
Suppose you had an array of sensors, millions of them, each of which could record the phase and wavelength of the light impinging on it. The computer takes the outermost picture elements and changes their phase by a fixed amount, let us say 5 degrees, an amount which decreases as you move toward the center of the array. In the center, the phase change is zero. Now, if the computer could convolve all of these waves, you should be able to generate a magnified image (or hologram). Would this work?
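For what the idea is worth, here is a toy sketch of the step being imagined, under the big assumption that every element really could hand the computer the complex field (amplitude and phase). In that case the "focusing" is just a Fourier transform done in software, which is what a lens does physically; the replies below explain why the assumption itself is the hard part.

import numpy as np

N = 256
x = np.arange(N)
X, Y = np.meshgrid(x, x)

# A plane wave arriving at a slight angle: a pure linear phase ramp across the array.
tilt_cycles = 10                      # made-up tilt: 10 phase cycles across the aperture
field = np.exp(2j * np.pi * tilt_cycles * X / N)

focal_plane = np.fft.fft2(field)      # numerical stand-in for the lens-plus-propagation step
spot = np.unravel_index(np.argmax(np.abs(focal_plane)), focal_plane.shape)
print(spot)                           # (0, 10): the tilt maps to a focused spot

# The catch, as the replies below point out: real detectors record intensity
# only, and daylight is broadband and largely incoherent, so the phase this
# step depends on is not available to measure directly.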
I want what he’s smoking.
It’s a lot more complicated than that.
If I have monochromatic coherent light coming in, then I can take the phase and amplitude information and reconstruct a hologram.
But light in nature isn’t monochromatic and coherent. It’s got a range of wavelengths, and it’s partially coherent, or incoherent.
Good luck trying to reconstruct a hologram from incoherent light.
There’s a reason they either use lasers or extremely short distances between the object and sensor.
Why do you say that? That is sort of how phased-array radar works: a bunch of antennas where you can measure the phase and frequency of the incoming radio waves. By changing how you combine the signals, you can steer the direction of the beam without physically moving the array.
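Here is a minimal sketch of that steering trick, with made-up numbers (16 elements at half-wavelength spacing), in case it helps; the same arithmetic works on receive by shifting each element's signal in software before adding.

import numpy as np

n_elem = 16
d = 0.5                                 # element spacing in wavelengths
steer = np.radians(30)                  # desired beam direction

k = 2 * np.pi                           # wavenumber, in units of 1/wavelength
elem_phase = -k * d * np.arange(n_elem) * np.sin(steer)   # progressive phase shift

angles = np.radians(np.linspace(-90, 90, 721))
# Array factor: coherent sum of all elements toward each look angle.
phase_vs_angle = k * d * np.outer(np.arange(n_elem), np.sin(angles))
af = np.abs(np.exp(1j * (phase_vs_angle + elem_phase[:, None])).sum(axis=0))

print(np.degrees(angles[np.argmax(af)]))   # ~30 degrees: the main lobe has been steered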
I have read papers about making cameras with only a few simple sensors, using a method of vibrating the lens and sampling the sensors many times to create a picture that has many more pixels than sensor elements.
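A toy version of that trick, with a made-up scene and an idealised point-sampling sensor (real pixels average over their area, so real systems also need some deconvolution):

import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((128, 128))      # the fine detail we would like to capture

factor = 4                          # the sensor only has (128/4) x (128/4) elements
frames = {}
for dy in range(factor):
    for dx in range(factor):
        # one exposure with the sampling grid nudged by (dy, dx) fine pixels
        frames[(dy, dx)] = scene[dy::factor, dx::factor]

recon = np.empty_like(scene)
for (dy, dx), frame in frames.items():
    recon[dy::factor, dx::factor] = frame   # interleave the shifted frames

print(np.allclose(recon, scene))    # True: many coarse frames -> one fine picture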
I know zip about this sort of stuff, and I can’t quite follow what this means, but I googled “phased array radar” (which I know a teeny tiny bit about) and ended up finding phased array optics.
I don’t follow this, either, but it seems like there might be some relationship there. Or maybe not. Maybe someone smarter than I can explain if there is or isn’t.
That is all.
Holographic photography does not use lenses and does not need incoming light to be focused.
No, but it requires other things – either a coherent reference beam (also coherent with the illumination), or else an object placed close enough to the film or sensor that the illumination and reflected light can interfere with itself. I allude to all this above – you can't just go out with a camera and snap a hologram.
By the way, it's not really correct to say that, even with monochromatic coherent light, you need amplitude and phase info to take a picture. That would be true if you wanted to reconstruct the wavefront (from which you could get a picture), but you can take a recognizable and perfectly satisfactory picture with a great deal less information – that's what a traditional camera or a CCD camera does, and why I was myself so vague – we don't need to reconstruct a wavefront to get a picture. But we do need more information than a sensor alone, or even an array of sensors, would give us.
And again, most sensors (photographic film and CCD planes included) respond to intensity, not amplitude. They don't record phase. It's far from trivial to record phase directly with detectors at visible-light frequencies. Holograms effectively do it indirectly, by recording the intensity of the interference between a known reference beam and the wavefront of interest.
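The arithmetic behind that indirect trick, with made-up complex amplitudes for the reference wave R and object wave O at one point on the plate:

import numpy as np

O = 0.6 * np.exp(1j * np.radians(40))   # object wave: amplitude 0.6, phase 40 degrees
R = 1.0 * np.exp(1j * 0.0)              # reference wave: amplitude 1, phase 0

intensity = abs(R + O) ** 2             # what the plate or sensor actually records
# Expanding: |R|^2 + |O|^2 + 2*|R|*|O|*cos(phase difference)
expanded = abs(R)**2 + abs(O)**2 + 2 * abs(R) * abs(O) * np.cos(np.radians(40))
print(np.isclose(intensity, expanded))  # True

# The cosine cross-term is how the phase sneaks into an intensity-only
# recording -- and it only exists if R and O are mutually coherent.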
I’m going to have to disagree with CalMeacham on this one (wow), but with reservations. I do not have too much time to elaborate, being at work, but I will do my best.
You can do what ralph124c said and back out the electric field distribution at some arbitrary plane of an optical system, using deconvolution techniques to recover the phase at the camera plane, but these techniques usually require some a priori knowledge of the system (including the point spread function) to get any kind of meaningful answer. These techniques are used, as Ravenman found, in phased array optics, or more specifically in interferometric telescopes. These work because the light from stars is spatially coherent due to the long distances the light has traveled (light from an extended emitter gains spatial coherence as it travels; even light from the sun is coherent over 8 microns or so, IIRC, which is why holography and interference patterns were known of before lasers were invented). These methods are also used in adaptive optics (to remove random phase aberrations due to atmospheric effects). This is not my field, but if you are really interested a good place to start would be Goodman's Statistical Optics or even Born and Wolf (though my guess would be that Goodman would be a better resource). There are also some good papers out there on deconvolution techniques that could probably be applied without too much work: the Fienup algorithm (1978), blind iterative deconvolution (for example Gerchberg and Saxton 1972, Ayers and Dainty 1988), and more recently the Pixon method (Puetter and Yahil 1999).
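For anyone curious, here is a toy version of the Gerchberg-Saxton iteration mentioned above: it finds a phase consistent with intensity-only measurements in two planes. The "measurements" are synthesised from a made-up field, and this is a textbook sketch, not a description of Eyer8's system.

import numpy as np

rng = np.random.default_rng(2)
N = 64
true_field = np.exp(1j * 2 * np.pi * rng.random((N, N)))   # unknown phase, unit amplitude

src_amp = np.abs(true_field)                # magnitude known at the source plane
far_amp = np.abs(np.fft.fft2(true_field))   # magnitude known at the Fourier plane

def far_error(f):
    # how badly the current guess violates the Fourier-plane measurement
    return np.abs(np.abs(np.fft.fft2(f)) - far_amp).mean() / far_amp.mean()

# Start from a random phase guess and bounce between the two planes,
# imposing the measured magnitude in each while keeping the current phase.
field = src_amp * np.exp(1j * 2 * np.pi * rng.random((N, N)))
print(far_error(field))                     # large starting mismatch
for _ in range(200):
    F = np.fft.fft2(field)
    F = far_amp * np.exp(1j * np.angle(F))            # Fourier-plane constraint
    field = np.fft.ifft2(F)
    field = src_amp * np.exp(1j * np.angle(field))    # source-plane constraint
print(far_error(field))                     # typically far smaller after iterating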
The real question, however, is why would you want to do this? With incoherent imaging, using a lens to get the imaging condition is too easy not to take advantage of. If you are doing coherent imaging (which is essentially what I do for a living with a holographic system), the techniques are very handy, but they require a very complex channel for data processing and some knowledge of what you are trying to accomplish (we were trying to correct a degradation of the PSF due to a known aberration in the system). I and my coworkers have investigated these techniques, but we have avoided them because of this complexity and limited functionality.
Eyer8 – I ain't sayin' you can't recover phase (Dainty, whom you cite, was one of my professors), but it's certainly not trivial, and without a doubt it isn't a reasonable basis for a system of photography – you need gobs of computing time and capacity. Even the idea of recovering the wavefront, as I say, is ludicrous overkill. You can get a perfectly good photo with a heckuva lot less information.
If your goal is to take photos using a lensless detection system and easily portable equipment, I think you’ve got a serious uphill battle against a cheap lens.
I completely agree.