It makes little difference what sort of “lens” an ant uses. The diffraction limit is independent of whatever optics are in use, or even the absence of optics!
First let's address the issue: do ants have lenses on their eyes? The answer must be “yes, but not conventional-looking lenses, and in particular they lack the image-plane retinas found in a conventional eye.”
To function at all, the eyelets must have some differential sensitivity to light impinging on them from different angles. If this were not the case then they could not see! For example, consider a light bulb placed ten feet away from the ant. The same number of photons per square inch is falling on the surface of each eyelet; the only difference is that the angle at which the light strikes each eyelet is slightly different from its neighbor's. The eyelet therefore must somehow convert this angular information into a signal of some sort. Assuming ants “see” at all, the eye combined with the brain must use this information to back-calculate the source of the light and form a mental image.
In any case this amounts to a “lens”. If you insist on discussing mechanism, then I would speculate that the eyelets are probably tubular light guides with opaque sides, so that, much like looking down the core of a toilet paper roll, the amount of light reaching the end of the tube increases the more directly it points toward the source.
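To make the toilet-paper-roll picture concrete, here is a toy geometric sketch (the tube model and the dimensions are assumptions for illustration only, not real ant anatomy): it just computes what fraction of the tube's floor stays directly lit as the light source moves off-axis.

```python
# Toy model of the speculated "toilet paper roll" eyelet: an opaque tube of
# length L and radius r with a detector at its floor.  Parallel light arriving
# at angle theta off the tube axis directly illuminates only the overlap of two
# circles of radius r whose centers are offset by d = L * tan(theta).
import math

def tube_acceptance(theta_rad, length, radius):
    """Fraction of the tube floor directly lit by light at angle theta."""
    d = length * math.tan(abs(theta_rad))   # lateral shift of the beam at the floor
    if d >= 2 * radius:
        return 0.0                          # beam misses the floor entirely
    # area of intersection of two equal circles (radius r, centers d apart)
    overlap = (2 * radius**2 * math.acos(d / (2 * radius))
               - (d / 2) * math.sqrt(4 * radius**2 - d**2))
    return overlap / (math.pi * radius**2)

# Example: a tube five times longer than it is wide (made-up dimensions)
for deg in (0, 5, 10, 15, 20):
    frac = tube_acceptance(math.radians(deg), length=100e-6, radius=10e-6)
    print(f"{deg:2d} deg off-axis -> {frac:.2f} of floor lit")
```

The point of the sketch is only that such a tube automatically converts angle into signal strength, which is all the argument above requires.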
In any case the mechanism is IRRELEVANT. The diffraction equation is based on the physics of light propagation and information retrieval. It says, in effect, the following:
" given you can collect any and all information about the direction and position of photons passing through (or striking) an aperature of a given size you can predict the common origin of those photons to such and such an accuracy" this limit on the accuacy is given by the blur circle formula I gave above.
It does not matter how you make the measurements. It does not require a lens. It's simply an uncertainty principle (mathematically identical to the Heisenberg uncertainty principle, in fact).
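For reference, assuming the blur circle formula referred to above is the standard Rayleigh form of the diffraction limit, it reads as follows (lambda is the wavelength, D the aperture diameter, L the distance to the object), and the second pair of lines shows the uncertainty-principle version of the same statement:

```latex
% Rayleigh form of the diffraction limit (assumed here to be the
% "blur circle formula" mentioned above)
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
\qquad\Longrightarrow\qquad
\Delta x_{\min} \approx 1.22\,\frac{\lambda\,L}{D}

% Same mathematics as the Heisenberg relation: squeezing photons of momentum
% p = h/\lambda through a width D spreads their transverse momentum, so
\Delta x\,\Delta p_{x} \gtrsim \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta\theta \gtrsim \frac{\lambda}{4\pi D}
```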
That being said, most detection systems do not achieve this theoretical limit. Like I said, some telescopes are crappy ones. But it does set a lower bound on the blur.
Thus ants cannot see BETTER than the limits I gave.
To clarify terms in my last epistle:
The “aperture” we are talking about is the size of the eye. That is, the ant is collecting light that falls on that aperture. (We can also, clouding the issue slightly, discuss each eyelet as an aperture too; this would tell us how much information each eyelet can contribute individually.)
Now if the ant tilts its head or moves, then the aperture shifts over in space. One of two things can now take place. If he looks in a new direction, then he has effectively expanded his field of view. If he looks back toward the same object from a slightly displaced viewpoint, then he gains new information which, when combined with the previous point of view, can better triangulate the position and size of an object. Thus in this second case his effective aperture increases, and theoretically his resolution does too. In the first case his field of view increases.
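As a rough illustration of the second case (the aperture sizes and the head-shift are made-up numbers, and the lambda/B figure is the theoretical best case for fully combining two views, as in radio interferometry, not a claim about what any real eye-brain does):

```python
# Angular resolution of one small aperture versus that same aperture combined
# with a second view displaced sideways by a baseline B, assuming the two
# views can be fully combined (interferometric best case: lambda / B).
wavelength = 550e-9          # green light, metres (assumed)

def single_aperture_limit(diameter):
    return 1.22 * wavelength / diameter      # Rayleigh criterion, radians

def two_view_limit(baseline):
    return wavelength / baseline             # best-case combined resolution

eye  = 0.5e-3    # assumed ~0.5 mm "whole eye" aperture
step = 5e-3      # assumed ~5 mm sideways shift of the head/body

print(f"single {eye*1e3:.1f} mm eye     : {single_aperture_limit(eye)*1e3:.2f} mrad")
print(f"two views {step*1e3:.0f} mm apart : {two_view_limit(step)*1e3:.3f} mrad")
```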
It's very hard for humans to imagine this, since our brains don't seem to work that way. That is, what we “see” in our brains is whatever is currently on our retina. We don't mentally “see” an image that is currently greater than our instantaneous field of view.
We can however imagine that this is possible and indeed we routinely build machines that see this way.
Two examples: we've all seen pictures of star fields and planets, or satellite views of the earth, that are made up of many pictures glued together. This is the first case, extending the field of view. Likewise, many camera shutters are actually slits that sweep across the film: thus at any one instant only part of the film is receiving the image. (Imagine an ant racing along the film, always staying in the region exposed to light.)
Examples of imaging systems that gain greater resolution by adding together views taken from different angles are less common to the layman, but this is done. Examples include CAT scans and MRIs, very long baseline radio astronomy, and synthetic aperture satellite images.
In any case, the underlying points are the following:
1) The smallest resolvable spot is determined by the total aperture.
2) The maximum number of spots you can see at one glance is limited by the number of detectors (pixels) you have.
3) But if you scan your head you can increase the number of pixels (rough numbers in the sketch below).
4) Depending upon how you scan your head, you can either spread those pixels out over a larger area, increasing your field of view at the same constant resolution as before, or
5) you can improve your resolution, effectively concentrating those pixels into a smaller area.
6) In all cases the maximum resolution is defined by the effective aperture size (i.e. the sum of many separate apertures, a moving aperture, or a single aperture) which gathered the photons. It is not defined by the number of detectors or eyelets.
That being said, it may be technically easier to achieve the theoretical maximum resolution through the use of more detectors (eyelets) than by scanning a single eyelet.
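To put rough numbers on points 1) through 3) (every figure here is an assumption chosen just for illustration):

```python
# How many detectors would it take to actually exploit the diffraction-limited
# resolution of a given aperture across a wide field of view in one glance?
import math

wavelength = 550e-9     # metres
aperture   = 0.5e-3     # assumed ~0.5 mm eye
field_deg  = 150.0      # assumed wide compound-eye field of view

spot  = 1.22 * wavelength / aperture     # smallest resolvable angle (radians)
field = math.radians(field_deg)
pixels_needed = (field / spot) ** 2      # roughly one pixel per spot, both axes

print(f"diffraction-limited spot: {spot*1e3:.2f} mrad")
print(f"pixels needed to cover {field_deg:.0f} x {field_deg:.0f} deg at that "
      f"resolution: {pixels_needed:,.0f}")
# An ant has of order a few hundred eyelets (a rough figure that varies by
# species) -- far fewer than this.  Scanning (point 3) can make up some of the
# shortfall, but point 6) says the aperture still caps the resolution.
```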
>We don't mentally “see” an image that is currently greater
> than our instantaneous field of view.
No, but we do “see” an image with more resolution than the retina allows for, apparently by integrating (not in the math sense) the images from small eye movements.
Cool. I believe it. So there you go: the number of detectors does not fix the limit on resolution.
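Here is a minimal, idealized sketch of that principle (treating each detector pixel as a point sampler, which real retinas are not; genuine super-resolution from eye movements needs more processing than this):

```python
# A coarse detector looks at a 1-D scene several times, shifted by a fraction
# of a pixel between exposures.  Interleaving the exposures yields samples on
# a grid finer than the detector itself -- more "pixels" than the hardware has.
import math

def scene(x):
    # fine detail the coarse detector cannot capture in a single exposure
    return math.sin(2 * math.pi * 7 * x) + 0.5 * math.sin(2 * math.pi * 18 * x)

n_pixels = 16      # detector size
n_shifts = 4       # number of sub-pixel-shifted exposures

fine_samples = {}
for s in range(n_shifts):
    shift = s / (n_shifts * n_pixels)        # shift by 1/n_shifts of a pixel
    for p in range(n_pixels):
        x = p / n_pixels + shift
        fine_samples[x] = scene(x)           # one exposure = n_pixels samples

print(f"one exposure    : {n_pixels} samples")
print(f"{n_shifts} shifted views : {len(fine_samples)} samples, "
      f"a grid {n_shifts}x finer")
```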
Human pupils are about 7 mm in diameter when fully dilated. If we had perfect eyes, how well could we see at a distance?
Applying the same formula, then:
one should be able to just resolve two objects 2 meters (about 6 feet) apart at a distance of 2100 meters (about a mile and a quarter),
or, stretching the other direction, at a distance of 1 foot from our eyes we should be able to make out objects separated by about 80 microns, roughly the width of a human hair.
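For what it's worth, here is the same arithmetic done with the plain Rayleigh criterion (the figures quoted above evidently come from a more conservative blur criterion and/or different inputs, so the numbers differ, but the conclusion that follows is the same):

```python
# Diffraction-limited human eye: 7 mm pupil, green light, plain Rayleigh limit.
import math

wavelength = 550e-9   # metres (assumed)
pupil      = 7e-3     # metres

theta = 1.22 * wavelength / pupil            # diffraction limit, radians

for distance, label in [(2100.0, "2100 m (~1.3 miles)"),
                        (0.305,  "1 foot")]:
    print(f"at {label}: smallest resolvable separation ~ "
          f"{theta * distance * 1e3:.3g} mm")

acuity = math.radians(1 / 60)                # typical human acuity, ~1 arcminute
print(f"diffraction limit {theta*1e6:.0f} urad vs typical acuity "
      f"{acuity*1e6:.0f} urad: within a factor of ~{acuity / theta:.1f}")
```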
I'd say these aren't awfully far off (let's say within a factor of two) from my own vision, which isn't perfect.
Which means that human eyes see at a level that is reasonably close to the theoretical maximum.
That is, unaided human eyes could perhaps see no better than about twice as well as they do. To see better would require larger eyes.
The optics of the eye may resolve as you say, but the retina (absent minute eye movements coupled with brain activity) cannot resolve anywhere near enough pixels.
Why do we have to keep revisiting this pixelation red herring? Have I not stomped on this notion already? If the ant moves his head just a smidge he can compensate for having fewer pixels. New view == effectively more pixels. What limits his vision is the size of the eye (i.e. the aperture). As long as his aperture is small, his vision is so blurry that he doesn't need a whole lot of pixels.
But the ant's eye is not a still camera. We do not know (at least, we on this board do not know) enough about the ant's eye-brain mechanism to know whether its model of its visual environment is enhanced (as it is known to be in the human case) by processing successive back-of-eye images.
Neither pixels nor aperture necessarily provide the limit of the combined eye-brain system.
So, to summarize:
Without resorting to eye-brain enhancement tricks, the aperture of an ant's eye sets an upper bound on the quality of an ant's vision. This limit is approximately summarized by the statement that an ant cannot possibly resolve anything smaller than another ant-sized object more than several inches away. This is an upper bound that effectively assumes the ant has an infinite number of pixels in its eye. Since ants have a finite number of pixels, their vision will be worse than this.
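For rough scale (every aperture size below is an assumption; real ant eyes vary widely by species, and the bound quoted above depends on which effective aperture one plugs into the formula):

```python
# Diffraction-limited blur at "several inches" for a few assumed aperture
# sizes, from a single eyelet up to a whole compound eye.
wavelength = 550e-9   # metres
distance   = 0.10     # ~4 inches, metres

for aperture, label in [(20e-6,  "single ~20 um eyelet"),
                        (0.1e-3, "0.1 mm patch of eye "),
                        (0.5e-3, "0.5 mm whole eye    ")]:
    blur = 1.22 * wavelength * distance / aperture
    print(f"{label}: blur ~ {blur*1e3:.2f} mm at {distance*100:.0f} cm")
# Even the most generous case is several times coarser than human acuity at
# the same distance, so the Mr. Magoo conclusion below stands either way.
```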
In short, it's a Mr. Magoo world if you're an ant.