And focus is one difficulty we won’t have here. When your entire optical system has to fit into something the size of an insect, your focal length will necessarily be much less than the distance to whatever you’re looking at, so you can just set your focal distance to infinity and glue it in place.
We can use a thought experiment to describe the problem. Light has wavelengths, which we can think of as sizes, like grains of sand. Red is one size. Green another. Blue a third, etc.
We can picture our camera as a sheet with a small hole punched in it. Some distance behind the sheet is a piece of paper with glue on it.
Now we want to create a picture on the glued paper by throwing grains of sand (a different size for each color) through the hole in the sheet, where they will then stick to the paper.
This is roughly analogous to the way a camera works. The light reflected off the object being photographed is constantly throwing grains of sand at the sheet.
This setup reveals a few inescapable factors. First, the smaller our sensor (the glued paper), the fewer grains of sand it can hold, and the lower the resolution of our camera. We can get around this drawback by having multiple pieces of glued paper ready and switching one out when it gets filled. If we do this four times, we can average out the four pictures and combine them into a larger picture, giving us higher resolution than any single sheet could give us on its own. This of course takes four times as long and is subject to several types of distortion. If the subject of our picture is moving, the extra time it takes to make the picture means it will be blurred.
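Here’s a minimal sketch (Python with NumPy, grayscale frames) of that four-sheets compositing trick, under the convenient assumption that the four exposures land exactly half a pixel apart, which real hardware never guarantees:

```python
import numpy as np

def composite_from_burst(frames):
    """Toy version of the four-sheets trick: four low-resolution frames,
    assumed to be offset by exactly half a pixel, namely (0, 0), (0, 0.5),
    (0.5, 0), (0.5, 0.5), are interleaved onto a grid with twice the
    resolution in each direction.  Real burst photography also has to
    estimate the shifts and reject frames ruined by motion."""
    h, w = frames[0].shape
    hi_res = np.zeros((2 * h, 2 * w), dtype=float)
    hi_res[0::2, 0::2] = frames[0]  # no shift
    hi_res[0::2, 1::2] = frames[1]  # half-pixel shift in x
    hi_res[1::2, 0::2] = frames[2]  # half-pixel shift in y
    hi_res[1::2, 1::2] = frames[3]  # half-pixel shift in both
    return hi_res
```

In practice the offsets have to be estimated from the frames themselves, and any error there shows up as exactly the kind of blur and distortion described above.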
This leads us to the second problem, which is the size of our aperture, the hole in the sheet. The smaller the hole, the fewer grains of sand can get through it at a time, the longer it takes to draw the picture, and the more distortion and blurring we will have. Remember that not all wavelengths of light are the same size. In our analogy, a red grain of sand might be twice as large as a blue one. As we throw grains through the hole, the red ones are a lot more likely to catch the edge of the hole and get deflected than the blue ones. Our image will have more blue and less red than the original. It won’t be an even blue tint either. Red grains heading towards the center of the hole will be less likely to deflect than red grains thrown towards the edges. The blue shift will be smallest in the center and greatest at the edges. More distortion.
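For anyone who wants the non-sand version: the real effect being described here is diffraction. The usual rule of thumb (the Rayleigh criterion) for the smallest angle an aperture of diameter D can resolve at wavelength λ is

```latex
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
```

Shrink the hole (smaller D) or move to longer, redder wavelengths (larger λ) and the blur angle grows, which is just what the grains-catching-the-edge picture predicts.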
A third problem is that the center of our picture will be created by grains thrown straight through the center of the hole with the smallest distance to travel, and the smallest chance of deflection. The ones towards the edges will come in from a wide angle, have to travel further, and have a better chance of getting deflected. Anyone who has taken a selfie with their face close to the camera and noticed that it makes their nose look big will be familiar with this type of distortion.
That analogy covers the basic problems with camera size. In analog cameras these are partially corrected by playing with lens shape, focus, etc. In digital cameras we can add software corrections. For example, in our colored sand grain problem, we know that reds are more likely to get caught than blues. If a red grain has a one in three chance of getting deflected, then we know we just need to add one red grain of sand for every two that we find on the glued paper. We can correct for most of the other distortions using similar tactics. These tactics all create another type of distortion. If you’ve seen an overprocessed digital picture and noticed how it looks like a painting, with a lot of subtleties of color and texture absent, then you know what an image looks like after it has been distorted by these corrective algorithms, which have averaged out the detail.
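A minimal sketch of that bookkeeping, assuming (purely for illustration) that exactly one red grain in three is lost and that the sensor hands us plain RGB values:

```python
import numpy as np

# Assumed, for illustration only: one red "grain" in three is deflected,
# so the sensor catches 2/3 of the true red.  Scaling the red channel by
# 3/2 adds back one red for every two we actually caught.
RED_LOSS = 1.0 / 3.0

def correct_red(image_rgb):
    corrected = image_rgb.astype(float)        # H x W x 3 array, RGB order
    corrected[..., 0] /= (1.0 - RED_LOSS)      # red channel: multiply by 1.5
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

Real cameras get those scale factors from calibration rather than guesswork, and piling on corrections like this is exactly what gives overprocessed images that painted look.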
Clever compromises between aperture size and shape, sensor size and shape, and software corrections, as well as “bursting” to create composites, are what have allowed us to create very small yet reasonably high-resolution cameras like the ones on our phones.
If you want to ask how small is usable, you have to define usable. If the hole in your sheet is big enough to let through one grain of sand, and your glued paper is big enough to catch that grain, you have a “usable” camera. A grain of sand on the glue indicates that there is something casting sand in the direction of the sheet. That’s usable visual information. Turning our analogy back to reality, this equates to a camera the size of a few molecules at its most basic. The more detail and resolution you want, and the faster you want it, the larger your minimum camera size will have to be, offset by how clever your software corrections are.
To give you an example of this in action, let’s look at it from the opposite perspective. If I go to a huge astronomical observatory and take a picture through the telescope with my large-sensor camera, I can very quickly get a pretty detailed and accurate picture of Mars. That huge telescope is gathering a lot of light “grains” and throwing them onto the big glue board that is my camera’s sensor.
If I take my pocket camera with a small lens and sensor, put it on a tripod to keep it steady, and take a quick picture of Mars, I get a vague reddish blur. If I take a thousand pictures, correcting for Mars’ relative movement, and then layer them one on top of the other, I get a more detailed picture, maybe one that at first glance looks as good as the one I took through the telescope. It just took a lot longer. Further examination might show that the two pictures don’t agree with each other. A lot of the detail in the composite picture isn’t actually true detail, but artifacts or noise that have crept into the picture as a result of all my tampering.
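A rough sketch of that stacking process, assuming the frame-to-frame offsets are already known (say, from tracking Mars across the sensor) and ignoring rotation, atmospheric seeing, and everything else that makes real stacking hard:

```python
import numpy as np

def stack_frames(frames, offsets):
    """Shift-and-add stacking: align each frame by its known (dy, dx)
    pixel offset, then average.  np.roll wraps pixels around the edges,
    which is harmless for a small planet near the middle of the frame
    but wrong in general."""
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, offsets):
        acc += np.roll(frame.astype(float), shift=(-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging N frames beats the random noise down by roughly the square root of N, but it can’t recover detail the small aperture never captured, which is why the composite can still disagree with the telescope shot.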
In engines they say there is no replacement for displacement. In cameras there is no replacement for aperture size, sensor size, and time of exposure. Those three factors (assuming, for our purposes, that your subject is perfectly stationary) determine your resolution. The smaller those values, the lower your resolution.
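One rough way to put numbers on that rule, assuming the main enemy is photon shot noise: angular resolution is capped by the aperture, and the light collected scales with aperture area and exposure time,

```latex
\theta_{\min} \approx 1.22\,\frac{\lambda}{D},
\qquad
N_{\mathrm{photons}} \propto D^{2}\,t,
\qquad
\mathrm{SNR} \propto \sqrt{N_{\mathrm{photons}}}
```

Shrink D or t and you either resolve less or drown in noise; the sensor size then sets how much of what you do resolve you can actually record.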
So, again, the definition of usable comes down to what your resolution needs are, and that will determine your minimum size.
I hope that helps. It was fun to write, anyway.
Your second statement, taken at face value, isn’t true. No matter the size of my lens, I can make the focal length anything I want. A disc of glass with flat faces on both sides has an infinitely long focal length. Curve one side only slightly and you have a very long focal length.
What I think you mean is that if you want to build a miniature camera you need a very short focal length, since your film, array, or retina must be placed at the focal plane of the lens.
Here’s one option to consider: if your flying robot is going to resemble an insect, why not make its eye resemble an insect’s eye? In other words, rather than having a single lens imaging your scene onto a screen/retina/array/film, build your image sensor up out of an array of detector elements, each with its own lens. In effect, you’re building a camera out of an array of one-pixel cameras. At that point, a lot of your concerns about the diffraction limit and sharpness of image go away. All you have to do is make the lens and the barrel of the conduit leading down to each detector efficient at rattling the light down to it. You don’t have to worry about sharpness; the image is as sharp as your packing of individual elements allows. You don’t need to worry about light from one pixel flooding over onto the next.
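Here’s a minimal sketch of what I mean, treating each element as a one-pixel camera pointed along its own direction. The read_element function is a placeholder I’ve made up for whatever the real detector hardware reports, and the evenly spaced grid of viewing directions is an assumption:

```python
import numpy as np

def compound_eye_image(read_element, n_rows, n_cols, fov_deg=60.0):
    """Each element is a one-pixel camera looking along its own direction;
    the 'image' is just the grid of their readings.  read_element(az, el)
    is a made-up placeholder for whatever the real detector reports."""
    image = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        for c in range(n_cols):
            # Spread the viewing directions evenly across the field of view.
            el = (r / max(n_rows - 1, 1) - 0.5) * fov_deg  # elevation, degrees
            az = (c / max(n_cols - 1, 1) - 0.5) * fov_deg  # azimuth, degrees
            image[r, c] = read_element(az, el)
    return image
```

The “image” is literally just the array of those readings, so its sharpness is set by how finely you can pack the elements.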
They’re already pursuing this approach for micro-drones:
Won’t work. Although you have prevented the light path for each pixel from affecting the neighboring pixels, the opening of the barrel and any lens at the top are still affected by diffraction: they will see light from a wide angle of sources, and so the whole device will still have blurring and a base resolution limited by aperture. Something about there being no free lunch here.
Yes, but the fact that insects use such a system shows that it works, and on a small scale. And people wouldn’t be pursuing this as a viable strategy if it didn’t work.
In short, you can’t scale this down to arbitrarily small sizes, but that doesn’t mean that it doesn’t work, or that it might not be better than single-lens imaging over a certain range, lunches be damned.
CalMeacham, I really need to break this bad habit of making optics mistakes in threads you’re going to read.
EDIT: I think the root of my mistake was the fact that you can’t make a big lens with a short focal length. But of course you can make a small lens with a long focal length. Heck, you can make a small lens with an infinite focal length.
Well, you CAN make a big lens with a short focal length, but the shorter you make the focal length relative to the diameter, the harder and more expensive it’s going to be. Those are the lenses with really low f-numbers*, with great light-gathering ability, like the one Kubrick used in filming Barry Lyndon, which let him shoot almost entirely by candlelight.
*f/number is one of those insane figures-of-merit in which the smaller it is the better, in general. For an infinitely distant object, it’s just the focal length divided by the diameter, and when you try to build something with a low f/number, or even just try to buy one, you find out how hard and expensive that is. As a rule of thumb, f/numbers of 1 and up are easy, but it gets harder as you try to get smaller than one.
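For concreteness, here’s the definition plus a worked example (the Barry Lyndon lens is usually reported as roughly f/0.7, so take the numbers as illustrative rather than exact specs):

```latex
N = \frac{f}{D}
\qquad\Rightarrow\qquad
f = 50\ \mathrm{mm}:\;\;
N = 1 \Rightarrow D = 50\ \mathrm{mm},
\quad
N = 0.7 \Rightarrow D \approx 71\ \mathrm{mm}
```

Same focal length, but the aperture has to grow considerably to reach the lower f/number, which is where the cost and difficulty come from.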
A pinhole lens is not the answer to the insect drone question, but pinhole lenses can produce amazingly good images. In fact, the world’s first photograph was taken by a pinhole camera, with no physical lens whatsoever:
Here is a better image taken by a pinhole camera:
Some of the smallest imaging lenses are for microscopic fiber optic borescopes. This one is 0.35 mm diameter: Borescope | Videoscope | Pipe Camera | Fiberscope
Well, at some point, your lens ends up being a complete sphere, and you’re making it out of the highest-index material you have available. Once you get to that point, you’re not going to be able to push the focal length/diameter ratio any further.
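To put a rough number on that floor: the paraxial focal length of a ball lens (a full sphere of glass) of diameter D and refractive index n works out to

```latex
f = \frac{n\,D}{4\,(n - 1)}
\qquad\Rightarrow\qquad
\frac{f}{D} = \frac{n}{4\,(n - 1)}
```

At n = 2 that ratio is 0.5 and the focus sits right on the back surface of the sphere; push the index any higher and the focal point moves inside the glass, so roughly f/0.5 is about as fast as a simple ball lens gets.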
And then you have a spymicroscope. Who knows what those pesky paramecia are up to.
Going out on a limb with my first post, ha ha. I don’t know what to think about that one, since this camera wouldn’t necessarily be the only component of the robot. If, for instance, it’s just some bug-like camera that you can stick someplace inconspicuous, you might have more feasible space for the camera. If this thing needs to actually fly, though, you’ve got a whole other set of challenges on top of condensing the camera drastically.
But maybe we’re overthinking things. What if this “camera” is just a little bio-feedback gizmo that can tap into an actual fly’s nervous impulses? I know it sounds cheesy, but we’re talking about spy flies, right? Even if you couldn’t control the animal through the interface, would it be possible to leech visual stimuli and convert it to something that could be worked with?
Edit: Pardon my lack of context - I didn’t see all of the above discussion.