My wife has a small Casio digital camera. She has been using it for some time, and it takes satisfactory pictures of outdoor scenes or indoor shots of her friends, etc.
We never noticed any problem until she started taking pictures of her paintings. When transferred to my computer, we noticed a peculiar distortion at the edges of the pictures. All four sides are slightly bowed in toward the middle. I think there is a term for this, but it escapes me now. In other words, the edges of the picture are curved inward, sort of concave.
I first thought this might be some fault of the computer, or that it only happened because she was zoomed all the way in.
When I looked at the screen on the camera, sure enough it was the same problem. I tried it zoomed all the way in, part-way, and zoomed out for long shots, but in all cases this distortion was noticeable right on the screen.
Is this some aberration of the lens or the CCD chip? Any ideas?
All we can do is crop the pics on the computer to make them rectangular, but that of course eliminates part of the image.
It’s called “pincushion distortion” – the converse is called “barrel distortion” – and it’s a property of the lens. Lens-makers do try to avoid it, but it’s the result of balancing various bad things that lenses can do. You can fix it using programs like Photoshop.
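If you’d rather script the fix than push sliders, what those programs do is basically a radial remapping. Here’s a minimal Python/OpenCV sketch; the file name, focal length, and distortion coefficients are placeholders I made up, and in practice you’d tune them by eye or get them from a calibration. If the result bows the other way, flip the sign of k1.

```python
# Minimal sketch of correcting radial (pincushion/barrel) distortion with OpenCV.
# The focal length and coefficients below are invented for illustration only.
import cv2
import numpy as np

img = cv2.imread("painting.jpg")              # hypothetical input file
h, w = img.shape[:2]

# Simple pinhole camera matrix: guessed focal length in pixels, principal point at center.
f = 0.9 * w
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1]], dtype=np.float64)

# Brown-Conrady coefficients [k1, k2, p1, p2, k3] describing the lens's distortion.
# A small positive k1 is a guess at mild pincushion; undistort() then removes it.
dist = np.array([0.05, 0.0, 0.0, 0.0, 0.0])

corrected = cv2.undistort(img, K, dist)
cv2.imwrite("painting_corrected.jpg", corrected)
```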
Pin-cushioning or barrel distortion results from using a tiny tiny lens at its extreme widest setting. Actually it happens when I use my 17-40 at 17mm on a full-frame camera, too.
Best advice I can give is to not shoot at the widest setting, step back a few paces and use an intermediate focal length.
Another thought about what can cause (and exaggerate) distortion: You say she’s shooting her paintings. If the canvas isn’t perfectly parallel with the camera sensor, you’ll add ‘keystoning’ to your menu of distorted images.
Edit: Beaten! But I may add: correcting in Photoshop is possible, but not always for the layman. I’d rather take the time to shoot a good picture than waste time trying to fix a bad one.
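If keystoning is part of the problem, the software fix for that is a perspective warp rather than a distortion correction. A rough Python/OpenCV sketch, assuming you can read the four corner coordinates of the canvas off the photo (the pixel numbers here are placeholders):

```python
# Rough keystone (perspective) correction sketch with OpenCV.
# The four source corners are placeholders; in practice you read them off the
# image in the order top-left, top-right, bottom-right, bottom-left.
import cv2
import numpy as np

img = cv2.imread("painting.jpg")                       # hypothetical input file

src = np.float32([[120, 80], [1890, 140],
                  [1860, 1320], [150, 1280]])          # where the canvas corners landed
out_w, out_h = 1600, 1200                              # output size, chosen to match the canvas's aspect ratio
dst = np.float32([[0, 0], [out_w, 0],
                  [out_w, out_h], [0, out_h]])

M = cv2.getPerspectiveTransform(src, dst)              # 3x3 homography from the four point pairs
flat = cv2.warpPerspective(img, M, (out_w, out_h))
cv2.imwrite("painting_flat.jpg", flat)
```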
Was there any proprietary software that came with the camera? This may include a simple program to correct inherent faults.
Either/or. I suspect it’s a plastic lens or cheap glass lens. There’s a whole slew of info on the better digital-camera sites that goes into great detail comparing most consumer digital cameras against the higher-end “professional” cameras. The CCD chips can add to the mix as well.
The distortion should increase dramatically toward the edges of the picture. One way to reduce distortion is to frame the picture more loosely and then crop in Photoshop.
You should get less distortion at the medium-tele end of the zoom, so taking a few steps back should help also.
Imaging sensors very rarely contribute to geometric distortion of the image. Sufficiently rarely that I have never heard of this happening outside of freak warped die occurrences that typically do not pass QA.
It is not the quality so much as the quantity. As a rule of thumb, each glass/air surface lets you compensate for one aberration:
Spherical aberration (soft focus on-axis)
Pincushion/Barrel distortion (the OP)
Field curvature (can’t focus edges and center at same setting)
Coma (Turns points of light into comet shapes at edges)
Spherical Astigmatism * (radial lines focus differently than concentric lines)
Others I can’t think of at the moment.
In addition, at least two lenses of different glass types (usually crown and flint) are required for correction of first-order chromatic aberration (rainbow halos).
Notice I said “compensate,” not “fully correct.” In addition, using aspherical surfaces can reduce the amount of compromising required, but these become expensive to do well.
Finally, zoom lenses throw a whole 'nother mess of variables in trying to compensate for aberrations at all zoom settings. It is sort of amazing that they were able to make them work at all before computers were available to speed up the iterative design process used.
The CCD offers an interesting possibility. A CCD with a barrel distorted array of pixels could be used to compensate for a lens with pincushion distortion. Of course it wouldn’t work with another lens. You could also try to make the software in the camera compensate at the time the picture is taken, but this would be an inferior solution, costing some resolution.
*This form of astigmatism happens with lenses that are accurately spherical. The other meaning (that your eye doctor uses) is due to the lenses having a barrel shape.
Actually, you can see it on telephoto zooms as well. It might not be that noticeable, but it’s there. With the Nikon 70-200, for instance, it starts out as pincushion distortion at the 70mm end, flattens out somewhere in the middle, maybe in the 115-135 range, and then is barrel distortion the rest of the way up. It’s pretty subtle, but it’s there. Same with the Canon 24-70mm f/2.8L: pretty noticeable barrel distortion at 24-28mm, levels off at around 35mm, turns to slight pincushion by 70mm.
The “one surface per aberration” rule holds for spherical surfaces, which was all you used to have for glass lenses. Nowadays you can make inexpensive plastic replica aspheric lenses, for which the aspheric terms allow you to compensate for more than one error per surface.
The other aberration you’re trying to think of is chromatic aberration.
By the way, folks in engineering-related optics never say “Spherical Astigmatism”. It’s just “Astigmatism”, with Tangential or Sagittal as qualifiers. Is this another of those cultural divides between optometry and physics?
Is this kind of distortion blamed on the lens? Seems to me that it’s just an effect of perspective.
If you’re fairly close to a rectangular painting and take a picture straight-on, the center of the painting will be closer to the camera than the left and right edges, therefore the left and right sides should appear shorter than the height at the center.
Similarly, the top and bottom edges will be reduced in comparison to the width through the center.
These two together would cause a pincushion effect, and with really wide lenses it’s more extreme; that’s called “fisheye.”
Seems to me that cameras would have to produce this kind of image in order to accurately represent what’s coming into the camera. Is it considered a problem?
Distortion is a well-known and well-characterized defect of many optical systems. It differs from the perspective effects you discuss, and is, in fact, defined as the difference between the “ideal” image you get from perspective and the reality the lens produces.
It’s generally not desirable, but it can be used for effect (see: fisheye). However, there are ultra-wide-angle lenses designed to minimize such distortion, called “rectilinear” lenses. You’ll still see the effects of perspective at an ultra-wide angle, but not the barrel distortion caused by the lens. Here’s an example at the same focal length, one rectilinear, one fisheye.
Since you can build different lenses, one with pincushion and another with barrel distortion at the same focal length, no, you don’t have to have this kind of image. With some kinds of pictures it’s not a big problem, because you don’t see the distortion. However, with pictures including subjects with straight lines, it’s a problem because the lines don’t look straight. Examples where it matters are taking a picture of a rectangular object like a painting, where you want the edge of the painting to be the same as, or parallel with, the edge of the photo (as in the OP), and taking pictures of buildings, where you want the straight lines in the buildings to look straight.
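To make “the difference between the ideal image and what the lens produces” concrete, here’s a tiny numpy sketch with made-up geometry: project points along the painting’s straight top edge through an ideal pinhole, then add a simple radial pincushion term, and the straight edge bows in toward the middle just as the OP describes.

```python
# Sketch: an ideal pinhole keeps a straight edge straight; a radial pincushion
# term bows it toward the image center. All numbers are invented and the
# distortion is exaggerated so the effect is obvious.
import numpy as np

f, d = 35.0, 1000.0                     # focal length and distance to the painting, in mm
X = np.linspace(-400, 400, 5)           # points along the painting's top edge
Y = 300.0                               # height of that edge above the lens axis

# Ideal (rectilinear) pinhole projection: x = f*X/d, y = f*Y/d.
x_ideal = f * X / d
y_ideal = np.full_like(X, f * Y / d)    # constant -> the imaged edge is a straight line

# Pincushion: r' = r * (1 + k*r^2) with k > 0 (value invented, exaggerated).
k = 0.0005
r2 = x_ideal**2 + y_ideal**2
y_pincushion = y_ideal * (1 + k * r2)

print(y_ideal)        # all equal: straight edge
print(y_pincushion)   # smallest in the middle: the edge bows in toward the center
```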
I mentioned chromatic aberration separately because you can’t correct it by adding surfaces, only by adding glass with a different dispersion.
In fact I have only heard it called astigmatism with no qualifier. “Spherical astigmatism” is a term I invented just for that post. When I first studied it, it caused me a lot of confusion as I was already familiar with the meaning used by eye doctors, and I was trying to make the point that it is not caused by a deviation from spherical surfaces like the optometry kind.
Physics vs. Optometry, heck if I know.
I’m just an electrical engineer and the semester of optics I had in college really didn’t get into off-axis performance. Years later I got into telescope making, and am mostly self taught from the amateur astronomy literature. It turned out that I was able to use this to design some macro attachments for an employer.
All that was over a decade ago, though, so I was assuming I couldn’t come up with all the second-order aberrations from memory. The last of those macro attachments appears to still be in production: http://www.canberra.com/products/724.asp
My electronic designs should have such a market life.
OK, thanks. But going back to the OP, what he’s describing sounds like it could be just properties of the “ideal” image that he gets from perspective, and not lens distortion. Everyone here’s jumping onto the distortion explanation, when that might not be the case.
Well, I used the wrong word in my earlier post. If you take the picture straight-on, the ideal camera should make an image of the rectangle with its sides bowed out. That would be barrel-shaped (not pincushion like I said earlier), and it seems to me that could be completely explained by perspective effects without any lens distortion.
I guess I’m assuming that an ideal camera would make an image in which a measured distance between two points corresponds to a specific angle that the light made when it came into the lens.
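That “distance on the image proportional to angle” camera is the fisheye-style f·θ mapping; the rectilinear ideal the other posters are describing uses f·tan(θ) instead. A quick numpy sketch with invented numbers, showing that the angle-proportional mapping bows a straight edge outward (barrel, as you say) while the tan mapping keeps it straight:

```python
# Compare two candidate "ideal" mappings for a straight horizontal edge on a
# flat painting shot straight-on. All numbers are invented for illustration.
#   rectilinear:        image radius r = f * tan(theta)  -> straight lines stay straight
#   angle-proportional: image radius r = f * theta       -> straight lines barrel outward
import numpy as np

f, d = 35.0, 1000.0                  # focal length and distance to the painting, in mm
X = np.linspace(-600, 600, 5)        # points along the edge
Y = 400.0                            # height of the edge above the lens axis

R = np.sqrt(X**2 + Y**2)             # distance of each point from the lens axis, in the painting's plane
theta = np.arctan(R / d)             # angle each incoming ray makes with the axis

y_rectilinear = f * np.tan(theta) * (Y / R)   # = f*Y/d, constant -> straight edge
y_angle_prop  = f * theta * (Y / R)           # largest at X = 0 -> edge bulges away from center

print(y_rectilinear)   # all equal
print(y_angle_prop)    # biggest in the middle, smaller at the ends (barrel)
```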