I know there is ambiguity in the question. What kind of “blurriness” am I talking about?
Well, I’m hoping knowledgeable dopers can say something about different ways to disambiguate “blurry,” as well as about the possibility, at least in theory, of reconstructing the original sharp image from blurry (or various kinds of blurry) images.
Ultimately it’s a matter of information: according to information theory, everything can be codified as a certain number of yes/no bits. A blurry image (I presume you mean a photograph or digital file) contains only so much information, and that’s all you have to work with. With anything but a hologram, and perhaps even that, you throw away large amounts of potential data every time you record an image. You can make what data you’ve got more significant by various tricks such as adjusting contrast, etc.
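For what it’s worth, the simplest of those tricks is just a linear contrast stretch, which spreads the data you already have across more of the available output levels without adding any new information. A minimal sketch in Python/NumPy, assuming an 8-bit grayscale image stored as a 2-D array (the function name and details are purely illustrative):

[CODE]
import numpy as np

def stretch_contrast(img):
    """Linearly rescale pixel values to span the full 0-255 range.

    This adds no information; it only spreads what the image already
    has across more of the available output levels.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)
[/CODE]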
One trick that can be done with video images depends on the fact that you have more than one frame of a scene: if one second of footage shows a nearly identical scene (little or no movement), then you can combine the data from 20+ stills and interpolate a finer resolution. Astronomers have created low-res maps of Pluto and Charon by collecting light curve data over a period of time and running the numbers through a computer.
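That frame-combining trick can be sketched roughly. Assuming the frames are already aligned with each other, simply averaging them beats the random noise down by roughly 1/sqrt(N), so faint detail buried in any single frame can emerge from the stack; genuine super-resolution additionally needs sub-pixel shifts between frames plus interpolation onto a finer grid. The toy Python/NumPy function below only illustrates the averaging step:

[CODE]
import numpy as np

def stack_frames(frames):
    """Average a list of already-aligned frames (2-D NumPy arrays).

    Uncorrelated noise drops roughly as 1/sqrt(N) over N frames, which
    is why a stack of nearly identical frames shows more than any one
    of them.  Real super-resolution would also exploit sub-pixel shifts
    between the frames, which this toy version ignores.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)
[/CODE]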
I have heard that a certain kind of blurriness can be corrected; IIRC it was the type where an image is out of focus, as opposed to the camera being moved during the picture. It was a few years ago, so technology may have improved.
The image can be reconstructed simply because you know that it’s blurry. The human vision system is very good at recognizing edges between objects, and at least in theory that sort of capacity can be extended to computers.
I guess on second thought I had in mind specifically images that are made using an “out of focus” setting on a lens, or anything equivalent to that. (Kanicbird’s post mentions this kind of blurriness.)
If I’d had to WAG, I would have said an out of focus image has the same amount of information as the equivalent in-focus image, and that the information in each is related in such a way that you could construct either starting with just the other.
But does anyone know whether this is true? Kanicbird mentioned reading that it is true, once. Anyone have any further info?
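For what it’s worth: if the blur kernel (the point-spread function, or PSF) is known, and the recording is linear and not too noisy, defocus can largely be undone with an inverse filter. In practice a defocus PSF suppresses some spatial frequencies almost completely, so you use a regularized inverse rather than a straight division. Here is a rough Wiener-style sketch in Python/NumPy, assuming the PSF is known; the names and the regularization constant are only illustrative:

[CODE]
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Approximately invert a known blur with a Wiener-style filter.

    blurred : 2-D array, the out-of-focus image
    psf     : 2-D array, the blur kernel, assumed known
    k       : noise-to-signal regularization constant; larger values
              damp frequencies the blur has nearly destroyed
    """
    # Pad the PSF to the image size and move its center to (0, 0)
    pad = np.zeros_like(blurred, dtype=np.float64)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(pad)                        # frequency response of the blur
    B = np.fft.fft2(blurred.astype(np.float64))
    # Wiener filter: conj(H) / (|H|^2 + k) instead of 1 / H, so that
    # frequencies the blur wiped out are suppressed rather than amplified
    restored = np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + k))
    return np.real(restored)
[/CODE]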
If you knew the path the camera moved in, relative to the subject, during exposure, could you not deconstruct motion blurriness and get the image you would have had if the camera were still?
Likewise, could you back-calculate the effects of shifts and distortions in the intervening air to recreate the image you would have gotten through clear still air? I’m sure I’ve read of this being done with telescopic observation.
The problem with this is knowing what the motions and distortions are…
To an extent you could. The problem is that if the blur is covering something, regardless of whether you can “remove” the blur, it doesn’t do you too much good if you don’t know what to replace it with. Similarly, if the motion blur squeezed a certain thing (for instance a tattoo) down into a line or something, then unless you’ve got really small pixels, most likely the fine details will have been lost by being compressed down smaller than your pixel size. From there, even if you can put it back in its general position, you’re only going to be left with a colored patch, not a detailed representation of the tattoo.
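To make the “if you knew the path” idea concrete, here is a rough sketch of turning a known camera path into a motion-blur kernel, which could then be handed to a deconvolution routine like the Wiener-style one sketched above. As noted, anything squeezed below the pixel size is still gone no matter what you do. The offsets and sizes here are purely made up for illustration:

[CODE]
import numpy as np

def motion_psf(path, size=15):
    """Build a motion-blur kernel from a known camera path.

    path : sequence of (dy, dx) offsets in pixels, relative to the kernel
           center, sampled over the exposure
    size : side length of the square kernel

    Each sample deposits an equal share of the exposure at its offset;
    the kernel is normalized to sum to 1.
    """
    psf = np.zeros((size, size), dtype=np.float64)
    c = size // 2
    for dy, dx in path:
        y, x = c + int(round(dy)), c + int(round(dx))
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] += 1.0
    if psf.sum() == 0:
        raise ValueError("camera path never passes through the kernel")
    return psf / psf.sum()

# e.g. a purely horizontal shake of +/- 3 pixels:
# psf = motion_psf([(0, dx) for dx in range(-3, 4)])
[/CODE]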
When they do that for astronomical purposes (as I recall) they bounce a laser off the moon. This allows them to know how the air is affecting the path of the light, and then they are able to undo it (with higher accuracy).
With motion blur you know at least in what direction things were distorted. With atmospheric blur, things could have been shifted in any direction–some sections expanding and others shrinking. Without having some way to figure that out (for instance bouncing a grid pattern off the moon) you just wouldn’t know what way to move things.
[QUOTE]The problem with this is knowing what the motions and distortions are…[/QUOTE]
I think there are two main issues here. The first, which has been pointed out, is whether we know what the process underlying the distortion was; if, for instance, we can’t know (or make a reasonable guess about) how out-of-focus the image was and what the optics of the system were (focal length, etc.), it would be impossible to undo this process. This seems surmountable to some extent through trial-and-error: try various manipulations (rearrange the pixels to simulate different focal points or remove motion, for example).
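A toy version of that trial-and-error idea: try a range of assumed defocus radii, deconvolve with each guess, and keep whichever result scores highest on a crude sharpness measure (gradient energy here). This only sketches the search; it assumes some deconvolution routine is passed in (for instance the Wiener-style one sketched earlier in the thread), and the parameter ranges are made up for illustration:

[CODE]
import numpy as np

def disk_psf(radius, size=None):
    """Uniform circular (defocus) kernel with the given radius in pixels."""
    size = size or (2 * int(np.ceil(radius)) + 1)
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    psf = ((yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2).astype(np.float64)
    return psf / psf.sum()

def sharpness(img):
    """Crude focus measure: total energy in the image gradients."""
    gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
    return float(np.sum(gy ** 2 + gx ** 2))

def best_defocus_guess(blurred, deblur, radii=np.arange(1.0, 8.0, 0.5)):
    """Try several assumed defocus radii and keep the sharpest restoration.

    deblur : callable (image, psf) -> restored image, e.g. a Wiener-style
             deconvolution routine
    """
    scored = [(sharpness(deblur(blurred, disk_psf(r))), r) for r in radii]
    return max(scored)[1]   # radius whose restoration scored sharpest
[/CODE]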
A tougher problem is whether the system acts linearly, which has to do with the question of information loss discussed above. Blurring basically means that light coming from a single point in space lands on more than one place on the imaging medium, and vice versa (light from multiple places lands on the same location in the imaging medium). If, when these photons land on the same place, the imaging medium adds their effects linearly (meaning total_effect = effect_photon1 + effect_photon2, with no non-linearities such as saturation), you could undo this in principle because there’s no information loss. But I think it’s a good assumption that most imaging media are very far from behaving linearly, which means that the blurred image will have lost information about the original visual scene that can’t be recovered.
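A tiny demonstration of the saturation point: two different “scenes” blurred by the same kernel stay distinguishable as long as the recording is linear, but once the sensor clips at its maximum the recorded signals become identical, and no inverse filter can tell which scene produced them. This is only a toy one-dimensional example:

[CODE]
import numpy as np

kernel = np.array([0.25, 0.5, 0.25])           # simple symmetric blur

scene_a = np.array([0.0, 0.0, 4.0, 0.0, 0.0])  # bright point source
scene_b = np.array([0.0, 0.0, 6.0, 0.0, 0.0])  # even brighter point source

blur_a = np.convolve(scene_a, kernel, mode="same")   # [0, 1, 2, 1, 0]
blur_b = np.convolve(scene_b, kernel, mode="same")   # [0, 1.5, 3, 1.5, 0]
print(np.array_equal(blur_a, blur_b))   # False: linear blur keeps them distinct

# Model sensor saturation: everything above 1.0 is recorded as 1.0
clip_a = np.clip(blur_a, 0.0, 1.0)
clip_b = np.clip(blur_b, 0.0, 1.0)
print(np.array_equal(clip_a, clip_b))   # True: after clipping they are indistinguishable
[/CODE]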