Here is what I found that demonstrates forensic imaging. The amount of detail that can be recovered is fairly significant, but judging from this particular example, it’s not even close to what you get in the movies. Chas, now is this a typical example of image enhancement, or is technology significantly beyond this?
As Chas.E points out, they did. It wasn’t a great solution, though, because of the nature of the problem. Instead of focusing the image on the collector, the light was scattered around, with a good bit of it missing the collector altogether. Thus, the entire image wasn’t captured, and therefore it couldn’t be completely recovered.
Here’s one paper about it: Iterative/Recursive Image Deconvolution: Method and Application to HST Images.
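For the curious, the iterative family that paper belongs to (Richardson-Lucy deconvolution) is simple enough to sketch. This is a toy 1-D version in Python/NumPy, not the HST pipeline itself; the two-spike scene and the box PSF are made up for illustration:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=30):
    # Classic Richardson-Lucy update: compare the blurred data to a
    # re-blurred version of the current estimate, and use the ratio
    # to correct the estimate.  1-D here for clarity; images work
    # the same way with 2-D convolutions.
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy scene: two point sources blurred by a 5-sample box PSF.
truth = np.zeros(32)
truth[10] = 1.0
truth[20] = 0.6
psf = np.ones(5) / 5.0
blurred = np.convolve(truth, psf, mode="same")

restored = richardson_lucy(blurred, psf)
```

After a few dozen iterations the flat 0.2-high blur around sample 10 re-concentrates into a sharp peak, which is the "rearranging information" being argued about below.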
Arjuna34
I’ve seen much better work, but alas, I’ve been unable to find a good web site. Geez, you’d think people would be promoting their best work via the web.
I’m willing to admit you know more about forensic imaging than I do, Chas, but I can’t see how you can possibly derive more information from a blurred photograph without a certain degree of guesswork.
A photo is just information. Once you have reached the basic constituent parts of the information, surely there is no more? To simplify: in a B/W photo the blob is a shade of grey, ranging from black to white. You cannot break the blob down into further, smaller shaded blobs, because the boundaries of these smaller blobs aren’t defined. They just aren’t there. All you’re going to get is an arbitrary number of identical smaller blobs that amount to exactly the same thing you had to begin with.
Now, you can apply any amount of effects, filters, calculus and jiggery-pokery you want to the photo, and come up with a best guess of what may lie within the blob. The guess may even be spot on. But it’s always going to be a guess, because all the photograph can tell you for certain is that it’s a solid blob.
Isn’t it a universal truth that you can’t make something out of nothing? This additional detail has to have come from somewhere. It’s come from the process you’ve applied, not the photograph, which is merely a starting point.
The only other route for investigation I can see is an analysis of the chemistry of the photo emulsion. It may tell you more about what kind of lit detail could produce what kind of blob, but there would still be a large degree of guesswork and selection within multiple possibilities.
That’s the whole trick. There IS information there, it just isn’t in visible form.
But the information is on the film; it’s just invisible to the naked eye. There is information in those “featureless” blobs, even though it may not appear that way. None of these de-blurring techniques adds information; they just rearrange it so it’s visible to us humans.
Think of a camera, out of focus, with the out-of-focus image hitting the film. Now, theoretically, if you knew the precise way it’s out of focus, you could construct a lens that would go in front of the film to refocus the light. This “lens” can be a physical lens, or a digital one applied mathematically after the fact, assuming the film captured the unfocused blobs accurately enough, and assuming you have some idea of the original image, the lens, and the misfocus.
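That “mathematical lens” can be sketched in a few lines (Python/NumPy; the box kernel standing in for misfocus and the regularization constant are made-up illustrations). If the blur acts as convolution with a known kernel h, then in the frequency domain it’s just multiplication by H, and dividing it back out, with a small regularizer so noise and near-zeros of H don’t blow up, refocuses the image; this is Wiener-style inverse filtering:

```python
import numpy as np

n = 64
truth = np.zeros(n)
truth[25:30] = 1.0                 # a sharp-edged 1-D "object"

h = np.zeros(n)
h[:7] = 1.0 / 7.0                  # box kernel standing in for misfocus
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(h)))

# The "digital lens": divide the blur back out in the frequency
# domain, with a small constant k guarding against noise and the
# near-zeros of H (a Wiener-style filter).
H = np.fft.fft(h)
k = 1e-3                           # assumed noise-to-signal level
lens = np.conj(H) / (np.abs(H) ** 2 + k)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * lens))
```

With no noise and a known kernel, the sharp edges come back almost exactly; with real film grain and an estimated kernel, the regularizer has to be larger and the recovery correspondingly rougher.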
The shape of the blurring function can be estimated with some knowledge of the blurring. Blur from an out-of-focus lens tends to create a smooth, circularly symmetric lowpass filter, since most lenses are thin and circular. Blur from atmospheric turbulence tends to be Gaussian. Blur from camera movement (or object movement) tends to have a sinc-shaped transfer function. Once you know the shape, you can often reduce the problem to estimating just a few parameters. For a misfocused camera, if you know which camera and lens were used, the problem is merely estimating a single number, since the only variable would be the linear distance from the lens to the film.
Arjuna34
I considered starting a new thread for this, but I think it fits nicely in here.
I just read an article that claimed that Retinex is the best thing since sliced bread.
It’s a one-size-fits-all algorithm for enhancing images, purportedly influenced by the way the human eye-brain complex handles images. (retinex - retina, get it?)
NASA has invested quite a lot in making this a fully automatic procedure for treating satellite images, but claims that it works equally well for consumer pictures. The samples at their website are impressive.
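For anyone curious what the algorithm actually does: single-scale Retinex is, roughly, log(image) minus log(Gaussian-smoothed image). The smoothed image estimates the slowly varying illumination, and subtracting it in the log domain leaves the reflectance detail, which is why it brightens shadowed regions. A toy Python/NumPy sketch (NASA’s multi-scale version averages several scales and adds color restoration; the sigma and the test scene here are made up):

```python
import numpy as np

def single_scale_retinex(image, sigma):
    # output = log(image) - log(Gaussian-blurred image); the blurred
    # image approximates the illumination, so the difference is the
    # (log) reflectance.  Blur done via the frequency domain, with
    # circular boundary handling.
    eps = 1e-6
    fr = np.fft.fftfreq(image.shape[0])[:, None]
    fc = np.fft.fftfreq(image.shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fr ** 2 + fc ** 2))
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    return np.log(image + eps) - np.log(np.abs(smoothed) + eps)

# Toy scene: the same texture everywhere, right half in deep shadow.
rng = np.random.default_rng(0)
texture = 0.5 + 0.4 * rng.random((64, 64))
illumination = np.ones((64, 64))
illumination[:, 32:] = 0.1
image = texture * illumination

out = single_scale_retinex(image, sigma=4.0)
```

In the output the shadowed half sits at roughly the same level as the lit half, with its texture restored to comparable contrast.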
I can’t wait to find a plugin for GIMP or Photoshop.
But before I get too excited, I thought I’d put it to the SDMB test:
Does anyone have any info on it?
Is it as good as they say?
Are there any plugins available?