Spy movie cliche: Cleaning up fuzzy pictures

You’ve all seen that scene in a movie where they take a picture that looks like a bunch of pixels and clean it up so it looks like a professional photographer took it. My question is: Is there really a computer program that can do this? Can I buy one?

Well, most movies start with a sharp image and blur it for the “before” shot. They’re cheating, but it’s only a movie.
But yes, there are image processing techniques to enhance blurry photos. A simple example is the “sharpen” filter in Photoshop, but there are much more sophisticated methods. You’d be astonished at how much detail can be recovered, but still, there is a limit to how much you can get from a photo.
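For anyone who wants to try the simple end of this at home, here’s a minimal sketch using the Pillow library in Python (the filename is just a placeholder):

```python
# Minimal sketch: basic sharpening with Pillow.
# "photo.jpg" is a placeholder filename.
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")

# SHARPEN applies a fixed 3x3 sharpening kernel.
sharpened = img.filter(ImageFilter.SHARPEN)

# UnsharpMask gives control over radius, strength, and threshold.
stronger = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
stronger.save("photo_sharpened.jpg")
```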

Have to take issue with the use of the word ‘recovered’ here. The detail is either there or it isn’t, and no amount of enhancement is going to change that. That’s what makes it such a stupid cliche. The photograph has a fixed resolution, and once you’ve magnified to the point where the photo is showing up as blobs, that’s it; there is no more.

Any artificial enhancement, like ‘sharpen’, beyond this is just guesswork. That doesn’t mean it can’t be helpful or accurate; it’s just that you’re asking a computer to guess what it might look like using the surrounding info and examples of similar objects (supposing you know what it is to begin with). It may get it totally wrong, and it certainly isn’t ‘recovering’ detail. It’s more like making detail up.
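A simple illustration of that guessing is upscaling: every interpolated pixel is computed from its neighbours, and nothing new is recovered. A sketch in Python with Pillow (placeholder filename):

```python
# Upscaling "invents" pixels by interpolating from surrounding ones;
# it's a guess based on neighbours, not recovered detail.
from PIL import Image

small = Image.open("blurry.jpg")  # placeholder filename
big = small.resize((small.width * 4, small.height * 4),
                   Image.Resampling.BICUBIC)
big.save("upscaled.jpg")
```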

Other useful enhancement, such as the use of false colour, is just a means of representing the same available information in a different format that our perceptions may interpret more easily. But again, nothing is being ‘recovered’.
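False colour is easy to demonstrate: take a single channel of data and remap its intensities through a colormap. A quick sketch using matplotlib (the filename is a placeholder):

```python
# False colour: the same single-channel data, remapped through a colormap
# so small intensity differences are easier for the eye to pick out.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

gray = mpimg.imread("scan.png")        # placeholder filename
if gray.ndim == 3:                     # collapse RGB to one channel if needed
    gray = gray[..., :3].mean(axis=2)

plt.imsave("false_colour.png", gray, cmap="inferno")
```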

Futile Gesture makes an important distinction between enhancing info that’s actually there and making stuff up. Filters can help improve the view of information that’s already there up to a point. After that, the filters are inventing new information. Change the rules the filter follows, and it will invent entirely different information.

For example, I could take 100 crisp photos of various subjects and run them all through various blurring techniques so they all look the same. Now taking that blurry image, I have to apply the right filters and make the right assumptions about missing data in order to go back to a specific member of the original 100. Make a wrong choice and I’ll get another one of the originals or something entirely new.
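You can see that ambiguity numerically: blur two completely different images heavily and they converge toward the same mush, so there is no unique way to run the process backwards. A rough demonstration with SciPy, using random arrays as stand-ins for photos:

```python
# Heavy blurring is many-to-one: very different images end up looking
# nearly identical, so inverting the blur is inherently ambiguous.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
a = rng.random((64, 64))    # two unrelated "images"
b = rng.random((64, 64))

print(np.abs(a - b).mean())                                           # originals differ a lot
print(np.abs(gaussian_filter(a, 8) - gaussian_filter(b, 8)).mean())   # blurs barely differ
```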

There are programs that will run successive filters to show you what the clear image might have looked like, but the results are highly dependent on how the filters are configured. IIRC, this technique was represented well in the movie “No Way Out”, where the tech said something like “this series might be a dead end and we’ll have to start over”.

On the other end of the realism spectrum was “Enemy of the State”, where they reconstructed 3D visualizations from a 2D photo and saw something that was obstructed in the original.

This is technically impossible unless you have two or more views from different angles. But it’s a common technique in 3D modelling, with the major drawback that it only builds one side of an object; the rear sides are still invisible.
I think you, Futile Gesture, ought to take a course in basic Forensic Imaging. It is quite possible to reconstruct information beyond what is plainly visible in a photograph (up to a point). But this is almost impossible to explain unless you are good with calculus. I have seen many examples of forensic photography where blurred data is sharpened up to the level of clear visibility.

If you have a horribly blurry photo, you have a horribly blurry photo. The Photoshop “Unsharp” mask is, to simplify it, basically a filter that selectively increases the contrast of an image. Usually, this involves the computer detecting edges within a picture (by comparing the contrast of neighboring pixels) and then increasing that contrast, which seems to sharpen the photo. People who have experience with darkroom work will notice that printing a soft photo on grade 4 or 5 (high-contrast) paper will seem sharper than on grade 1 or 2 (low-contrast) paper.
Incidentally, there is a darkroom equivalent to the Photoshop “Unsharp” mask, but it’s a bit too complicated for me to go into detail here.
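For the curious, the digital version boils down to a couple of lines of arithmetic: blur a copy, subtract it from the original, and add the difference back. A sketch in Python (placeholder filename; the radius and amount are just typical values):

```python
# Unsharp mask, reduced to its arithmetic: the blurred copy is the "mask";
# adding back (original - blurred) boosts contrast at the edges.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("soft.jpg").convert("L"), dtype=float)  # placeholder
blurred = gaussian_filter(img, sigma=2)     # the unsharp (blurred) mask
amount = 1.5                                # strength of the effect

sharpened = np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)
Image.fromarray(sharpened).save("sharpened.jpg")
```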

That said, a lot of information can be gained from photographs by simply playing around with the contrast and, if it’s a color photo, by isolating individual color layers (either red, green, or blue; or cyan, magenta, yellow, or black).
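Isolating the layers digitally is a one-liner per channel; here’s a quick sketch with Pillow (placeholder filename):

```python
# Split a colour photo into its individual channels; each comes out as a
# grayscale image you can inspect or adjust separately.
from PIL import Image

img = Image.open("colour_photo.jpg")        # placeholder filename
r, g, b = img.convert("RGB").split()
c, m, y, k = img.convert("CMYK").split()
r.save("red_channel.png")
m.save("magenta_channel.png")
```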

But to go back to a perfectly crisp photo, as far as I know, there is no such thing.

The only possible way that I can see it being done would be through a program in which you can somehow recalibrate the focal distance, and the computer would rearrange the pixels in such a way to reconstruct the photograph with the newly inputted focal specs. However, I think there is too much loss in the visual data for this to be feasible.

And, micco, unless I’m mistaken, isn’t there a certain amount of data loss involved in blurring filters? That is, if you take all those successive blurring steps, it is, in effect, a one-way algorithm, like a JPEG, for instance?

Absolutely. That’s why I said you had to apply the right filters and make the right assumptions about missing data.

The whole point is that you can construct “clear” images from fuzzy images as Chas.E explains, but at some point you can no longer work with the information that’s actually there and you have to decide how to invent new data. Make one set of choices and you end up with a pic of a gun-toting terrorist. Another set of choices gives you a posy in a field.

Um… Perhaps you are not aware that the standard photographic unsharp mask technique was in use for many decades before Photoshop existed. The PS unsharp mask filter is an attempt to duplicate that effect digitally.
The difference between digital and analog worlds is quite dramatic. Analog processes can’t do Fourier transforms and convolution on an image. These are the fundamental processes used in extracting more data from images. Yes, it does work.
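The connection between the two operations is the convolution theorem: convolving an image with a kernel is the same as multiplying their Fourier transforms. A quick numpy sketch on synthetic data:

```python
# Convolution theorem in action: blur an image by multiplying spectra in
# frequency space instead of sliding a kernel over the pixels.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((128, 128))

kernel = np.zeros((128, 128))
kernel[:5, :5] = 1.0 / 25.0    # 5x5 box blur, zero-padded to image size

blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
# Since a known blur is just multiplication in frequency space, undoing it
# is (in principle) division in frequency space: the idea behind deconvolution.
```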

Chas - Ok, I bow to you on this point. I’ve actually seen some pretty impressive image work done with fractals, and I’ll have a look into Forensic Imaging. If you can recommend any sites or books, please send them onward. I’d be interested in seeing them… I’m just curious how blurred the photographs were to begin with, and how crisp they were at the end.

Chas p.s. I thought the quote you pulled demonstrated that I did know the Unsharp mask was in use before the digital version. That’s what I meant by “darkroom equivalent.” I suppose I should have stated this the other way 'round, saying Photoshop’s Unsharp Mask is the digital equivalent of this darkroom technique.

One more thing…Are these imaging programs available commercially? If so, are they simply cost prohibitive, or not appropriate for Average Joe Photographer’s use? I mean, there is HUGE commercial potential here…

There are a gazillion specialized programs available for this, just go to google and search on “forensic imaging” and you’ll see lots of them. Most of this stuff is for doing mundane police work like enhancing fingerprints, I presume that there is an almost unlimited budget for law enforcement gadgetry.
However, you can do most any image enhancement you’d need in Photoshop if you use the Filter Factory plugin. This is a manual plugin that allows you to input your own settings for image convolution. You will get absolutely no usable results unless you know precisely what it is doing. And it’s totally undocumented, except insofar as each location in the transformation matrix has mathematically known properties (see the sketch at the end of this post). Yes indeed, it is very difficult to grok the math of image enhancement, but it’s a well-refined science thanks to NASA.
Another quibble that might be made is that there is no one-size-fits-all enhancement. If you’re looking to enhance the detail on that blurry image of a tattoo on the bank robber’s arm, the settings to bring that out aren’t necessarily going to give you a good image of other objects in the scene.
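To give a feel for what entering your own convolution matrix means, here’s a sketch with SciPy standing in for Filter Factory; the kernel below is the classic 3×3 sharpening matrix:

```python
# A hand-entered convolution matrix, applied with SciPy. The centre weight
# outweighs the negative neighbours, so edges get exaggerated.
import numpy as np
from scipy.ndimage import convolve

kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)   # classic sharpening matrix

image = np.random.default_rng(2).random((64, 64))  # stand-in for a real photo
sharpened = convolve(image, kernel, mode="reflect")
```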

Most of the time I’ve seen this technique represented, it has been the generation of a good still shot from a series of videotape frames or a series of lousy still shots.

If you start off with only one lousy still, you aren’t going to magically turn it into a nice clear image. But if you have several lousy shots or frames to play with, you can do some cute mathematical tricks: essentially throwing away a lot of noise (which, being random, won’t repeat exactly from frame to frame) and retaining information that shows up in the same logical location in all of the shots/frames.
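The trick is essentially averaging, and it’s easy to verify numerically; here’s a small sketch with a synthetic “scene” standing in for real frames:

```python
# Frame stacking: average many noisy shots of the same scene. The random
# noise cancels out, while the repeated signal survives.
import numpy as np

rng = np.random.default_rng(3)
scene = rng.random((64, 64))     # stand-in for the true scene
frames = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(20)]

stacked = np.mean(frames, axis=0)
print(np.abs(frames[0] - scene).mean())   # error of a single noisy frame
print(np.abs(stacked - scene).mean())     # stacked error is several times smaller
```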

Actually, I believe they tried to explain how they did this. That whole movie was MTV-edited for short attention spans, but I caught that they had several different camera angles, a necessity that Chas pointed out. And they even said that their model was incomplete, and they couldn’t be sure of their model because not every angle was covered. However (getting back to the OP), the images they used were video surveillance tapes, which would most likely lack the resolution necessary to create the detailed 3D model shown. There’s some of your “magic fuzzy reduction” at work.

I think something that hasn’t been mentioned is that, at least in my experience, complex digital photo-editing tools are fairly slow. Cleaning up a photo using a fractal algorithm would probably take hours. So how is it that these spies (typically using portable computers) whip up these crystal clear pics lickety-split?

Rising Sun is perhaps the worst perpetrator of these magic photo- and film-restoration tools. They restore an entire video sequence from a CD, purportedly using a computer program. The problem is, the data was gone. If it was saved to the disc in a digital movie format, the data that was changed would be lost, just like in your blurry photos.

A few years ago I read a novel by a Semi-Famous Mystery Author (I won’t say which one) in which the solution hung on pulling detail out of a photo. But the “method” they used was impossible, and simply laughable to anyone who’s ever read a newspaper. They put a magnifying glass to a HALFTONE (a photo that’s been shot through a screen and separated into dots, a technique used in most printed material today) and supposedly saw some detail that they could not see looking at the halftone normally.

Friends, when you put a magnifier on a halftone, you see DOTS. PERIOD. No image. No “stuff.” Just DOTS.

Ugh. I’ll never read that author again.

A good word to search on is “deconvolution”.

Here is a site that sells AutoDeblur, and which talks a bit about deconvolution.

Here’s another site with some more information.

One thing to keep in mind is that the data is still there in a blurry picture, it’s just, well, blurred. I worked with 1-D deconvolution a long, long time ago. The big enemy of recovering the data is noise in the data. Given a blurred image with no noise, and sufficient precision arithmetic, the picture can be cleared up using even the simplest deconvolution method. In the real world, you need to use regularization, or you pretty much just get garbage. Also, you need a way of estimating what form of blurring took place, but this can be estimated from the image itself.
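If you want to try this, scikit-image ships a Wiener deconvolution routine where the regularization mentioned above appears as the balance parameter; here’s a rough sketch on synthetic data:

```python
# Wiener deconvolution: invert a known blur, with a regularization term
# ("balance") that keeps the noise from being amplified into garbage.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

rng = np.random.default_rng(4)
true_image = rng.random((64, 64))
blurred = gaussian_filter(true_image, sigma=2) + rng.normal(0, 0.01, (64, 64))

# Point-spread function: a small Gaussian kernel matching the blur.
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=2)
psf /= psf.sum()

restored = restoration.wiener(blurred, psf, balance=0.1)
```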

There was a good Scientific American article on this about 20 years ago, where they were deblurring pictures. I don’t recall the exact year because … wait for it … it’s kind of a blur. :)

I remember this article- it was one of the reasons I went into signal processing years later.

In addition to just “deconvolution”, you might get more specific results with “blind deconvolution”. The transference of light through a lens onto film is a signal transformation. If the image wasn’t in focus, what’s needed is a way to deconvolve the original signal from the lens transformation. If you don’t know the transformation of the lens, then it’s “blind deconvolution”. In general this is impossible (it’s like solving the equation a*x = y, where a = original image, x = lens, y = captured image, and you only know y). In practice it’s much easier, since you can make good guesses about the lens (i.e. it’s round, symmetric, etc.), and you generally know when the image is deblurred (i.e. when you see a candidate a from a guess at x, you’ll recognize that it’s in focus). There’s a sketch of this guess-the-lens idea at the end of this post.

All this presumes that the information is captured on the film, as Zenbeam points out. Here analog film is better than digital cameras. Deconvolution cannot undo the loss from converting a bitmap to JPEG format, for example; there, information is thrown away forever.
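A rough sketch of the guess-the-lens idea, using Richardson-Lucy deconvolution from scikit-image (a real blind method would also refine the PSF guess automatically; here we just try a few candidates by hand):

```python
# "Blind" deconvolution by trial: try candidate point-spread functions and
# keep the result that looks sharpest. Richardson-Lucy does each inversion.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

rng = np.random.default_rng(5)
true_image = rng.random((64, 64))
blurred = gaussian_filter(true_image, sigma=2)   # lens blur with unknown sigma

def gaussian_psf(sigma, size=9):
    """A round, symmetric lens guess, as described above."""
    psf = np.zeros((size, size))
    psf[size // 2, size // 2] = 1.0
    psf = gaussian_filter(psf, sigma=sigma)
    return psf / psf.sum()

# A real blind method would score each candidate (e.g. by edge sharpness)
# rather than eyeballing which one "looks in focus".
candidates = {s: restoration.richardson_lucy(blurred, gaussian_psf(s), 30)
              for s in (1.0, 2.0, 3.0)}
```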

Arjuna34

Why didn’t they use deconvolution technology when they realized the Hubble telescope was focused on the wrong point? I mean, yeah, sure you want to fix the thing to get it spot-on without having to throw the data through some sort of algorithm, but in the meantime, why couldn’t they just extrapolate it? Presumably, they knew all the data regarding the blurring, or else they would not have been able to create the optical fix for it as they did.

They did. Good optics are a better solution.

I once read a mystery novel by a celebrity author where the identity of the murderer was determined by viewing a series of photos on a contact sheet. A problem with this identification came about when it was realized that the murderer was LEFT-handed, and the person in the pictures was RIGHT-handed. So they let the person in the photos off the hook. Later they realized that the negatives were printed backwards on the contact sheet.

Any darkroom tech would have noticed that right away by reading the edgeprint info on the negs.

And it was a real good book till then.