What's the best way to artificially "focus" unfocused digital photos?

Ah, memories of Paris in the 90s: my wife leaning out a garden apartment window looking happy and beautiful, and me not knowing that with an autofocus camera you have to wait a moment for the optics to focus before rattling off that perfect shot.

The data just isn’t there, right? The edge-sharpening routines I’ve used have always come out looking weird (for these purposes).

Has anyone confronted a similar problem?

Thanks.

More or less you are right - the data isn’t there.

The best (and only) thing I’ve seen is:

It shows examples which, while impressive, still probably won’t be what you want.

Unsharp Mask can improve mild focus errors.
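For what it’s worth, unsharp masking is simple enough to sketch in a few lines of numpy. This is a toy version, not any editor’s actual implementation: a box blur stands in for the Gaussian most editors use, and the function name is made up for illustration.

```python
import numpy as np

def unsharp_mask(image, radius=1, amount=1.0):
    """Classic unsharp mask: add back the difference between the image
    and a blurred copy, exaggerating edges. A box blur stands in here
    for the Gaussian that real editors typically use."""
    k = 2 * radius + 1
    # simple box blur with edge-replicated padding
    pad = np.pad(image, radius, mode='edge')
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    return image + amount * (image - blurred)
```

On a step edge this produces the characteristic overshoot (values pushed beyond the original range), which is exactly the edge emphasis that can look "weird" when pushed too hard.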

Is the problem definitely focus, or could it be subject motion blur or camera shake? Those are harder to deal with.

You can do a better job of de-blurring if you know what caused the blurring. Out-of-focus will cause points in the image to be blurred into circles, while camera motion will cause blurring in the direction of motion, and not in the other direction. Those de-blurred examples are working with a single type of blur that extends over the whole image. You can see in the software screen-shot that they selected “Out of focus blur”, and a couple of parameters for that type of blurring (“radius” and “Smooth”).

Even if they didn’t initially know what caused the blurring, it could probably be determined from the image, or just by trial-and-error.

If your wife is moving, with different parts moving in different directions, you’d have to have a different de-blurring function in different parts of the image, possibly varying continuously across the image. That would be much harder to do, both to determine the type of blurring at different points across the image, and then to remove it.
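The difference between the two blur types is easy to see if you write down their point-spread functions (PSFs). A hedged numpy sketch, with made-up function names and deliberately idealized shapes (real lens PSFs are messier): out-of-focus blur spreads each point into a disc, while linear camera motion smears it into a line.

```python
import numpy as np

def disk_psf(radius, size=15):
    """Out-of-focus blur: each point spreads into a uniform disc."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1,
                    -(size // 2):size // 2 + 1]
    psf = (x**2 + y**2 <= radius**2).astype(float)
    return psf / psf.sum()          # normalize so brightness is preserved

def motion_psf(length, size=15):
    """Horizontal camera motion: each point smears into a line segment."""
    psf = np.zeros((size, size))
    mid = size // 2
    psf[mid, mid - length // 2: mid - length // 2 + length] = 1.0
    return psf / psf.sum()
```

A deblurring tool that assumes the disc shape (like the "Out of focus blur" option in that screenshot) will do poorly on motion blur, and vice versa, which is why knowing the cause matters.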

The Wiener filter was invented in the 1940s. It, or one of several related filters, was used to produce the images in the linked-to article. I’ve never used such a thing but my job made me aware of them two decades ago. I’m afraid that losses to my memory (or wits in general) during those decades may have introduced errors or misconceptions in the following brief comments. :dubious: (Among the more complicated relatives of the Wiener filter is the Kálmán filter, but I knew little or nothing about it even two decades ago.)

Such filters have three limitations: (1) the blurring function must be known, or deduced from the blurred image; (2) the filtering is performed in the Fourier domain, which is time-consuming and inconvenient. (Do programs like Photoshop even offer Fourier transform as an option?)

Limitation (3) is noise. In the absence of noise, removing blur would be trivial – you’d just apply the inverse filter of the blurring filter! In fact it is only the coping with such noise that makes the Wiener filter and its relatives non-trivial.

The first image pair in the linked-to article shows spectacularly good deblurring, but (as the article states) the “blurring” in the input image was simulated digitally, so that the only noise was the arithmetic noise introduced when each “blurred” pixel component was rounded to a fixed-width integer. And I’ll bet that “blurred” image didn’t use the 8-bit components of typical 24-bit images, but rather components of, perhaps, 16 bits.
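The Wiener recipe described above is only a few lines in numpy. A minimal sketch, not production code: `k` here is a guessed noise-to-signal power ratio, and with `k = 0` it degenerates into the naive inverse filter, which blows up wherever the blur’s frequency response is near zero.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Wiener deconvolution: an inverse filter damped by a noise term.

    H is the blur's frequency response; where |H| is large the filter
    behaves like 1/H, and where |H| is small the +k term keeps it from
    amplifying noise without bound."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)   # the Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

On a noiseless, digitally blurred image (like that first image pair) a tiny `k` recovers the original almost exactly; on a real photo the noise forces a larger `k` and a softer result.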

I know GIMP has a plugin for that. It also has a deblurring plugin, but, as you say, you need a lot of information about how the image was blurred for it to do any good. I personally consider it rather useless.

I’m not entirely sure why filtering in the Fourier domain would be time-consuming. You convert the whole image to a Fourier representation, do the work, then convert it back. It’s about as quick as any other photo-editing operation.

A suggestion. One cool thing I’ve found to do with photos where the subject or view is neat/meaningful but the picture quality is somewhat (or very) lacking is to use those various photo-editing effects to go in the opposite direction. Make them sepia-tone. Or use the “oil painting” effect. Things like that. A couple of my favorite photos are ones where I went all artsy-fartsy with the effects. You generally don’t need a very good photo for that to work out well. And as a bonus you get to show off your artistic, creative side (rather than admitting you don’t know how to use a camera properly :slight_smile: ).

^ In my opinion, that almost always makes a bad picture worse, but if you’re happy with it, that’s all that matters.

The best deblurring results I’ve seen were from that Smart Deblur program linked in DataX’s post. However, it leaves much to be desired if you want an aesthetically pleasing result and are not just trying to extract information from the picture. I mean, it is damned amazing what it can do, and if your focus is only off by a couple of inches, it may produce an aesthetically acceptable result. But if it’s a severe back-focusing issue (like the camera focused on the background instead of the foreground subject), you get a lot of funky circular artifacts in the picture.

The technical terms needed are convolution and deconvolution. Many things can be described as a convolution, and focus blur is one.

The reason Fourier space is useful is the convolution theorem: if you take the Fourier transform of the picture, take the Fourier transform of the convolution, multiply the two, and then take the inverse Fourier transform (which turns out to be essentially the same as taking another Fourier transform), you get the same result as applying the convolution to the image directly. It also means that deconvolution functions can be described and applied in the same manner. (A deconvolution is still a convolution; it is just the inverse of some other convolution. Knowing what your focus-blur convolution is - which is best done by knowing what the parameters were, but can be estimated by some more advanced techniques - allows the deconvolution to be created.)

Doing all of this in Fourier space makes things vastly easier. A discrete Fourier transform was once a painful and expensive computational operation, but for modern hardware it is trivial, and with generalised FFT algorithms that can manage arbitrary-sized images it has become quite routine. There is a lot of devil in the details, though: naive application of FFTs yields less-than-perfect results, and while creating the FFT of the convolution is mathematically easy to describe, there is again enough devil in the details that the results are never perfect.
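The transform-multiply-transform-back step described above is nearly a one-liner in numpy. A small sketch (the plain FFT gives you *circular* convolution, wrapping around the image edges, which is one of those devils in the details):

```python
import numpy as np

def fft_convolve(image, kernel):
    """Circular convolution via the convolution theorem:
    pointwise multiplication in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kernel, s=image.shape)))
```

A deconvolution is then the same operation with the pointwise *division* by the kernel’s transform (suitably regularized, as in the Wiener filter discussed earlier).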

So, this is how I do it. You need to get something like Photoshop or GIMP. You carefully use the “magic wand” or freehand select tool to select your wife. Then, use unsharpen or deblur or whatever filter can give the appearance of “sharp focus”. Then, while she is still selected in the image, invert the selection so that the remainder of the image is now selected. Blur that a little at a time until you are happy with the results.

This will trick the mind’s eye into thinking that she is in focus and the background is not, and your brain takes care of the rest.
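That select / sharpen / invert / blur workflow amounts to a mask composite. A rough numpy sketch of the idea, with a crude box blur standing in for the editor’s blur tool and a made-up function name:

```python
import numpy as np

def composite_focus(image, subject_mask, blur_radius=2):
    """Blur everything outside the subject mask, then paste the
    untouched subject back in - a rough version of the
    select / invert / blur trick described above."""
    r = blur_radius
    k = 2 * r + 1
    pad = np.pad(image, r, mode='edge')
    blurred = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    # keep the subject as-is, use the blurred version elsewhere
    return np.where(subject_mask, image, blurred)
```

In a real editor you would sharpen the masked subject first; here the point is just the composite that sells the shallow depth-of-field illusion.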

All of these are good solutions, but to simplify things: a combination of sharpening to force edges on things, then blurring to reduce the artificial edging, is about the only way to save a blurry photo. I don’t think you need to get into Fourier analysis, however much that technique might underlie the tool function. :slight_smile:

Sometimes one pass will do it, other times a series of different types of sharpening followed by blur will be better. All you can do is master the various sharp/blur tools so you understand exactly what their effects are, then apply them globally or in selected regions. The Undo key will be your best friend.

The Kalman filter would be great here, actually. The downside is that a Kalman filter is a ‘learning’ filter. It trains on multiple instances of noise.

For images, you’d need to have exactly the same kind of blur across multiple photos and preferably some kind of ‘training’ set of data to let the filter learn how to remove the noise.

With just a single photo, the Kalman filter is basically the same as a Wiener filter.

Even with multiple photos to deblur, unless the blurring can be mathematically categorized as the same statistically, a Kalman filter isn’t going to help.

I have no idea if there’s an explicit transform option but almost certainly Fourier transforms and wavelet transforms are used under the hood.

Fourier, in particular, is used to speed up many computations, as the Fast Fourier Transform (FFT) is perhaps the greatest computational development of the last century.

Everybody’s phone certainly has at least one, and probably more, DSP chips inside, which perform FFTs all the time.

Can I hire you?*

*Seriously, PM me if you can.

Tools like edge sharpeners and blurring don’t tend to be applied in Fourier space. Here the convolution is applied directly to the image. In their simplest form these apply a kernel - a matrix that describes the contribution of each pixel in the region around the target pixel to the new value of that pixel. Different kernels can blur, sharpen or do other interesting effects.

What the focus fixing deconvolutions do is much more sophisticated. They start with the a-priori knowledge that the blur to be sharpened is due to an out of focus lens (for instance). This is important, as the nature of the blur is very different depending upon the reason. It may all look like ordinary blur to you, but it isn’t. The original information can still be largely there, and recoverable.

Depending upon the exact design of the lens, the out of focus blur is different. The lens applies a convolution to the image. In image terms, the value of every pixel is spread into every other pixel by a function that depends upon the distance the object for that pixel is from the lens, the lens design (focal length, aberration characteristics, f-number in use, vignetting parameters, and so on) and even on the design of the sensor. But if it is possible to capture all that information you can exactly understand how the blur was created. If you can do that you can mathematically create an inverse function - the deconvolution. (One critical number is not easily recoverable - we don’t always know the various distances from the lens the various objects being photographed were, so these need to be estimated)

It isn’t perfect for a host of reasons. septimus gave a good overview of some issues. Noise is the big problem. Noise can be due to thermal/quantum noise in the sensors, un-modelled issues in the camera (like light scattering), and critically - quantisation noise due to the limited bit depth of the image. Quantisation of the pixel values means that you really have lost information - information that can’t be estimated or otherwise recovered - and thus the deconvolution will be less than perfect. Worse, the deconvolutions are not all that numerically stable, and noise can (and will) induce significant artefacts in the final result. So much so that you need to limit the bandwidth to keep things under control. The sharper the final result you want, the worse these artefacts tend to become, with ringing around objects and other objectionable issues.

The really neat de-blur systems try to estimate the lens parameters, and to estimate the focus parameters in use when the photo was taken. Similarly they will try to estimate motion blur parameters - all of the above applies to motion blur in much the same manner.

The bottom line is that the specialised systems that perform out-of-focus recovery are actually restoring real information that has been spread around the image, but are limited by real-world constraints. Simple sharpen tools in photo-editors just apply a naive edge sharpen and do not recover any information. They are more an artistic mechanism than a recovery one.

Is it possible to use multiple overlapping photos to restore sharpness in software? I thought that this was something you could do with frames of movie film like 8mm. You have multiple stills of the same but slightly different image. I thought there was software that could use the redundancy somehow (no idea how exactly - magic?) to restore resolution and sharpness to get a few good stills from multiple 8mm frames. Let me know if I need to lay off the modeling glue.

I think this is how you can eliminate noise if you have multiple images of the same subject. The subject is the same in all images, but the noise is different between images, so you can average the noise out.

Also, if the object (or camera) is in motion, you can increase the effective resolution by sampling across multiple frames, because the imposition of the image on the grid of pixels is different.
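The noise-averaging part is easy to demonstrate: stack N aligned frames and the noise standard deviation drops by roughly sqrt(N), because the scene is the same in every frame while the noise is independent. A toy numpy sketch with simulated Gaussian noise (the super-resolution trick with sub-pixel shifts is considerably more involved):

```python
import numpy as np

def stack_frames(frames):
    """Average aligned frames of the same scene. The signal is identical
    in each frame but the noise is independent, so averaging N frames
    reduces the noise standard deviation by about sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)
```

This is essentially what astrophotographers do when they stack dozens of exposures of the same object.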

The deconvolution process seems slightly analogous to this ‘de-mixing’ experiment - what appears to be a hopelessly mixed-up mess actually contains most of the original information, as long as you know enough about the process that mixed it up to be able to apply the exact reverse.