Deconvolving diffraction spikes in Webb images

I’ve wondered about this in general for decades, and especially recently, as Webb has come online and we see photos with those six diffraction spikes plus the two smaller ones. There is information in those spikes. They provide higher-resolution position and intensity information for the brighter stars. Wouldn’t some kind of iterative deconvolution that processes those spikes back into more intense star points give an image closer to the truth?

It’s not just that the spikes are in the way, though I guess they are a little bit. It’s that they are obviously very sharply resolved indicators of the stars that cause them. They’re not just a minor nuisance, they’re valuable in their own right.

Computationally, this is doable, right?

It’s easy to write equations for deconvolution, but in practice there are lots and lots of ways for data to be imperfect, and even very small imperfections can spell disaster for an attempted deconvolution. For instance, the photon count in each pixel of an image is always an integer, which means you have limited resolution in determining intensity: for a deconvolution algorithm, the difference between an intensity corresponding to 10.1 photons per pixel and one corresponding to 10.4 photons per pixel might matter.
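As a quick illustration of that last point, here is a minimal sketch (assuming a simple Poisson photon-counting model; the numbers are just the ones above) showing that a 0.3-photon difference in true intensity is invisible in any single exposure:

```python
import numpy as np

# Hypothetical numbers from the paragraph above: two true per-pixel intensities
# that a deconvolution algorithm might need to tell apart.
rate_a, rate_b = 10.1, 10.4          # expected photons per pixel per exposure

rng = np.random.default_rng(0)
n_exposures = 10_000

# Recorded counts are integers drawn (to good approximation) from a Poisson
# distribution, so each exposure carries roughly sqrt(10), about 3.2, photons of shot noise.
counts_a = rng.poisson(rate_a, n_exposures)
counts_b = rng.poisson(rate_b, n_exposures)

print("single-exposure scatter:", counts_a.std())                     # about 3.2 photons
print("difference of the means:", counts_b.mean() - counts_a.mean())  # about 0.3 photons
# The 0.3-photon difference only emerges after averaging thousands of exposures;
# no single frame can resolve it.
```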

I don’t know if the following is a useful way of framing my question, but I’ll give it a try:

Suppose this deconvolution is implemented the way many effects in photo editing software are implemented, with a slider giving a single degree of freedom ranging from zero strength to obviously too much strength. My question then becomes: what position of the slider yields images closest to what a vastly superior telescope would see? By “vastly superior” I mean no supports and no mirror segmentation, so no diffraction spikes, and also a much bigger aperture, zero optical imperfections, and zero aberrations.
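One way to make that concrete in simulation (everything here is my own toy setup, not anything JWST actually does): generate a known ground truth, blur it with a known PSF, add photon noise, compute one deconvolved version, and let the slider blend from the raw image towards, and past, the deconvolved one, asking which strength lands closest to the truth.

```python
import numpy as np

# Toy version of the "slider" question in one dimension.
rng = np.random.default_rng(1)
n = 256
truth = np.zeros(n)
truth[[40, 90, 91, 200]] = [50.0, 20.0, 20.0, 5.0]   # a few point sources

x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))                # transfer function of the blur

blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))
observed = rng.poisson(np.clip(blurred, 0, None)).astype(float)   # integer photon counts

# A single Wiener-style deconvolution (regularisation value chosen arbitrarily).
G = np.conj(H) / (np.abs(H) ** 2 + 1e-2)
deconvolved = np.real(np.fft.ifft(np.fft.fft(observed) * G))

# Sweep the slider and ask which strength lands closest to the truth.
for strength in [0.0, 0.25, 0.5, 0.75, 1.0, 1.5]:
    estimate = (1 - strength) * observed + strength * deconvolved
    print(f"strength {strength:4.2f}: mse vs truth = {np.mean((estimate - truth)**2):8.2f}")
# On toy data like this the best setting is typically not strength 0: the raw
# image is known to be blurred, so some amount of correction moves it towards truth.
```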

There’s a similar issue with the photo-processing effect usually called “sharpness”, which exaggerates brightness transitions wherever the image has steep brightness gradients. In one sense, the “sharpness” slider lets the user further degrade the image by shifting the data further from their raw state (to exploit a quirk of the human visual system regarding perceived sharpness), but in another sense, given that the optical system that produced the image has a known blur, some small application of the “sharpness” slider ought to get closer to the truth.
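For reference, that “sharpness” slider is typically implemented as unsharp masking, roughly like the sketch below (the function and parameter names are mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of unsharp masking: the slider is the `amount` applied to the
# difference between the image and a blurred copy of itself.
def unsharp_mask(image, sigma=1.5, amount=0.5):
    blurred = gaussian_filter(image, sigma)
    return image + amount * (image - blurred)    # amount = 0 leaves the data untouched

# A hard edge: the over/undershoot added on either side is what the eye reads
# as "sharper", even though no optical blur has actually been removed.
edge = np.repeat([10.0, 20.0], 8)
print(unsharp_mask(edge, amount=0.8).round(2))
```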

Can we state these things in terms of the setting that comes closest to truth? And is it necessarily zero?

The thing is, we don’t know what the actual truth is. Effectively, the data in the area covered by those spikes got partially munged. We’d be introducing human-created artifacts into the data no matter what settings we chose for the correction. That’s bad science.

For the most part, JWST researchers are not interested in those bright stars causing the spikes. They’re all nearby objects and we have different telescopes much better suited for studying them. It’s straightforward to orient JWST so the spikes do not occlude the actual science targets in each image.

Couldn’t you say the same thing about the dark image subtraction and bad pixel erasure that they must be doing?
But I take your point. Certainly, messing with pixel data sets a pretty high bar for justification.

Yes, this by itself may be reason enough not to do what I’m thinking about.

There is a lot of interesting stuff to think about.

The sharpening kernels applied in photo editors don’t really sharpen an image in any sense that removes blur introduced by optical imperfections. They are local in effect, just tweaking the contrast across edges. Optical imperfections spread light out across the entire field, and to reverse them, in principle, you need the entire field to take part in the deconvolution.
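To illustrate the locality (the kernel below is a common 3x3 sharpening kernel; the single-pixel frame is just a toy):

```python
import numpy as np
from scipy.ndimage import convolve

# A typical 3x3 sharpening kernel only ever mixes a pixel with its
# immediate neighbours.
sharpen_3x3 = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=float)

image = np.zeros((9, 9))
image[4, 4] = 1.0                      # a single bright pixel

print(convolve(image, sharpen_3x3))    # only the 3x3 neighbourhood changes
# A diffraction spike, by contrast, carries light from this pixel across the
# whole frame, so undoing it needs a convolution with frame-sized support.
```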

If you had a perfect model of the optical system, you could, in principle, calculate the precise manner in which the diffraction spikes were created. But reversing that calculation isn’t the same thing, for several reasons.
First, we have no phase information in the recorded image.
Second, we don’t have perfect spatial information in the image: bright stars bleed into adjacent pixels.
Third, the dynamic range of the pixels is much smaller than the dynamic range of the field.
Fourth, the information is quantised: spatially into pixels, in value by the digital conversion, and by the discrete nature of photons.
Finally, there is a fundamental noise floor, one that astronomers routinely bump up against.

So it is all lossy.

An approach can be to iteratively estimate the contribution made by the diffraction spikes, subtract it, and refine. At each step we model the effect of the spider arms and hexagonal mirror segments on what we hope is an improving estimate of the actual field. Something akin to a maximum entropy technique might work.
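A minimal sketch of that estimate-and-refine loop, using Richardson-Lucy iteration as the concrete update rule (my choice of a related standard technique, not necessarily what one would use in practice) and a made-up cross-shaped PSF standing in for the spike pattern:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=100):
    """Iteratively refine an estimate of the field: reconvolve the current
    estimate with the known PSF, compare with what was actually recorded,
    and push the correction back through the PSF."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        model = fftconvolve(estimate, psf, mode="same")           # what the telescope would record
        ratio = observed / np.maximum(model, 1e-12)               # where the model over/under-predicts
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")   # multiplicative correction
    return estimate

# Made-up cross-shaped PSF standing in for spider-arm diffraction spikes.
psf = np.zeros((63, 63))
psf[31, :] = psf[:, 31] = 1.0
psf[31, 31] = 20.0
psf /= psf.sum()

rng = np.random.default_rng(2)
truth = np.zeros((128, 128))
truth[40, 60] = 3000.0     # a bright star that will grow spikes
truth[40, 90] = 200.0      # a fainter source sitting under one of those spikes

observed = rng.poisson(np.clip(fftconvolve(truth, psf, mode="same"), 0, None)).astype(float)
restored = richardson_lucy(observed, psf)
print(restored[38:43, 88:93].round(1))   # the faint source should re-emerge above the spike
```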

The difference between diffraction artefacts and dark-field or bad-pixel elimination is that the latter are local defects. They don’t spread into adjacent pixels. They do, however, make deconvolving the diffraction even harder.
Diffraction in the optical chain, by contrast, touches every pixel. To work out what has happened you need a deconvolution in which every pixel can influence every other pixel.

The effects of quantisation of the image are enough to mean that you can only go so far. You really want to know how much light was diffracted into every pixel from every object in the field of view, but the quantisation of the data means you can’t have it. Most of that information is below the quantisation limits, with probably no photons at all received to give you an estimate. So the deconvolution is going to suffer from quantisation artefacts, and from clipping artefacts, and these often turn up as ringing in the spatial reconstruction.
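A small demonstration of that last point (entirely synthetic numbers): saturate a blurred star, push it through a barely regularised inverse filter, and the reconstruction oscillates around the truth instead of returning a clean point.

```python
import numpy as np

# Synthetic example: a star too bright for the detector, clipped and quantised,
# then pushed through a naive (barely regularised) inverse filter.
n = 256
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))

truth = np.zeros(n)
truth[128] = 5000.0                                   # true brightness
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))
recorded = np.clip(np.round(blurred), 0, 255)         # integer counts, saturating at 255

estimate = np.real(np.fft.ifft(np.fft.fft(recorded) * np.conj(H) / (np.abs(H) ** 2 + 1e-6)))
print(estimate[118:139].round(1))
# The clipped photons are simply gone, so the peak comes back far too low, and the
# reconstruction typically over- and under-shoots (rings) either side of the star.
```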

The usual starting point is to take a 2D FFT of your image, similarly take frequency-space versions of your diffraction pattern, and work in that space. This is not only computationally much nicer, it is also a natural space to work in. Trying to ameliorate optical issues in image (Cartesian) space isn’t a good fit, and you end up thinking in terms of local fixes (like sharpening edges) rather than the big picture.
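In code, that frequency-space machinery amounts to something like the sketch below (numpy only; the same-shape-PSF assumption and the `noise_power` regularisation term are simplifications of mine):

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_power=1e-3):
    """Frequency-space deconvolution sketch. `psf` is assumed to be the same
    shape as `image` and centred on the frame; `noise_power` damps the inverse
    wherever the optical transfer function is too weak to trust."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))                  # PSF -> optical transfer function
    filt = np.conj(otf) / (np.abs(otf) ** 2 + noise_power)    # regularised inverse of the OTF
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))
```

Each frequency is handled by a single multiply, so the whole field participates automatically; the hard part is choosing the damping, which is where all the earlier caveats about noise, quantisation and clipping come back in.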