Tell me about compressed sensing (Wired article inspired)

Last month’s issue of Wired had a short article about “compressed sensing”, and it raised a lot more questions than it answered. I was reading it and imagining that it could be used to up-res SD footage to HD, low-res scans to high-res, etc.

Is this the real deal, or another Wired over-sell (see: Push media)? Any Dopers work in this area?

Nobody?

Perhaps I didn’t choose an interesting enough subject line - this stuff is fascinating and so counter-intuitive that it would appear to violate the laws of physics. Seriously! Look at this example. This has a very high “what the FUCK?” factor to it.

I can see applications for this in radar, sonar, and lidar. Am I letting my imagination run wild here, or is this something that is going to get these folks a Nobel?

Link to resource collection at Rice

It’s kinda neat, but it’s not violating the laws of physics or anything.

If someone gave you a blurry, pixelated image of a car, you could look at the blurry pixels and say, “yeah, this is really supposed to be a straight line across here,” then draw the car with nice clean edges, color it in the same color as the original image, and get a very detailed image of a car. Mathematically, compressed sensing is doing the same sort of thing.

I don’t know how useful this is for something like an MRI, though. Taking the car example again, if your car had a small scratch that got lost in the pixelation, then as you draw a nice straight line where the car fender should be, you aren’t going to mysteriously recover the small scratch from the pixelation and draw it in too. You’ll draw a nice straight line without the scratch.

I personally think that this sort of fill-in-the-gaps algorithm, while it does make for a more detailed image, isn’t necessarily the sort of thing you’d want for an MRI. It’s great for trying to recover a photograph, but I think it could lead to a misleading and erroneous diagnosis if used for a medical scan.
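
To make the scratch point concrete, here’s a toy 1-D sketch I knocked together (my own illustration in Python, numpy/scipy only; not anyone’s real CS code). It reconstructs a stepped signal from every 5th sample by minimizing total variation, i.e. the sum of |x[i+1] - x[i]|, which is a 1-D version of the “prefer clean edges” assumption. A one-sample “scratch” that falls between the kept samples simply vanishes:

```python
# Toy illustration (my own, in Python): total-variation reconstruction of a
# stepped 1-D "fender" from sparse samples. numpy/scipy only.
import numpy as np
from scipy.optimize import linprog

n = 60
x_true = np.concatenate([np.zeros(20), np.ones(25), 0.3 * np.ones(15)])
x_true[32] += 0.5                       # the "scratch": a one-sample blip

keep = np.arange(0, n, 5)               # keep every 5th sample; index 32 isn't kept
S = np.eye(n)[keep]                     # selection matrix: S @ x = kept samples

D = np.diff(np.eye(n), axis=0)          # (n-1) x n differences: (D @ x)[i] = x[i+1] - x[i]
m = D.shape[0]

# Minimize sum(t) over (x, t) subject to -t <= D x <= t and S x = samples.
# This is total-variation minimization written as a linear program.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[D, -np.eye(m)], [-D, -np.eye(m)]])
b_ub = np.zeros(2 * m)
A_eq = np.hstack([S, np.zeros((len(keep), m))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=x_true[keep],
              bounds=[(None, None)] * n + [(0, None)] * m)

x_hat = res.x[:n]
print("reconstructed value at the scratch:", round(x_hat[32], 2))  # ~1.0
print("true value at the scratch:        ", x_true[32])            # 1.5
```

The step edges come back clean, but the blip is reconstructed as a flat 1.0, because between two kept samples that both read 1.0 the flat line is the lowest-variation explanation. The information just isn’t in the samples.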

ETA: Forgot to say, using it to up-res SD footage to HD (as mentioned in the OP) is something it would do fairly well. It does seem like it would be useful; I just don’t like the way they kept mentioning MRIs in the article.

Reading some stuff in the second link, it does seem to throw Nyquist under the bus though.

Yeah, it seemed like the writer focused on a “sexy” application, not necessarily the most relevant or useful one.

Look at this article on “single-pixel cameras” from IEEE Signal Processing Magazine, and the “CS In a Nutshell” sidebar on the 3rd page. It’d be interesting to hear your take.

Nyquist limits are in some sense the worst case. If all you know is that you have a bandlimited signal, you have to sample at the Nyquist rate in order to be able to reconstruct it. On the other hand, if you have some information about the structure of that signal, you can do better. That’s the deal with compressed sensing: it assumes your signal has a certain form (in practice, that it’s sparse in some basis). If that assumption is met, it works very well. If not, it sucks.
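
If you want to see that assumption doing the work, here’s a minimal sketch (my own Python, numpy/scipy only, not any official CS package). It recovers a length-100 signal with only 5 nonzero entries from 40 random measurements by basis pursuit: minimize the L1 norm of x subject to Ax = b, written out as a linear program:

```python
# Minimal basis-pursuit sketch (my own Python; numpy/scipy only, no special
# CS library). Recover a 5-sparse length-100 signal from 40 random measurements
# by minimizing ||x||_1 subject to A x = b, cast as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                   # signal length, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))        # random sensing matrix
b = A @ x_true                         # the m measurements we actually keep

# Variables (x, t): minimize sum(t) with -t <= x <= t and A x = b.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])   # encodes  x - t <= 0  and  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)

x_hat = res.x[:n]
print("max recovery error:", np.abs(x_hat - x_true).max())  # tiny if sparsity holds
```

Swap x_true for a dense random vector and the recovery falls apart: that’s the “if not, it sucks” part. The measurements are the same either way; only the sparsity assumption makes 40 numbers enough to pin down 100.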

These are not perfect reconstructions, though, only best estimates. The Nyquist rate guarantees perfect reconstruction of a bandlimited signal, but it doesn’t say anything about how close you can get with a lower sample rate. They are using some tricks to cut out the extra frequency information and keep only the content that gives a reasonable reconstruction of the original material. Their method has the side effect of producing a “clean” final image, which can give the impression of perfect upscaling or of retrieving data where there is none in the source. It’s really just a neat sampling/reconstruction algorithm; I wouldn’t think Nyquist is in any jeopardy.

I’d love to play with this, but all I’ve been able to find is code that I’m not able to use.
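
ETA: turns out you don’t need their code for toy problems; any generic convex solver will do. A few lines, assuming you have the third-party cvxpy package installed:

```python
# Quick-and-dirty basis pursuit with a generic convex solver.
# Assumes the third-party cvxpy package is installed (pip install cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, k = 128, 48, 6
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x_true

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [A @ x == b]).solve()
print("max recovery error:", np.max(np.abs(x.value - x_true)))
```

Same basis-pursuit idea as the linear-program version above, just letting the solver handle the L1 plumbing.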

So it’ll get an image up to a realistic level of detail, but not necessarily the correct details.

Exactly. Just look at the sample image that gaffa posted.

I can think of millions of different original images that have the same pixel values at the sampled points but that this technique would not be able to reconstruct. That example is extremely contrived.

It is a standard test image, according to the article. The Wired article had an example with a photograph of President Obama, though it might have been a bullshit example rather than a genuine one.

Interesting article. Maybe soon we really will be able to ‘Enhance!’ stuff like they do on TV shows.

With ref to the OP, it’s fair to say that the Wired article is simplified and perhaps very slightly over-optimistic. This is because it’s not a factual guide to cutting-edge technology and its potential applications. It’s a magazine on sale in the high street, i.e. a device to deliver you to advertisers. It’s not intended to achieve any other purpose. So the more ‘Gee whizz!’ stuff they can build into an article, the better.

I’m sure it was a standard test image, but it was one with specific properties that made it reconstructable through their technique.

The Obama image was definitely a bullshit example.

Here is a page that has some actual images reconstructed using this technique.

http://dsp.rice.edu/cscamera

How different is this algorithm from the existing interpolation algorithms used in scaling SD to HD video?

How many of those images would actually arise in practice?

Wired is in the business of hype and bullshit, but people who know what’s what are really excited about compressed sensing because it works much better than older techniques for certain problems. Check out this page on using compressed sensing for facial recognition, which is a very hard problem that hadn’t seen any big progress in a long time until these guys came along. The writeup in Communications of the ACM, which is a big academic publication, is really good.

The image they used is probably the least likely to appear in practice of all possible images with the same pixel values at the sampling points. How often do real images consist solely of overlapping, uniformly shaded, perfect ellipses?