Photoshop Content-Aware Fill--Spooky

There’s a new feature coming out with the next version, and it seems to be a form of sorcery.

If you retain any belief that any image you see depicts an actual moment that occurred in reality, you can give up on that for good.

Holy shit.

…Holy crap on a crap.

Want. So bad.

Want…in a video application (Photoshop’s video support is a sick joke). Put this in a tool like Combustion and I’m a happy happy boy.

When I heard CS5 was looming, I said, nah, I’m totally happy with CS4.

Then I saw a demo of Content-Aware.

I gotsta get me some CS5!

Well, that is a reason to upgrade. There are so many applications where this will save hours of fiddling around.

OMFG, the enormous amount of time I’ve spent doing this type of thing manually. I can’t wait.

:eek:

Even though I think it’s astonishingly amazing and useful, it will take a lot of the fun out of retouching if absolutely anybody can do those mysterious tricks now.

I must say, this is pretty damned exciting. I was pretty happy with the auto-align/auto-blend and advanced cloning tools (heal brush) we’ve seen in the last couple iterations of Photoshop, but this seems to build significantly on those. Can’t wait to try them out and see how they work in real-life situations.

Anyone, that is, who can shell out the $$$.

Wow. Amazing. Think I may need to eat some of my advertising budget to get my hands on that…

Christ on a bike, how the hell does that work?

Wonder if it’s the same kinda principle as the clone tool but randomised according to the background.
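If it really is something like that, the crudest possible version would be: for every missing pixel, grab a random nearby pixel from outside the hole. Here’s a toy Python/numpy sketch of that guess (entirely my speculation, nothing to do with Adobe’s actual algorithm; the function name, the radius, everything is made up):

```python
import numpy as np

def random_clone_fill(image, mask, radius=15, seed=0):
    """Fill masked (hole) pixels by cloning a random nearby pixel
    from outside the hole. A toy guess, not Adobe's algorithm."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape[:2]
    for y, x in zip(*np.where(mask)):
        for _ in range(100):  # give up after 100 random tries
            sy = y + rng.integers(-radius, radius + 1)
            sx = x + rng.integers(-radius, radius + 1)
            if 0 <= sy < h and 0 <= sx < w and not mask[sy, sx]:
                out[y, x] = image[sy, sx]
                break
    return out
```

Of course, per-pixel random cloning like this would just turn textured areas into noise; whatever Adobe is actually doing clearly keeps whole patches coherent.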

WANT.

I was actually not so impressed with the one where the road was cut out of the desert scene. The program clearly duplicated the bushes from the area to the right of the patch, so the retouching job was glaringly obvious. (You’ll notice the demo cut away from that one really quickly.)

Makes me wish I had a job in the graphics field. I only use PShop for Fark contests, so I don’t know if I could justify to myself the $$$.

Some of that functionality has been present since Photoshop 7.0 with the heal brush. Have you ever tried using it? It’s like a smart version of the clone tool. If you use it without a source (an area to clone from), it looks at the surrounding pixels, basically figures out what you’re trying to erase or cover up (usually dust spots and things of that nature), and smoothly fills in the pixels for you. If you use it with a source (in other words, like a clone stamp), it calculates a blend from the source material and the target area, smoothing out the relationship between the two.
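To make that concrete, the sourceless behaviour is in the same spirit as diffusion inpainting: let the surrounding pixels bleed into the hole until it smooths over. A toy numpy sketch of that idea (my illustration, not Adobe’s code; the real heal brush also preserves texture, which this doesn’t):

```python
import numpy as np

def diffusion_heal(image, mask, iters=200):
    """Smoothly fill masked pixels from their surroundings by
    repeatedly replacing each hole pixel with the average of its
    four neighbours. Toy illustration only."""
    out = image.astype(float).copy()
    for _ in range(iters):
        # np.roll wraps at the image border -- fine for a toy.
        avg = (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0) +
               np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 4.0
        out[mask] = avg[mask]  # only the hole changes; the rest stays fixed
    return out
```

That gets you a smooth dust-spot-style fill; the interesting part of content-aware fill is presumably everything this leaves out.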

It’s a very useful tool, and it looks like this content-aware fill is building on that idea. I’m always a little bit wary, as obviously they’re not going to show a video where content-awareness really misses the mark (and anybody who’s used the heal brush knows it misses almost as often as it hits), so I don’t know how often it’ll work or at least be a decent starting point, and how often it’ll totally futz up the image.

That’s what they said when Photoshop 1.0 came out. And they probably said it when the first camera was invented.

And I’m not necessarily implying that I disagree.

But there was no retouching job. This tool is not meant to totally automate the process, just to speed it up – and holy crap, does it speed it up.

The 45 minutes of patch-and-match that you’d have to do to get to the point that it was ready to finesse with pattern breaking is just gone. “Oh, let me get that for you.”

One of the most fiddly things about filling in the blanks in a situation like that is matching gradient lighting. You can reproduce the perspective by copying from adjacent areas, but the lighting will be jarringly off. To fit it in, you have to fiddle and fiddle with the brightness (crude), the contrast (fine), and sometimes even the hue (if you have multiple light sources) to come up with something that matches the way your brain knows the light should be attenuated in that area.
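In code terms, most of that fiddling boils down to matching the cloned patch’s statistics to the pixels bordering the hole. A tiny numpy sketch of the brightness/contrast half (my own stand-in for the eyeballing, not anything Photoshop exposes):

```python
import numpy as np

def match_lighting(patch, border):
    """Shift/scale a cloned patch so its mean (brightness, crude)
    and spread (contrast, fine) match the pixels around the hole.
    Hue would need the same treatment done per channel."""
    p = patch.astype(float)
    scale = border.std() / max(p.std(), 1e-6)  # guard against flat patches
    return np.clip((p - p.mean()) * scale + border.mean(), 0, 255)
```

Even that only handles flat lighting; a genuine gradient across the patch is where the real hand-fiddling starts.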

This feature automatically patches in textures so that the perspective makes sense, and it adjusts the lighting on the fly – the most time-consuming grunt work of this sort of edit. This is hugely impressive.

Of course after the algorithm is run there’s still work to be done. It’s not meant to do the work of a human being, just to lighten the load.

I liked the guy who said he could do it just as fast, and better, using the clone tool. I’m sure he can do it better, since they were only doing a first pass with the program. But getting to that point just as fast as the program did? I have my doubts.

Also, what may look good and impressive as a small image on a monitor may be quite a different animal when it’s printed (and enlarged) in high-rez.