If you aren’t aware, this is essentially a feature called “inpainting” that already existed in stand-alone image generators, although having it integrated into Photoshop is going to make a lot of AI-art haters’ heads explode.
Thanks for reminding me about this. I just downloaded the Beta last night after reading the post. Most of the results I’m getting are fairly uncanny valley, and there’s, of course, a learning curve with the prompts to get what you want, but I’m surprised we’re already at this stage of the game. I was so happy when AI-assisted subject-aware masking and then object-aware automated masking became a thing, and that was just a few years ago. Just that was a game-changer for me. And then all those recently became incorporated into LR. Huge time saver.
Now if only AI automated toning/editing would work well enough for me. I’ve played around with and trained various apps, but none of them does what I’d call an adequate job for my needs. I also probably shouldn’t cheer on its development too much, as these sorts of advancements will likely drop prices across the industry. On the other hand, I’m in my late 40s, late-mid-career, and I don’t really care all that much anymore. If AI really does eliminate a lot of what I do and I were younger, I’d probably carve out a niche as a film photographer and charge stupid sums to do it for the people who want retro. Business-wise, that’s the way to separate yourself from the competition, and it’s a marketable product.
On the other hand, I have no desire to ever shoot film again. Or, rather, very very little desire.
I look longingly at my old film cameras and lenses, but the show must go on, and the show is digital.
Yes, there is a bit of uncanny valley going on with current image AI, but after playing with it a while, refining my prompts, and re-generating new variations, it usually delivers a realistic version. Future updates will no doubt result in ever more realistic output.
The first 9’-long alligator I generated on my front lawn looked like a Wally Gator cartoon, but #5 looked indistinguishable from reality (proper color, tone, and even correct shadows from trees). I laughed when I asked it to replace the minivan in my driveway with a sports car, and it replaced it with a wreck that looked like it had been hit by an 18-wheeler (oh, the carnage!). But, after a few generations, I had a shiny new McLaren 750S parked in my driveway which fooled my family ('bout time you got rid of that old Sienna, Dad!).
It’s particularly good at extending photo borders to give you the aspect ratios you desire.
In a way, I kind of hope they don’t add accurate AI image processing filters—I find it relaxing to fidget with those in Lightroom.
At my age, I’m more excited than terrified about this AI. I figure I’ll be dead before it adversely affects my income. With my [bad] luck I’ll die just before AI figures out how to extend human lifespans.
It’s amazing how often I have thoughts along these lines. And not necessarily about AI. Must be an aspect of being not far from retirement age on either side.
Yes, I’m considering freezing my head so I can come back, Futurama-style, and see just where AI takes us in 100, 500, 1000 years. I just hope my Kenmore freezer lasts that long.
I earlier mentioned the term “inpainting”. What you describe here is, of course, “outpainting”. (Google for many articles and examples.)
Indeed both inpainting and outpainting are quite remarkable. The fact that this AI is now available in the industry-standard photography app makes it even more legitimate, and a sign of things to come.
Photoshop also has neural filters (with a bit of AI) that I’ve used for a while. They are also quite good.
I especially like the Depth Blur filter when it comes to those. How do you handle outpainting in the beta? It’s not the same as content-aware crop, is it? Do you have to describe what to fill it in with?
Photoshop’s Firefly (generative AI) is much more sophisticated than its Content-Aware filter (and its neural filters). Content-Aware simply samples from adjacent pixels and fills from that. It’s ok for small areas of simple scenes, but not great (often laughable).
Firefly is in a different league. It uses AI to take the entire photo into account and makes accurate fills from that. I’ve used it a few times already and it did an admirable job (~1 out of 3 looks great). It’s not perfect yet, but definitely workable at this stage.
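Adobe doesn’t publish exactly how Content-Aware works under the hood, but if you want a feel for the “fill from the surrounding pixels” idea, classical (non-generative) inpainting in OpenCV does something conceptually similar. A minimal sketch, with made-up file names:

```python
import cv2
import numpy as np

# Placeholder files: the photo, plus a mask where white marks the region to fill.
img = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 127).astype(np.uint8) * 255   # make sure the mask is clean binary

# Classical inpainting estimates each missing pixel from its known neighbors --
# roughly the spirit of Content-Aware fill, and why it struggles with big gaps.
filled = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)

cv2.imwrite("filled.jpg", filled)
```

A generative fill, by contrast, synthesizes brand-new content that matches the whole scene instead of just borrowing from the neighbors, which is why it can put a plausible table or person in a hole this approach would just smear over.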
You can extend the borders of a photo a great distance, and Firefly will fill the borders with realistic, color-correct renderings. For example, I extended a photo of the inside of a client’s restaurant ~1/3 on each side and the results looked pretty close to the actual restaurant (with new chairs, tables, people, front counter, etc.). It would fool the owners (though they’re not all that observant).
But, like any good tool, it’s only as good as the user. Garbage in, garbage out: the results are only as good as your selections and prompts. There’s a learning curve, but it’s one worth climbing for me.
It’s also great at small detail work. You just make a selection of an area you want to be changed, then tell the AI what you want to change, and presto-chango—your wish is its command! Pretty cool. There are limitations. If, for example, you select an entire face, then ask Firefly to change a particular feature, it will render a completely different face (often ugly). Just be precise with your selections (and use the erase tool on masks) and you’ll be fine.
Firefly does, however, have a problem with people (better with animals though). AI in general seems to have a problem with the number of fingers humans have, and Firefly is no different. I made a selection around my daughter’s arms in a photo where she was holding a cat (we have 6 cats) and told Firefly to replace the cat with a human baby (we were going to have fun sending the photo to her mother, saying “look what your daughter’s been up to!”).
Firefly spits out 3 versions at a time as per your prompts. Well, in the first version of the “baby replacement,” the baby looked like some sort of mutant newborn Satan’s spawn who had just slithered through raw sewage before jumping into my daughter’s arms, and her hands had 6 fingers each (admittedly, this would have been a great photo to send to Mom…but I digress). The second variation looked kind of like a human baby, but a very deformed one. But the 3rd version looked like my daughter was holding a real, non-monster baby. If at first you don’t succeed…
But, more importantly, in each version it looked like my daughter was holding “something”, even if that something was hideous: correct arm/hand placement, correct tone, balance, etc. Just keep regenerating until you get what you want (it’s fast).
Here’s a good basic tutorial on Firefly that explains how it’s better than Content-Aware.
Oh, I forgot to answer your question, “do you have to describe what to fill it in with?”
Answer: if you make a selection and generate with no prompt (the extended borders for example), Firefly will fill it in with variations of what it thinks is best for the photo. You can of course prompt it to fill with whatever you want, and it will integrate that realistically, too.
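If anyone wants to tinker with the same idea outside Photoshop, the “pad the canvas, mask the new border, then fill it (with or without a prompt)” workflow can be approximated with the open-source Stable Diffusion inpainting pipeline. To be clear, this is not Firefly and not Adobe’s code, just a rough analogue; the file name, prompt, and model checkpoint below are only examples, and it assumes you have a GPU handy:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder photo; any JPEG will do.
photo = Image.open("restaurant.jpg").convert("RGB")

# "Outpaint" setup: paste the photo onto a wider canvas, then build a mask
# that is white (= regenerate) over the new side strips and black (= keep)
# over the original pixels.
pad = photo.width // 3                                   # extend ~1/3 per side
canvas = Image.new("RGB", (photo.width + 2 * pad, photo.height), "gray")
canvas.paste(photo, (pad, 0))
mask = Image.new("L", canvas.size, 255)
mask.paste(0, (pad, 0, pad + photo.width, photo.height))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

# An empty prompt lets the model just continue the scene; a prompt steers it.
result = pipe(
    prompt="restaurant interior, tables and chairs, warm lighting",
    image=canvas.resize((768, 512)),
    mask_image=mask.resize((768, 512)),
    height=512,
    width=768,
).images[0]
result.save("outpainted.jpg")
```

Same deal as in Photoshop: leave the prompt empty and it guesses from context, give it a prompt and it fills with what you asked for.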
Thanks. Most of that I already know, but I was curious whether I was doing it correctly, as I haven’t quite been getting the results I want. I’ve been more-or-less doing it as you say, but my outpaintings tend to look fake, so I need to get better with prompting. I have noticed it having the usual AI issues with fingers. My main problem is that, so far, most of what I’ve seen just looks like a 3D object placed on top of a scene: it models the lighting and color fairly well, but something about it just seems off. But, hey, it’s early times yet.

Photoshop and Lightroom have come such a long way in the past two or three years. Like I said above, their masking features have saved me many hours of time on an edit. It’s just absolutely incredible, especially the select-a-particular-person masks. When that got integrated into Lightroom, holy cats, game changer. Now I wish that content-aware fill brush was as good as Photoshop’s, but I suppose we’ll just be getting full-on AI generative solutions once LR starts incorporating that stuff.
What do you find useful with the Neural Filters? As I said above, I do take advantage of depth blur (or just the depth map it generates) to very slightly blur the background when there was a reason I couldn’t open up my aperture (or I have second-guessed my f/stop choice). But I haven’t really played around too much with the other stuff. Most of it is fun, but a little bit silly. (Like changing the seasons in your photo, or changing people’s expressions, which can get real frightening real quick.)
Depth Blur is good (quick and easy), but this guy shows a better way. BTW, PiXimperfect is very talented and has many excellent tutorials on Adobe applications.
I just use the neural filters for fun, mainly with family snapshots. I get a kick out of making my brother bald and uglier (thankfully he doesn’t know Photoshop, so he can’t get retribution), aging my kids, beautifying my cats, and whatnot. I don’t use them on client photos. However, I will be using Firefly on client graphics, just as I’m using ChatGPT on their content—as a tool/assistant, not a replacement for me.
I’m hoping they soon incorporate advanced AI into Adobe After Effects’ roto brush. Roto Brush 2 is a vast improvement over RB 1, but rotoscoping and refining mattes is still time-consuming and not error-free. AI could make the job easy.
Yeah, I’ve been following him for a couple years now. Dude is insane in his knowledge and technique. But, even better, his teaching style is excellent.
That’s an older version of Depth Blur he’s using in that video. I find the current one much better. It has automatic subject selection (which I don’t see in that one), but what I like even more is the ability to export the depth map as a layer and do with it what I want.
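Once you’ve exported that depth map as its own layer (or grayscale file), the blur itself is simple to roll by hand. A bare-bones sketch, assuming lighter pixels mean farther away (invert the map if yours runs the other way) and made-up file names:

```python
import cv2
import numpy as np

# Placeholder files: the photo, plus the grayscale depth map exported from
# the Depth Blur neural filter.
img = cv2.imread("portrait.jpg").astype(np.float32)
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
depth = cv2.resize(depth, (img.shape[1], img.shape[0])) / 255.0

# A heavily blurred copy of the whole frame.
blurred = cv2.GaussianBlur(img, (0, 0), 12)

# Blend per pixel: the "farther" the depth map says a pixel is, the more of
# the blurred copy it gets -- a crude, manual depth-of-field falloff.
weight = depth[..., None]                    # broadcast to all 3 channels
out = img * (1.0 - weight) + blurred * weight
cv2.imwrite("fake_dof.jpg", np.clip(out, 0, 255).astype(np.uint8))
```

Nothing fancy, but it shows why having the raw depth map as a layer is so handy: you can shape the falloff however you like instead of taking the filter’s defaults.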
But it’s good to know this other technique, as I’ve never noticed the tilt/shift filter in Photoshop before. I swear, I will never be able to keep track of all the features and various ways of accomplishing the same tasks in Photoshop.
Indeed there are many ways to skin a cat (…oops, my cats just gave me a dirty look). I use nearly all the Adobe products for work and pleasure. Downloading their updates is like opening a birthday gift, though as you say, it’s hard to keep up with all the features. And kudos to Adobe for embracing AI. Great company!
People having fun with Photoshop outpainting.
How is AI going to care for people with dementia? You can’t change someone’s Depends with a chat script.
Stick the “chatscript” in a robot.
https://aparc.fsi.stanford.edu/research/impact-robots-nursing-home-care-japan
OTOH, a ChatGPT might be well-suited to carry on “conversations” with the folks with significant memory loss. Computers can have far more patience than any saint.
Perhaps it can keep them happily engaged no matter how many times it has to hear the story about Uncle Al’s carbuncles, or has to explain to the patient what day it is, what their name is, and that this thing is called a “spoon”.
We aren’t anywhere near having robots that complex, are we? Those Boston Dynamic videos are faked with CGI.
No they aren’t. There are some humorous videos using CGI from Corridor Crew, but the Boston Dynamics robot videos are completely real.
The big problem with robots is expense and battery life. Unlike digital circuitry, it is not easy to get the cost of robots down when they are filled with expensive motors and sensors made from expensive materials. There are knockoff ‘Spot’ robots now, but they are still close to $3,000 and have very limited battery life and functionality.
In the future, if robots can be mass produced on assembly lines and demand is high enough, the cost may come down more. The battery problem is not going away, though.