My post-retirement job involves a lot of graphic design (web design, social media, product marketing, etc.). I’ve used the entire Adobe suite of apps since they first hit the market, as well as other high end graphic and video apps. I know them like the back of my hand. I too was concerned with the coming of AI, but I now welcome it and use it to my advantage.
As one example, instead of searching for and buying stock photos that only come close to my vision for a particular campaign, I can use AI to create exactly what I have in mind (along with much tweaking). If anything, AI has allowed me to be more creative, not less. And my understanding is that those who provide the Adobe stock photography that Firefly uses get fairly compensated by Adobe (which they can certainly afford, since their subscriptions ain’t cheap).
Thanks Tibby, that’s an interesting and encouraging perspective.
It does seem like, in these various AI threads we’ve had recently, for every poster who raises an alarm over the rise of AI, there’s another who is much more optimistic, saying “no, AI won’t put us all out of work, it will create whole new industries, or at least industry niches”. Though I have to admit, my most optimistic imagining of new AI-related jobs up to now was of dull, creativity-free, repetitive typing of instructions into prompt bars so that AI can do the real, interesting work. You paint (or tell AI to paint, hehe) a much rosier picture.
It is a weird, transitional time in industries that are being affected by AI right now. Though I’m a web dev, I work with several graphic designers, and recently our manager passed along a message from his superiors to the designers that they shouldn’t use the new AI features that are now built into Adobe products. Or at least, if they did make use of AI assistance, keep it on the down-low. I think the fear is that if our clients get wind that we’re using AI to create their stuff, they’ll decide that they can just do the same thing in-house themselves.
also, “manipulating through visuals” is as old as “visuals” …
there is a reason Trump has an orange fake tan face and Biden looks like the skin on his forehead might tear any moment …
they try to trick (manipulate) you into thinking that they are 20 years younger than they actually are and are active, outdoorsy lumberjacks… such is the power of (traditional) image manipulation.
and don’t get me started on those filters on IG, etc., where everybody ends up with a face like the Kardashians… or postcards with incredibly blue sky, etc…
so that (desire to manipulate) does come baked into any new technology …
to be honest, I am way more worried about deep-fake vids than a hidden “send nudes” (I also read it as no-nudes) in the puppies, as video may bypass the rational hemisphere and go straight to your reptile brain (and the subsequent reactions, which may come from the same part of the brain).
Just look at the Gaza hospital explosion news debacle.
I suppose it helps if you’re severely shortsighted like I am; without my glasses I see “SEND NUDES” immediately. I have no problem with the “S”, but I do admit the “E” could do with standing out a little more.
This tool is based on Stable Diffusion 1.5, which was released a year ago. (Precisely a year ago yesterday, actually.) Much more recent AIs are much better with hands (though still not perfect).
The reason I’ve seen given is that hands are made up of lots of small moving parts with tons of different possible positions. That means sample photos of hands are inconsistent, and there are a lot of possibilities for messing up when rendering them.
If you were to make a model of just hands holding/using silverware, it would be fine. And, as noted, more recent AI models are much better at it than the nightmarish early results.
As an aside, hands are one of the hardest things to learn to get correct for most people learning life drawing as well.
And what the AIs screw up most often is the number of fingers. Beyond two or three, accurately counting things is a next-level task for neural nets. And for most things, if there were supposed to be five but the model instead put in six, we’d still consider the result fairly accurate. But counts of fingers are both high enough to need actual counting and have a definite, correct value.
AI image generators have a systematic problem with repeating elements, because they don’t try to count, they generate low-level bits of image first and then extrapolate to what fits next to them. If the thing you’re most likely to see next to a single element is another repeating element, you hit the bananana problem.
And as @Chronos said, fingers are numerous enough to hit this and few enough that the errors are obvious. If you want another example, ask an AI to draw a sailing ship.
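A toy way to see the bananana problem in action: a character-level model trained only on the word “banana”, predicting each next letter from just the previous two, has no memory of how many “na”s it has already emitted. This is only a loose analogy for what image generators do (they work on image patches, not letters, and the internals are very different), but sampling from it shows the same local-continuation failure, a minimal Python sketch:

```python
import random
from collections import defaultdict

def train(word):
    """Map each 2-character context to the letters that can follow it.
    "$" marks the end of the training word."""
    model = defaultdict(list)
    padded = word + "$"
    for i in range(len(padded) - 2):
        model[padded[i:i + 2]].append(padded[i + 2])
    return model

def generate(model, seed, rng):
    """Extend the seed one letter at a time, looking only at the last
    two letters -- the model never counts repetitions it has made."""
    out = seed
    while True:
        nxt = rng.choice(model[out[-2:]])
        if nxt == "$":
            return out
        out += nxt

rng = random.Random(0)
model = train("banana")
samples = sorted({generate(model, "ba", rng) for _ in range(50)}, key=len)
print(samples[:3])  # typically something like ['bana', 'banana', 'bananana']
```

After any “na”, the context looks identical whether it’s the first repeat or the fourth, so the model keeps appending “na” with some probability. Swap “letters next to letters” for “fingers next to fingers” and you have the same failure mode.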
Oh yeah, I didn’t notice the weird hands in some of the pics at first.
Not at all. When I took figure drawing classes in college on the way to getting a BFA, hands were the most difficult part of the body to draw. At least, they were for me.
Of course, as a human, my particular problems with getting hands right didn’t include getting the number of fingers wrong. Pretty much always 5 or fewer per hand.
In general, current gen Art AI is much better at hands. If the prompt is hand-specific (“Young man waving at camera”) it does even better since it’s ‘concentrating’ on it. If the prompt/render has nothing directly to do with hands, it’s more likely to flub it especially in an otherwise complicated render.
For a real challenge, ask for people “holding hands”. Running that prompt on Bing, Midjourney, and SDXL still gives results ranging from unconvincing to looking like two squids battling over an apple.