AI Generated Art Is No Different Than Photography

True. The context is slightly different, but I have lost count of the number of people who have told me, or argued with me, that I am apparently using voice cloning or AI text-to-speech in my video content. I don’t do that.

However, this isn’t a new phenomenon. It’s just the new version of ‘it’s photoshopped, I can tell from the pixels’. When you have a hammer, everything looks like a nail.

(I did consider cloning my own voice when I got COVID and couldn’t talk for a couple of weeks this winter - I have a vast amount of recordings from previous work that I could have used to train a clone, but I couldn’t find a service that was good enough and that I trusted to let me keep the rights to it).

…what are you talking about? I didn’t claim anyone hacked into my system. And we don’t typically throw people in jail for copyright infringement and I don’t think we should start doing that now.

You are acting like the “AI computer” is human. But it is not. It’s called “Artificial Intelligence.” But it really isn’t. It’s software.

And it didn’t “ask” for the photo the same way a human would. It doesn’t “see” the photo the same way a human sees it. The starting point here is not to anthropomorphize the process. There are multiple lawsuits that are happening right now all over the world that are addressing the legal arguments at play here. It isn’t as simple as “the AI looked at it and can therefore use it.”

LOL.

AI is never going to replace me. But even if it did, it just strikes me as odd that people would be cheering for an obviously inferior version of my work.

Of course you did, you said it was a heist and the AI got your picture without asking. But the AI (or the people running the AI) did ask, they queried the server that you use to host your images, and that server sent a copy of the image to the AI’s computer, because that’s why you uploaded your image to a server instead of a vault.

Fair enough. The humans running the AI software asked for your photo, then the humans put the copy of the image that your server gave to them through an analyzer they built, and then the humans stored the output of that analysis for future use. They’re not selling the copy you gave them, or creating another copy to give to other people.

They’re not doing anything illegal, or even objectively wrong, they’re analyzing something you willingly gave to them. It turns out they’re WAY better at analyzing things than anyone has been in the past, but that’s not illegal or wrong either.

That’s not to say that the result of their work will be unicorns and rainbows, but lots of action around copyright is sketchy, so that’s not anything new.

…nope.

You don’t have to take everything I say literally.

It isn’t the “getting” of the picture that is at issue here, and that isn’t the basis of most of the lawsuits.

It’s the usage.

It’s a very important distinction to make. Uploading your image to the internet doesn’t give everyone permission to use that image how they like. Copyright applies. The right to publicity applies. The Berne Convention applies. We are still a number of years away from seeing how this all plays out in the legal system. But it isn’t as simple as you make it out to be.

“Illegal” is something we will have to wait and see, both as a result of the multiple lawsuits, as well as any changes to legislation that may take place. And whether it is “even objectively wrong” is, of course, your opinion; there is nothing objective about your opinion at all.

And I didn’t willingly give it to them to be used in this way. In fact I don’t even know if I’ve given it to some of these AI models at all. The very first step here needs to be transparency. If they’ve used my work in a dataset then people should be allowed to know that, and should be allowed to easily have their work removed.

I have no doubt that AI is going to be really hard on production people. True artists will make art with whatever tools are available. But there are huge industries of workaday artists and designers - storyboard artists, illustrators for manuals, second-unit people getting B-roll and establishing shots, advertisement directors and crew, and other similar art-adjacent visual developers who are going to lose a lot of jobs.

But this has been going on for a long time. VFX have taken the jobs of many stuntmen. Quadcopters have devastated the movie aviation business (helicopter shots, etc). Digital movies wiped out the film production and distribution business. Cameras did huge damage to the commercial illustration industry. And the automated loom made the life’s work of skilled weavers irrelevant.

So we can acknowledge that any time an industry is disrupted there will be winners and losers. The artists may lose out on commercial work, but the businesses paying for it will be better off.

When I started my business, it was on a shoestring. I had no money for graphic designers, lawyers, accountants, etc. So I knuckled down and learned as much of it as I needed, and did my own 4-color layups of ads, photography, etc. I learned how to do my own incorporation and other legal work from self-help books.

But it wasn’t easy, and people with less education might not be able to do it at all. And it took time away from the stuff I was good at and which the business was for.

There are many such roadblocks to starting a small business, and if you hire all the people needed to do it for you, you need a lot of capital which freezes out the poor from business creation. To the extent that AIs can now do this work, it’s a great boon to poorer people who want to start a business and control their own destiny.

I’ve seen that. The supposed photos of the grandma with the huge crochet cats that made the rounds on social media last summer. For every “you know this is a fake AI image” comment there were 100 “oh wow that’s so amazing she’s so talented” comments. Same for those “amazing places” photos, like the home on the waterfall, or the eco resort hotel in the rain forest. People are surprisingly credulous.

People are constantly, constantly being fooled into thinking AI images are real. Even ones with (what should be) glaring errors. You don’t even have to try hard.

The big winners will be the irrational skeptics - forever. It’s fake, it’s fake, it’s fake, it’s fake, it’s fake. Moon landing, round earth, global warming, you name it.

This is a nightmare. We don’t need to sow more division. Yet we surely will. On the credibility front we’re headed back to the pre-Stone Age.

Ehh, I dunno about having to be insane to be acting randomly. There’s a lot of instances where I simply cannot tell you why I did what I did (if I’m even aware I did it), and an outside observer would probably be even more befuddled. Sometimes my subconscious gets control of the muscles and it goes where it will.

Seriously, I’ve shouted “PLUS, I DON’T GIVE A FUCK WHAT YOU THINK!” at a stranger when I thought I was just thinking it to myself. It was caught on a phone line I’d accidentally left open, and I was later questioned by the person on the other end of the line about whether I’d gotten into a fight. When I told a friend about that happening, he said “I think you do that pretty often, actually. I’ve seen you blurt out some pretty atrocious remarks that you don’t seem to be aware you’re voicing.”

And then there’s a whole other set of instances where I’m sleepwalking and what I’m doing makes absolutely no sense at all. The PG story about me sleepwalking is me walking around under the carport at 3 AM in my underwear, with the occasional squeaking of the storm door alerting my wife that something was amiss. I’ve done far stranger stuff while on my nocturnal journeys, and none of what I’ve heard about is distinguishable from randomness in my mind.

So, I’m open to the idea that I might be insane, or even neurologically atypical. Who knows? But there are times that I’ve done things that I wasn’t even aware I’d done and it seems pretty random in frequency and in action. Considering that, I’d prefer to think our behavior generally lies on a continuum between random and deterministic.

I have some thoughts about your post but a free will debate really doesn’t belong in this thread, start a new one if you want. I’m sorry I even tangentially touched on it.

Coming back to this, the concern isn’t so much that it will stagnate, it’s that it will wander away from “the real”.

Example: I went to see the Natural History Museum’s Wildlife Photographer of the Year exhibition when it came to town. Amazingly enough, these are really good photos.

But of course, it’s hard and expensive to get a high quality photo of a real live wild animal. And it’s cheap and easy to ask an AI generator to produce a highly convincing high quality fake. So as we move through the next generations of AI generators, if they are being trained on images scraped online and if those images are dominated by pretend photos of pretend animals doing pretend things (albeit as a marvellously complex distillation of real photos) then the risk is that in the future the images of bison and fieldmice and snow leopards etc. become less and less grounded in reality because of the process of replication, mutation and selection - without the anchor of real photos of real animals they will “drift” - imperceptibly at first - away from that. And at some point - when most people’s mental image of a snow leopard comes from nth generation AI fakes - that is going to be… bad? Like, at some point authenticity is no longer a relevant concept. Will this matter? Is it ok that the relationship between the photos labelled “snow leopard chasing a young ibex” and what actual snow leopards do has become rather distant so long as we like the fictitious photos?

The answer to this, obviously, is to continue to make it worth people’s while to invest in the kit and the training and time and sheer aggravation of sitting in a cave in the Hindu Kush for 5 months staring at a hillside until one day you get that perfect shot. But it doesn’t look like this is the way things are going.

This is also on me and I also regret it.

I mean, there’s another element of this debate which is: “we’re making it increasingly easy to produce highly convincing photos of things that never happened, is this good?”

Like any tool, it can be used for good or bad. It depends entirely on the user. Making and distributing fake campaign photos of Donald Trump posing with happy Black people? That’s bad.

Making a “never stop learning” photo of an elderly woman learning to be a tattoo artist by practicing on her cat? That’s good.

But the idea of photos and videos being able to easily and convincingly be faked has been used in science fiction for decades. So for me the only surprising thing about any of this is that it is coming now instead of in some nebulous future setting. Utterly expected otherwise.

Well, yeah. And some tools (hammers) we generally have free for anyone to use, some (knives) we put some restrictions on, some (agrochemicals) we regulate very strictly indeed and some (nuclear weapons) we restrict to sovereign governments only.

This tech is moving so fast that it seems to have pretty much escaped any regulation whatever. Again: is that good though?

Speaking personally, I will happily sacrifice a million whimsical gags about old women and cats in return for it being harder for wannabe dictators to spread disinformation, but that’s just me and I quite properly don’t get to make the rules. But, you know, I do kind of wonder if there should be any, or if maybe it’s just too difficult.

Myself, I’m not a big fan of governments policing ideas.

But you do think there should be laws against fraud, yes? I think it’s worth asking if letting an unregulated fraud-and-fun-granny/cat-photos machine loose on the world is entirely in line with our general “fraud is bad” policy stance.

And in any case, the response to a tool that makes it easy to produce convincing fake photos doesn’t have to be a governmental one. One can imagine various economic and societal responses to a major decline in the trustworthiness of what are apparently photos. What happens when we no longer trust product photos on Amazon? Do we have some sort of non-government consumer verification body or bodies, a la securities rating agencies? Is it just caveat emptor, and if so what happens when the emptors cease to empt so freely because they have so many caveats? Similarly, what if we no longer trust e.g. videos of police brutality? Or police body-cams? Or if insurance companies no longer trust dashcam videos?

There is going to be some sort of response to this incredibly powerful and widespread technology. I am as big a fan of inaction as a decision-making tool as the next person, but you know, maybe some intentionality wouldn’t be a bad thing.

And I think you are underestimating the difficulty of regulating AI. You mentioned nuclear weapons–building the first nuclear weapons was such a monumental task that only the most powerful nations in the world could do it, and even some of those failed. Now even impoverished postage-stamp states like North Korea can build them. Generative AI is similar–the first powerful models took massive amounts of compute time to create and thus cost massive amounts of money. But since then people have been finding ways to make them smaller and cheaper while still as good as or better than the older models. And lots of them are entirely free to download, customize, and run locally. You might be able to force US government regulation on Stability AI or Meta, but you are not going to be able to regulate someone running an off-line generative AI in their basement in Boston or their attic in Azerbaijan or their garage in Guadalajara. It simply isn’t going to work, any more than any “war on drugs” has ever worked. That toothpaste is not going to go back into that tube. You are debating about locking the barn door but the horses are already running around the next three counties.

Generative AI is a chaotic neutral tool and no regulations are going to be able to force it to be lawful good. It is a disruptive technology, and like disruptive technologies before it, it is gonna disrupt.

No, I recognise that it will be incredibly difficult, and will only get more so. Like I say, maybe it is too difficult. But the internet in general is pretty difficult to regulate; nevertheless there are regulations that encompass distribution of undesirable information online, and they do work to some extent.

But again, you’re focusing on regulation as opposed to other adaptations to the reality of a rational and severe decline in trust of any kind of imagery.

The dangers are very real. The granny/cat pictures aren’t exactly neutral because they tilt people’s perception towards fake things being real. That’s minor but still negative. Get politics involved and the ability to manipulate people skyrockets.

Also, in barely a year we’ve seen AI generated videos progress from nightmare fuel abominations to actually quite convincing. That’s where the real subterfuge is going to happen.

Those all look fake as shit to me, but it’s getting close and coming soon.