AI Generated Art Is No Different Than Photography

Yes, the rate of AI progress does look like a limited singularity in action.

I don’t mean to say that compensating artist labor is the ONLY reason that copyright laws have been changed, just that it’s historically been A reason for changing the laws.

No, it can’t. That’s not how it works. It absolutely is an engine for taking a big bunch of images and deriving new images from them by crunching through a big algorithm. It is wholly dependent on the images it was trained on, in a deterministic manner.

No. Artists and craftspeople may consciously decide they want to imitate or reproduce or pastiche or build on existing works, but the process by which they do this is so qualitatively different from the algorithmic determinism of AI that it doesn’t bear comparison. Human artists have a final image in mind and work towards it with constant self-correction: AIs cannot visualise their final output before it exists and produce only the finished whole, in a rigorously determined manner.

Monetization is going to be an issue. Google sucks because they have no other way to make money. People won’t pay for it, so ads it is.

Artists and businesses have incentives to be on the Internet. It’s not clear how being part of an AI training set provides any value to someone not being compensated for their image or work. Typically at that point copyright is invoked, since people being able to obtain the fruits of their labor has been an issue for thousands of years. The AI is just a machine that is copying its input data.

I’m sure it’s fun and all to run a mechanical field tiller or threshing machine, just maybe do it on land you already own or have a right to work. AI is going to wind up being heavily shackled as to what it is allowed to produce. But but but we have tech. We had it with the printing press and copy machines as well.

It isn’t. If it were just that, this would be a quick debate.

It’s simplistic to say that an AI only produces a variation of what it’s already seen. Or rather, that it does this any more than a human does. After all, we have no problem with a human studying other artists, and art critics pride themselves on being able to spot the influences of other artists in new works. How is AI any different?

There is a difference. The difference is scale and democratization. AI brings sophisticated art to the masses, and that’s the real problem.

An analogy with photography: when photography was first introduced in the mid-1800s, artists didn’t much care. It was the domain of only a very few well-heeled and technically sophisticated people, and the results were marginal. In fact, artists often used lenses and even photographs to improve their art, just as artists later used Photoshop and no one really cared.

But when Kodak brought out the first mass-market camera, artists panicked. Why, now just anyone could push a button and take a photo that rivaled the work of the best photorealistic painters! And so it is with AI.

Photography absolutely devastated the illustration industry. Catalogs, magazines, advertisements: all of it had been drawn by hand by illustrators. Before the late 1800s, portrait painting was a career, because anyone who wanted a portrait had to commission an artist to paint them. This made portraits expensive and rare, and a painted family portrait was a status symbol. But now ANY yokel could have a portrait. It wasn’t right.

The proper response from artists was to accept that some aspects of the art market were gone forever, and to figure out what art should look like in a world with cameras. The result was impressionism, abstract art, modernism, etc. Artists realized that realism was cheap and universal, and to stand out they had to come up with something new.

This is already happening with AI. Instead of bemoaning the fact that it does some things as well as or better than humans, artists should look at AI as a tool and start thinking about how they can build on it to make entirely new art.

For example, there’s Brian Eno’s generative movie. He took 500 hours of footage, and his team built a custom generative AI engine that cuts that footage into a different movie every time it’s shown, so no two audiences ever see the same thing.
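Just to make the aleatoric idea concrete, here’s a toy sketch (in Python) of what “a different cut every screening” could look like. It’s purely hypothetical, not a description of the engine actually used for Eno’s film; the clip names and runtimes are made up.

```python
# Toy sketch of aleatoric sequencing: assemble a different cut each screening.
# Purely hypothetical; not how Eno's actual generative engine works.
import random

def generate_cut(clips, target_minutes=90, seed=None):
    """Pick clips at random until the target runtime is reached,
    never repeating a clip within one screening."""
    rng = random.Random(seed)   # a fresh seed per screening => a fresh film
    pool = list(clips)
    rng.shuffle(pool)
    cut, total = [], 0.0
    for name, minutes in pool:
        if total + minutes > target_minutes:
            continue
        cut.append(name)
        total += minutes
    return cut, total

# 500 hours of footage would mean thousands of clips; three stand in here.
footage = [("river_walk", 4.5), ("studio_talk", 7.0), ("archive_1973", 3.2)]
print(generate_cut(footage, target_minutes=12))
```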

But most excitingly, very soon anyone with a vision will be able to make a movie without needing a studio and $100 million. Art is being democratized again.

…I think that most artists think that all creative works need to be celebrated and protected against AI intrusion.

Stock photography has been on life support for over a decade, and is now effectively gone.

But how exactly do you think “the type of art that elevates the soul of humanity” gets funded? Not everyone is a Warhol or Pollock. People by and large don’t pay directly for art. Commercial work pays the bills. Keeps the skills on point. Gets your name out there in the industry. And funds the type of art that elevates the soul of humanity.

The reason here though is that AI uses artists’ work without their permission and/or compensation. If you want to use my work to train your algorithm and make billions of dollars, the very least you can do is ask me and pay me.

Nobody is objecting to mere “technological change,” because if you remove all the collective creative input that was used without permission or compensation from this technology…it no longer works. The algorithm isn’t the thing that powers AI. It’s paintings created by people. Words written by humans. It would be completely useless without us.

The problem with this is that commercial customers don’t want to fund the type of art that elevates the soul of humanity, if it isn’t going to be part of their product.

They aren’t your patron, they’re your customer, and if they’d rather pay 2 cents on the dollar for AI-generated photos instead of the “real thing”, take a hint: your customers don’t value the thing you’re selling. That isn’t their fault, and it isn’t your fault. It’s the product.

…this isn’t the problem.

You’ve ignored my point.

They are only paying 2 cents on the dollar for AI-generated photos because the AI photo generator is powered by work that was used without permission or compensation.

It’s a heist. They took artists’ work without asking or paying and created this thing, and now we are dealing with the consequences. You don’t get to have this discussion without dealing with this point. AI doesn’t work without the content it was fed. Whether the creator considered it “art” or considered it “commercial work” doesn’t matter.

If you were a professional photographer, you probably would. You have to find a subject, style it, light it, and understand the correct settings to capture the shot properly. Not least, you have to purchase the equipment and pay for its use.

Contrast that with AI, where you just write prompts that take someone else’s work and say “more of this, less of that.”

If you couldn’t do this yourself, given an expensive camera, then you can’t say AI is just like photography.

Flash photography chemically degrades pigments and damages the art if enough people do it. That’s why they ask you not to use flash. They ban cameras of all types because most dummies can’t be trusted to turn off flash.

If there was an AI model that was only trained on art and photos that were either provided by the AI owners, in the public domain, or (for the majority of images in the dataset) were licensed with the knowledge and consent of the photographer or artist - say Bill Gates decided to spend 100 billion dollars hiring photographers to go out and take pictures or to license existing photos - would you still have concerns about the output of that AI?

…I don’t like to indulge in impossible hypotheticals. This isn’t a game to me. It’s real life, it’s complicated, there are no easy answers. It’s like the ultimate lesson of the TV show “The Good Place”: it’s impossible to lead the perfect life.

I’ve had to adopt AI into my workflow in places. It’s either that, or I go out of business. I acknowledge my hypocrisy here, and I’ll live with that. As I said: they took our work without asking or payment and created this thing, and now we are dealing with the consequences. And one of those consequences is that if we don’t use this thing, many creators can no longer be competitive.

So would I still have concerns if a billionaire was still in control of the output of that AI? Sure. 100%. Because you don’t get to be a billionaire by caring about people like me. But that’s a subject for an entirely different thread.

Not majority. ALL.

If you can’t copyright what comes out of the AI, then what good is it? You ask the AI to do whatever, and everyone else either steals what you did outright or does the same thing themselves. People will want to pay for something proprietary, where they have some control over how else it is used. Trademark- and copyright-violating AI destroys the market on both ends.

The ordinary people who just want an image for their own use don’t give a rat’s ass about having it copyrighted. You are apparently seeing AI as a money-saving tool for Big Media, but it is a creativity tool for the other billions of people, too. You want an image for your D&D character’s profile sheet? (A real-world use case.) There is no reason whatsoever to care if it can be copyrighted.

Read the post again. You misunderstood what “majority” refers to.

Suppose that, instead of manually cropping an image, I hit the AI autocrop button. There are no copyright issues there, are there? Anyone these days can generate AI clip art (“a cute puppy”). Near the other extreme, you could have the entire image be substantially mechanically generated, and therefore not copyrightable. At some intermediate point it will be a judgement call how much copyrightable work you did.

Re traditional photography: the photographer has to set up the camera, lenses, plates or film, lighting, and composition, as well as develop the photo, so there is a lot of technique and art involved. Also, there are all sorts of weird and unconventional things you can do. It does make me think, though, that some interesting results may be obtained with AI by rolling up one’s sleeves and hacking the program from top to bottom, to say nothing of fiddling with the knobs already provided. However, obviously(?) most casual users will not have access to the requisite computing platforms or open-source data and models.
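For anyone curious what “fiddling with the knobs” can look like in practice, here’s a minimal sketch using the open-source diffusers library. The checkpoint ID, prompt, and parameter values are just illustrative placeholders, not recommendations; the point is only that the seed, guidance scale, and step count are exactly the kind of dials a hands-on user can turn to steer the output.

```python
# Minimal sketch of "fiddling with the knobs" on an open-source image model.
# Assumes the `diffusers` and `torch` packages are installed and a GPU is
# available; the checkpoint ID below is just an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # example checkpoint; swap in any you have
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a riverside at dusk, oil painting"

# The same prompt with different knobs gives noticeably different images:
# the seed fixes the starting noise, guidance_scale trades prompt adherence
# for variety, and num_inference_steps trades speed for detail.
for seed, guidance in [(1, 4.0), (1, 12.0), (2, 7.5)]:
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(
        prompt,
        guidance_scale=guidance,
        num_inference_steps=30,
        generator=generator,
    ).images[0]
    image.save(f"river_seed{seed}_cfg{guidance}.png")
```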

“Humans consciously direct their attention towards and reflect upon existing works with a chosen intention to consciously select elements of style and technique from these works to create new works with specific goals, which works they will create teleologically, working iteratively towards a desired end-goal, constantly reflecting on their progress towards that goal and adapting their work as they go”

is not remotely the same as

“AI having been “trained” on a preselected data set follows an unalterable process to deterministically produce an image based on the weights of a pre-determined algorithm.”

and claiming they are is beyond simplistic.

The difference between AI as a new tech and photography as a new tech is this: photography’s ability to create images was not based on training cameras on existing artists works. The concern isn’t “it’ll be cheap to create fake photos so no one will value photographers’ skills”; it’s “the ability to create fake photos only exists because the tech companies have without license or remuneration used photographers’ skillfully created works to train the AI to make algorithmically derivative art based on them”. If AI is such a massive value-add for society, why should the profits of that accrue only to the tech companies that trained the algorithms, and not to the photographers and artists who created the images without which the AI would be non-functional?

The only answer to that is: because the tech companies can get away with it. It’s not about “bemoaning that it does something as well or better than humans”, it’s about acknowledging that it only does so by leeching off those humans in the first place.

But OK, fine. Winners and losers. They can always invent a totally new art form. But again, what are the next generations of AI image generators being trained on? How do we avoid ending up with just a summary of a summary of a summary?

This is a really interesting interview. I have some quibbles about the value of a film that is unique to every individual viewer - what is the point of culture if it is not shared? But what really struck me was this, because I like belabouring points:

Well, I grew up in the countryside, so I didn’t move to London until I was 21. And prior to that I lived in small places, basically. And so most of my dramatic early experiences were to do with nature, actually, or to do with art. The thing that excited me most when I was young was either going for walks by the river or listening to music or looking at paintings. Those were the touchstones for me. And I would visit the same places again and again. There’s a place called Kyson Point on the River Deben that I used to go to often. The experience of that kind of visit is that you go to the same place and of course it’s always different. It’s different every time you’re there.

And so if you keep visiting a place you become very alert to the small differences. So that became part of my idea of what having a good time consisted of, the right mixture of expectation and surprise. You don’t want total chaos every time, but you don’t want total familiarity either. So that sort of became a theme for me in the work that I subsequently did. In fact, I remember writing a long time ago when I was 19 or so, I want art to be like sitting by a river.

This is what human creativity is like. Reflection. Emotional response. A search for something that only slowly emerges and continually evolves. Does anyone want to argue that what Brian Eno describes above is basically the same as training an AI on a dataset to produce a weighted algorithm?

If you put anything on the Internet, or somebody did it in your stead, then you are part of the training set :slight_smile: They can get away with it whether or not it is legitimate fair use or patently illegal.

Eno himself may have trained an AI on a dataset to do what he wanted in order to produce that aleatoric art.