AI Generated Art Is No Different Than Photography

No.

It’s the uncompensated use of another person’s labor that is unfair–what you’re referring to as “steals”.

An artist makes a work of art. It’s understood that you can’t make an exact copy of it. You can be inspired by it to create your own original work, which you can then sell–but it’s understood that by doing so, you’re putting your own labor into the new work, labor that is roughly comparable to the labor the original artist put into their work.

It is that labor that needs to be protected. We value that original work, so we create copyright law (and other law) that ensures that original works will remain valuable.

Copyright law wasn’t nearly so important prior to the printing press. Making a new copy of a book was so laborious that, from my cursory reading of copyright history, there was precious little law surrounding the topic pre-printing-press. But when mass-copying of a text became easy to do, copyright law kicked into high gear.

Pre-AI, being inspired by a work to create something new was laborious, and wasn’t protected. AI makes that “inspiration” process trivially easy: now a person can use someone else’s work in an uncompensated way for inspiration (as before), and without putting any significant labor into the process create a new work.

When the printing press came out, people decided they wanted authors to continue creating new works: copies of existing ones weren’t going to suffice. Now, we need to decide if we want human artists to continue creating new works. If we do, we need to figure out how to protect their right to compensation for the work they create.

How about the gun? Nerve gas? Nuclear power?

Of those, the gun is the only one that perhaps could be argued to have changed society to the same extent as the steam engine (or as AI could do if the more optimistic/pessimistic predictions happen to be correct).

And I’d argue that the invention of firearms went a very long way towards dismantling the prior system of elitist feudalism. So it hasn’t been all bad.

AI art is the same as human art except that it’s been produced by a machine. Just as human artists may study existing works of art and use techniques and concepts from other works in their own, so might AI. Or it might not, just like human artists. The possibility of AI producing derivative works for sale is no greater than for human artists, except for the trending novelty of the AI-produced results. Human artists could, and have, produced such works in the past and found a limited market for them, and sometimes passed them off fraudulently as original works or as unknown works by a known artist. That will happen with AI also.

Commercial artists will probably lose out. This has been happening over time just as personal computers enabled graphic production at lower cost than manual methods. Photography of people is already changing as human facial features can be manipulated by computers to produce endless poses. Artificial people will replace models and actors too. In these cases AI is simply making the job easier.

Speaking as a hysterical chicken-licken panicking delusionally about the fall of civilisation, I suppose my doubtless overblown and ludicrous concerns are as follows:

AI image generators are essentially summarisers of information. In the case strictly of photography, the input is an unimaginably vast number of images conceived of (with varying degrees of deliberation) and captured (with varying degrees of skill) by human photographers which all have this much in common: they are real images of real things.

What comes out is not an image of a real thing. It is a sort of distillation, or aggregation, of various elements from various photos to produce a construct. It may be a very refined distillation indeed. It may be indistinguishable from a real photo of a real scene. It may, for a given use-case, pass the duck test just as well as a real photo. Very well. That’s valuable. It’s a real achievement and insofar as it democratises and cheapens access to culturally useful images, this is probably a good thing. Then what?

That image now becomes part of our culture. And so does the next one, and the next million, and the next trillion. (Anyone care to offer an upper estimate on how many AI images will be generated by, say, 2030? Or the ratio of AI images to real ones?). And now a significant part of our culture is based on artificial images that only exist because they are distillations of human-created real images.

Then we train the next generation of image generators. And they go to work summarising and distilling the information in their training set. Which is now a distillation of a distillation. And so on. This becomes progressively less valuable. The value of the AI image generator lies not just in how good it is at distilling and repurposing the information in its training set, but also in the breadth and depth of its training set to begin with. The more our culture becomes dominated by AI images, the less and less value there will be in a technology that continually distills and reprocesses those images.

What we need to make AI image generation valuable is a steady source of worthwhile cultural knowledge for it to draw on. And that only comes if actual photographers (and artists more generally) are properly incentivised to keep on creating new works. Otherwise you end up in the state where the summary is all there is and the source has totally dried up.

(Incidentally, I am cribbing much of this argument from this excellent article on the political economy of AI, which I really do commend to everyone.)
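The “distillation of a distillation” worry above can be illustrated with a toy simulation (all names and numbers here are invented purely for illustration, not taken from any real model): pretend a “model” summarises its data as a mean and a spread, and that when it generates it slightly under-represents rare outliers. Train each generation only on the previous generation’s output and the diversity steadily drains away:

```python
import random
import statistics

def fit(samples):
    # "Train" a model: summarise the data as a mean and a spread.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    # "Generate" from the model, dropping rare tail samples (beyond
    # two sigma), since generators tend to favour the typical.
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:
            out.append(x)
    return out

random.seed(42)
# Diverse human-made "originals".
originals = [random.gauss(0, 1) for _ in range(2000)]
mu, sigma = fit(originals)
start = sigma

for gen in range(10):
    # Each new model is trained only on the previous model's outputs.
    mu, sigma = fit(generate(mu, sigma, 2000))

print(f"spread: generation 0 = {start:.2f}, generation 10 = {sigma:.2f}")
```

The spread shrinks every round: a crude analogue of a culture of AI images converging on the bland middle once the supply of fresh originals dries up.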

Yeah, this. I want to see photos of things that actually exist. Not some derivation. That’s not a photo. And I don’t want to have to wade through infinite fakes to see real things.

So there’s going to be a market for verification of actual photos, actual information. I guess that will create jobs.

Also, just because tech can do something doesn’t mean it’s all going to be legal. I’m reasonably sure that simply duplicating someone’s image or voice and reusing it is going to be illegal. Suppose we can build androids that resemble humans well enough that you can’t easily tell the difference. I’m guessing it’s going to be illegal to build androids that actually resemble real people without their consent. No, actresses aren’t going to be replaced, and you can’t get an android of your ex-wife.

But further than that, suppose a company builds an android that does not resemble any specific person. This android is very popular and “works” for (is owned by) the company. It’s such a popular android, why doesn’t every other company simply copy this very android? After all, “the tech lets us do it.” All of that stuff is going to fall under new laws that control exactly what you can do with the new tech. Same as you not being able to sell your own copies of the Harry Potter novels, but but but tech will let me do it. More jobs for lawyers.

Similarly, even after copyright law was in high gear, it was held for a long time that you couldn’t copyright a simple list. A bare collection of available facts was something anyone could put together, and so when, for example, the publishers of those long lists we called telephone directories tried to copyright their works, they were told no dice, because there was no value added and also because, come on, who would copy out the phone book, be serious.

Then computers were invented and evolved incredibly quickly. And it turned out they could copy lists really really easily, even telephone directories. And suddenly, it turned out, while you still couldn’t claim copyright in a list, there was such a thing as database right, which protected the interests of the people who constructed large databases. Because if such a right didn’t exist, people wouldn’t make useful databases (which we still want them to do, even if we have no need for phone directories).

So yeah: when technology makes copying things vastly easier, we tend to do something to protect the creation of those things because we recognise that it’s the incentive to create the originals that brings us value.

Again, the crucial difference is the admixture of labor. When a new technology allows derivative works to be made with minimal labor, we’ve historically changed the laws to protect compensation for the original artists.

That’s an excellent example!

This all makes sense. It also implies that the people in control of the big AI models would have a strong incentive to encourage people to keep producing new art or photos and to sell them the rights to train their AI further on these new creations?

I think there are two problems with this.

The first is reification. AI is not yet self-aware, nor does it have a will. As such it does not produce works; humans produce works using AI. That’s not a quibble - when we talk about “AI replacing humans” what we really mean is “people who can write image prompts replacing people who can actually make images”.

Secondly, all AI images are by definition derivative, and so the chances are 100%. That’s how AI works - it takes a bunch of images, does a bunch of mathematical processes, and derives new images from the results. AI images are absolutely predicated on processing and digesting real images. They are derivative by design.
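To make “derivative by design” concrete, here’s a cartoonishly simplified sketch (every number and name is invented for illustration; real generators do vastly more elaborate mathematics): treat each “image” as a list of pixel values, and let the “model” output a weighted blend of its training inputs. The output is new, but it is entirely a function of the originals:

```python
# Toy "model": every output image is a weighted blend of training images.
training_images = [
    [0.0, 0.2, 0.9],  # pretend each list is one image's pixel values
    [0.1, 0.5, 0.4],
    [0.8, 0.3, 0.2],
]
weights = [0.5, 0.3, 0.2]  # what the "model" learned to emphasise

new_image = [
    sum(w * img[i] for w, img in zip(weights, training_images))
    for i in range(len(training_images[0]))
]
print(new_image)  # a "new" image, but every pixel traces back to the originals
```

No pixel in `new_image` appears in any training image, yet nothing in it exists independently of them - which is the sense in which the output is derivative however novel it looks.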

You’d think so, wouldn’t you?

But recent developments with information-summarising technologies suggest that other incentives are in play. Again, to paraphrase the linked article (which is too in-depth to quote pithily), have you tried using Google lately? It was Good. There was an ongoing incentive to keep it Good. It is now Bad. It is Bad because instead of being a map to the territory (here is where you can find the information you want) it has tried to replace the territory (no need for links, here’s the information from advertisers that has a vague connection with what I think you want).

Enshittification of the internet is a real thing, and in large part because of this desire to get between you and the thing you want and provide you with a summary of the thing, plus some adverts. Here’s the non-pithy quote, so I’m not guilty of doing the thing I’m suggesting is a problem:

Search engines began as maps but have now become monsters. They devour the territories that they are supposed to represent, relentlessly guiding the user toward the place where the mapmaker can maximize profits, rather than where the user really wants to go. This is one of the major drivers of what Cory Doctorow calls “enshittification.” And LLMs are in some ways a much more powerful (though as yet less reliable) generator of summarizations than are search engines. They take a huge corpus of human generated cultural information, summarize it as weighted vectors, and spit out summaries and remixes of it.

The reason why many writers and artists are upset with LLMs is not that different in kind from the unhappiness, say, that news organizations had with Google News, or that restaurants have with the Google search/Doordash Storefront chimera. LLMs can be useful. If you, as a punter, are faced by 50,000 words of text that you have to absorb, and an LLM can reduce it down (with reasonable though not perfect reliability) to 500 words, focused on whatever specific aspect of the text you are interested in, it will save you a lot of time and work. But do you really want to buy the 50,000 word book, if you can get the summary on the Internets for free or for cheap? And if you don’t, what happens to books like that in the future?

Like search engines, the summarizations that LLMs generate threaten to devour the territories they have been trained on (and both OpenAI and Google expect that they will devour traditional search too). They are increasingly substitutable for the texts and pictures that are represented. The output may be biased; it may not be able to represent some things that would come easily to a human writer, artist or photographer. But it will almost certainly be much, much cheaper. And over time, will their owners resist the temptation of tweaking them with reinforcement learning, so that their outputs skew systematically towards providing results that help promote corporate interests? The recent history of search would suggest a fairly emphatic answer. They will not.

I don’t know if the skew of the summarised info towards corporate interests will be the problem with image generation, but the straightforward bottom-line “it’s cheaper if we do this” is a big enough problem. It’s not like corporations have shown a particular gift for long-term thinking in general, so why should this be any different? As @Mangetout said, there’s always an incentive to race to the bottom.

I agree.
In my opinion, these photographs are high art: 20 of the Most Famous Photographs in History - Learn the Backstory (digitalphotomentor.com)

That is certainly true, but means little. The AI arrives at a new and unique image. Call it what you will. But there is no doubt that humans who don’t have the skills to produce images can have a computer do it for them. Self-awareness isn’t really relevant; human artists copying existing images to produce hotel-room art need no more self-awareness than an AI as you describe it.

So is almost all human produced art. And AI can produce abstract art that isn’t derivative.

Any laws that exist would apply to AI already.

Photographs of paintings violate copyright laws because they are copies, not because they are inexpensive. What laws have changed to protect artists specifically because minimal labor was used to produce images, and not because the images were derivative works to begin with?

Again, last time this came up I found a variety of legal citations of copyright cases decided based on compensating artists for the fruit of their labors. I’m not sure how to pull that back up.

I understand that part of it. But did any of those laws have anything to do with using minimal labor to produce derivative images? The original artists should be compensated for any such usage even if it requires maximal labor.

Not explicitly, but necessarily. As I said, it’s when the artist’s labor may be used by others for profit without the admixture of significant labor that the laws need to change.

I think the tech is going to result in more expansive laws about what can and can’t be done.

The “new Beatles songs created by AI” are already covered. Can’t do them, any more than you could get humans together to try and sound like the Beatles and call them the Beatles.

However, if you “train” the AI on Paul McCartney’s images and voice so you can create videos of a convincing duplicate that sounds and acts like Paul McCartney, AND you don’t use the name Paul McCartney but Joe Smith, are you going to be able to do this? I’ll wager that you are NOT. Because it’s too easy to make a copy. In the past you needed a live person to do this, and whatever demand there was for someone who sorta looked and sounded like Paul McCartney under another name, the copying wasn’t as good. Now it might be, but there is NO WAY you are going to be able to create these videos in any actual marketplace, using the name Paul McCartney or Joe Smith or anything else. Because the only way the AI knows how to do it is to use Paul McCartney as original source material; it’s still making a copy no matter what you say. Plus everyone else has the same tools as you anyway, so there are a million knockoff versions of “new Joe Smith (Paul McCartney wink wink) songs” and there is zero marketplace value to any of them.

McCartney will be permitted to use his own image, everyone will have that right, and if they want to run it through AI and create product that way they can. But the copy protection is going to expand where it needs to.

I don’t think this is exactly true. Big dollar corporations who owned a lot of IP lobbied for stronger protections on the IP they were using to generate profits. Even then, they really only protected direct copies, derivative works continued to be fair use, probably because they were always more likely to be the thief than the victim.

It isn’t about the amount of labor or compensating the artist, it’s about protecting their revenue stream. I expect the corporations to be pro-AI, which puts the artists in a bind.