I'm missing something about AI training

I’m not trying to compare it to other things and say “They both use clever maths, prove me wrong!”. That’s a silly argument of your own making. I’ve already pointed out several times what makes it non-derivative as far as the law is concerned: If you can’t point to what from A is in B and where, it’s not derivative. Saying “But prove me wrong” a bunch of times doesn’t change that. You can certainly create derivative works with it (Parody Jaws poster, prompting for the Mona Lisa, etc) but it’s not innately “derivative” in a legal sense just because it came from AI.

If you want to say it’s derivative in your own mind, separate from the legal argument you were originally making, sure who cares. Tell yourself that it’s derivative.

All art is derivative.

For me, art is least derivative when a new medium opens up. The classic examples are how photography opened up the visual arts and synthesizers opened up live music. AI art is doing the same thing now as it opens up the production of art to those who have ideas but not execution skills.

All art is based on other art, but not all art is derivative, legally speaking. “Derivative work” has a precise legal meaning: it means the work is so closely based on another piece of art that the original artist has some ownership over it. E.g., me writing a song that sounds a bit like the Beatles because I like the Beatles is not a derivative work. Me making a techno remix of Penny Lane is.

Right. And, if you were the rights holder, the onus is on you to prove to a judge how and where your music was used in the new track. Talking about clever maths and demanding that people explain how it’s different than your ever-escalating example of the Mixtronic 9000 doesn’t cut it. Nor does “Well, it’s in there somewhere”. Point to exactly where the copying happened or you don’t have a case.

Maybe I’m a Philistine, but that’s not why I go to a restaurant, fine dining or otherwise. I go to eat some food, maybe have a couple drinks, and experience a pleasant ambiance with people I care about. All of which I’m sure an AI could help provide.

But, let’s do it your way. Could AI create a fine dining experience as you describe it, where the goal is to communicate some kind of message?

I decided to prompt ChatGPT to do exactly that. (I had a guess as to the topic it would choose, environmentalism/sustainability, but I didn’t prompt it that way. It just feels natural for a meal communicating a message to focus on the impact we have on the planet by eating things.)

I won’t copy-paste the whole thing, but it came up with a very artsy-fartsy meal where the first course is very “Paleo”, the second uses basic agricultural products, the third is super processed, the fourth is almost entirely artificial, and the fifth is Paleo again, served on a broken ceramic plate.

An AI cannot be petty, but it can write from the perspective of somebody who is feeling pettiness.

This is how bot farms work. The FBI article on the topic revealed that there are programs that create things called “souls”. A soul is a fictional persona with tons and tons of information attached to it: where it is from, personal history, race, age, etc. The software keeps track of these souls and feeds them into a large language model, which then writes posts from the perspective of all of these different people, posting to social media accounts with fake pictures, location information, etc., that line up with each soul.

The same large language model will quite happily write totally different arguments on the same issue if it is being told to write from two different perspectives.
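To make the mechanism concrete, here’s a minimal sketch of how a “soul” might be represented and fed to a language model. Everything here (the field names, the build_prompt helper, the commented-out llm.generate call) is a hypothetical illustration, not the actual schema or API of the software the FBI described:

```python
from dataclasses import dataclass

@dataclass
class Soul:
    """A fictional persona used to condition a language model's output."""
    name: str
    hometown: str
    age: int
    occupation: str
    backstory: str
    political_lean: str

def build_prompt(soul: Soul, topic: str) -> str:
    """Ask the model to post in-character as this persona."""
    return (
        f"You are {soul.name}, a {soul.age}-year-old {soul.occupation} "
        f"from {soul.hometown}. Backstory: {soul.backstory}. "
        f"You lean {soul.political_lean}. "
        f"Write a short social media post giving YOUR opinion on: {topic}"
    )

# The same model, two different souls, two opposed posts on the same issue:
alice = Soul("Alice", "Tulsa", 34, "nurse", "grew up on a farm", "left")
bob = Soul("Bob", "Reno", 52, "trucker", "ex-military", "right")
for soul in (alice, bob):
    prompt = build_prompt(soul, "the new fuel tax")
    # response = llm.generate(prompt)  # hypothetical LLM call
```

The point of the sketch: the model itself is unchanged between the two personas. Only the conditioning text differs, which is why the same model will happily argue both sides of the same issue.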

A far more sophisticated and advanced version could be used to generate an entire media ecosystem of fictional celebrities, track the relationships between them, and have them all carry out an elaborate performance of living through celebrity life. Within such an elaborate complex of fictional artists, the system could very well create a diss track over petty disagreements that were procedurally generated between two of the fictional characters.

Obviously a large language model cannot feel pettiness or jealousy or any other emotion. But, if sufficiently well trained on human writings that express the aforementioned emotions, an AI could certainly deliver a facsimile of a person who is experiencing those emotions.

When I was in college, taking a Philosophy 101 class and learning about the concept of a philosophical zombie for the first time, the idea seemed ridiculous. Sure, it was a neat thought experiment, but I couldn’t conceive of an entity that responds in the same way that a human would without the underlying emotions and thoughts that guide humans to respond in the way that they do. Large language models prove that the philosophical zombie concept is not in fact useless. A sufficiently complex large language model could communicate in such a way that even an expert in linguistics or AI would struggle to determine that they are not speaking with a real person. Such an AI may be capable of arriving at conclusions that appear to be reasoned. It may be capable of solving mathematical problems correctly. And yet it possesses no internal experience nor underlying understanding of the subject matter. That’s incredible, and it challenges many of the concepts that we thought were the domain of humans (or at least, sentient beings) alone.

No, I didn’t. The original tracks aren’t stored anywhere in the AI’s programming. The only things that are stored are the rules that the AI uses to understand and generate images. Those rules were built by looking at pictures, but so was the understanding of any human artist.

The AI could not start with an existing image if it wanted to. Outside of training, it literally cannot access any of the training images.

This is incorrect, and shows you still have an incomplete understanding of how the AI works.

The AI looks at tons and tons of 1s and 0s (and other data, like metadata) to figure out patterns that correspond to “cat”, or “dog”, or “eating”, or “beach”, or “sunset”. It does not store any of the original data.

If you then tell it to generate a picture of a cat eating a dog on the beach at sunset, it starts with random noise, not an image of any of those things (because after training it cannot access any images at all). It then modifies the random noise until it scores highly against the model’s internal understanding of what “a cat eating a dog on the beach at sunset” actually is. As it happens, the model’s internal understanding matches a human’s expectations for the same phrase, even though the process it uses to arrive there is very different from a human illustrator’s.
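For anyone who wants the mechanics spelled out, here’s a heavily simplified sketch of that noise-to-image loop. The model.predict_noise call is a hypothetical stand-in for a trained diffusion network; real systems are far more elaborate, but the shape of the process is as described above:

```python
import numpy as np

def generate(model, prompt: str, steps: int = 50, size=(64, 64, 3)):
    """Toy denoising loop: start from pure noise and iteratively nudge it
    toward the model's learned concept of the prompt."""
    rng = np.random.default_rng()
    image = rng.normal(size=size)  # pure random noise; no training image loaded
    for t in reversed(range(steps)):
        # The model estimates what in the current image counts as "noise"
        # relative to its learned concept of the prompt
        # (predict_noise is a hypothetical stand-in for a trained network)...
        predicted_noise = model.predict_noise(image, prompt, t)
        # ...and a fraction of that estimate is subtracted out. Only the
        # learned weights are consulted; no stored picture is ever referenced.
        image = image - predicted_noise / steps
    return image
```

Note what the loop never does: open, copy, or blend any stored picture. All it has to work with are the weights.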

I think we’re keying in on a big point. “Closely based” is not the same as analyzing 10,000 blues songs and using that data to write a new song that listeners would recognize as “a blues song”.

That’s far more like learning the genre than copying any particular song.

If you want to write an original blues song, you need to know what blues songs are, whether you’re a musician or a human designing an AI. Developing the background information of “what is a blues song” doesn’t make your song derivative, it makes it based on other art, like all art.

I was going to comment similarly. I had no idea what “fine dining” was other than a restaurant with expensive food. It’s not something I’d ever seek out. But that’s a hijack, maybe worthy of another thread.

That’s fine, nobody is asking you to. As I’ve stated from the very start of the thread, AI can produce content and if you’re looking for content, then nobody is disputing AI can replace humans.

Yes, AI can produce simulacra of things, but that doesn’t matter because the simulacrum is not the thing. Shooting a person in a video game doesn’t actually kill a person. Having an AI bot tell you they love you doesn’t actually give you love. Having a mega mansion in a VR game does not actually affect your material circumstances in the real world.

My point is that you cannot analyze the molecules put on a plate to determine whether something is fine dining or not, in the same way you can’t analyze the bitstream of Not Like Us and determine whether it’s Grammy-worthy or not. What makes it artistically significant is how it sits in context with other art, and that context involves real, embodied, flesh-and-blood humans as a necessary part of the art itself.

People keep on trying to find ways to make AI do that, but all it can do is produce a simulacrum.

Put another way, AI can never produce art, not because of any technological deficiencies in the AI, but because of our limitations as human beings. And AI can never replicate the deficiencies of human beings because it’s intrinsically capable of things we’re not (cheap and perfect copying, immortality, perfect communication). It’s not about Kendrick being petty, it’s that he can’t stop being petty, even if he wanted to. An AI can pretend to be petty, but if you ask it to be something else, it’ll be that thing instead. It can’t be limited the way humans can.

The invention of AGI will not result in AI creating art; it will result in all art ceasing to exist. Either it will be a single AGI, in which case, why does it need art? Or it will be multiple AIs just passing raw weights around to each other, without needing to resort to metaphor and symbolism to communicate ideas deeper than we can via literalism.

Are fictional characters capable of being petty?

If Kendrick and Drake were two characters in a George RR Martin novel, would the song one of them wrote in universe be art, even though it wasn’t actually written by a petty guy, but by a guy writing what he imagined a petty character would write?

If a thousand years later we forgot that this song was written by fictional characters and just attributed it directly to Martin - the way we just talk about Plato’s Atlantis story, disregarding that the story was originally told by a fictional character within Plato’s other story - would that change anything?

If a man who is not really petty writes a song from the perspective of a petty person, and this is considered art, why is it any different for an AI that is not really petty but writes from the perspective of someone who is?

A fictional character can be petty to another fictional character; a fictional character can’t be petty towards you, because the pettiness in the book is a simulacrum of pettiness that only exists in the book. If we read about a character in a book murdering 1000 people, we don’t call the cops, because the act of murder was only a simulacrum.

If Kendrick and Drake were two characters in a George RR Martin novel, would the song one of them wrote in universe be art

Yeah, we would not give a shit if a fictional character in a book was dating teenagers and then another fictional character called them out. George RR Martin could make it art in his book. He could just write the words “Everyone reacted super enthusiastically to the song, it was a bestseller”. It wouldn’t mean anything because we all know in fiction, you can just write a thing and it happens.

If a thousand years later we forgot that this song was written by fictional characters and just attributed it directly to Martin - the way we just talk about Plato’s Atlantis story, disregarding that the story was originally told by a fictional character within Plato’s other story - would that change anything?

Yeah, we would have made a mistake about art, as we commonly make mistakes about art. We look at Louis CK’s art differently now after it was revealed he forced others to watch him masturbate. We didn’t know that before; now we do, and we look at some of the stuff we said about him at the time and say it “aged poorly”.

You mean a person who plays a character? Like, a bunch of artists do? We incorporate that into our evaluation of their art.

The AI isn’t doing anything–it’s being operated by a human. Ascribing art to AI is like ascribing art to the quill, the ink, the vellum, and the scribe who recorded Plato’s words. The tools used in the creation of art are not artists. The artist is the one with the intent to express something.

The operator of the AI is the artist, creating art via the AI.

I’ve never run into either Kendrick or Drake. For all I know, they could be simulated celebrities, whose actions and images are generated by an advanced AI, after which the media pretends that they are real.

If they were simulated, and everything you read about their feud was fictional and based on a script generated by AI, would the song cease being art?

What do you mean? If there was a well written novel with that being the premise, I imagine many people might “give a shit”. That’s what a good novel (you know, art) does - it makes you care about the characters and their experiences.

Well, presumably the novel would be better written than that, such that when you read it it conveys the emotional experience that listening to the song in those circumstances would have. Obviously, the way you feel about a work of fiction is different than the way you feel about Drake and Kendrick; and you would feel different yet again if it was something that happened to people you knew rather than to strangers whose work you are a fan of (or even strangers you’ve barely known anything about until this hit the news).

No, I mean what I described above.

Scenario one: Kendrick and Drake have beef in real life and one writes a song about the other.

Scenario two: Kendrick and Drake are two fictional characters in a novel about the rap world. The characters have beef, and at the climax of the novel one writes a song about the other. The song’s lyrics are included in the novel, and it is recorded for the HBO adaptation.

Scenario three: Kendrick and Drake are fake celebrities in an AI generated media ecosystem. A Large Language Model scripts all of their actions; AI video generators create all appearances by them, and their simulated actions are reported on by a complicit media. All of their live appearances are done by holograms. Their beef is entirely fictional, the plot generated by AI, and the song was generated by AI as well.

Is the song in Scenario 1 art? That’s the world we live in, and I think we all agree that the song is art in our reality, yes?

Is the song in Scenario 2 art? I think so. It’s part of a larger work of art, but that doesn’t invalidate it being a work of art in its own right. (in the same way that Let It Go is part of the larger work of art Frozen, but is also a work of art on its own; why yes, I do have to listen to too much of my daughter’s music…). In fact, I’d argue that both the original lyrics in the book and the song as it appears in the HBO adaptation would be considered art.

And so, that brings us to Scenario 3. What does this song, generated by AI for a fictional beef, lack that both Scenario 1 and Scenario 2 have? I would argue that the answer is, “absolutely nothing”.

The “soul”-based AI described by the FBI is pretty fire-and-forget. No one would mistake the political trash that AI posts on Twitter for art, but one could imagine an advanced form of this AI, say 50 years from now, simulating an entire media ecosystem of fictional celebrities, each of whom generates art based on the personality and traits associated with them.

Here’s the existing AI network Russia used to run thousands of fake accounts - each with their own history, personality, and agenda - to spread misinformation: (pdf warning)

You could argue that the person who put together the media ecosystem by deciding what sorts of characters should populate it is the artist, even if all he did was describe a hundred quirky news anchors for the AI to work with. But what if they just went to “randomcelebritygenerator.com” and copy-pasted another AI-generated list?

Sure, but then it’s not really a question of “who is the artist” but a more general question of “who is sapient”. Of which “have intent to express themselves through art” is but one component.

For the Russian network, the operator’s intent is both clear from a large scale view, and perhaps subtle when looking at each generated piece. I don’t see any entity with volition in the AI system itself.

You keep on harping on the text of the piece as if that has primacy over what that text points to. Like, if there was a video that came out of a guy killing another guy and we put the first guy in jail over it, then yes, if it’s later found out the video was fabricated and the second guy was still alive, we should obviously let the first guy out of jail. The video was not important except insofar as it pointed to a real thing that happened in the real world. It’s not possible to generate an AI video that we know is an AI video that actually puts a person in prison because the act of generating a video does not kill the person in real life. The cause and effect chain here is totally backwards.

If an AI went and killed a person solely in order to generate the video of the person dying, then the “AI” didn’t do anything. You would go after the person who made the AI do that, because that’s the person who is imbued with intentionality. If the AI “accidentally” killed a person, then it’s exactly the same as the difference between a person intentionally killing someone and a person accidentally killing someone. But it’s incoherent to say that the AI can possess the intention to kill someone, because AIs need to be imbued with intention; only we frail, fragile humans have intention arise de novo.

I feel like I keep needing to hammer this point home: the artistry of Not Like Us cannot be separated from the way it impacted these two real, flesh-and-blood, actual people, and how one of them wanted the other to have a thing happen to them. You can produce all the simulacra of that you want; you can’t produce the actual thing with AI, no matter what you imagine the AI to be, because the artistry is intrinsically bound up in what we can’t be that AIs trivially can be.

This feels too much like making assumptions about how an electronic device must necessarily function. I would not presume to predict how devices will or will not work in the future. You’d make a stronger argument if you simply made statements about the current implementations of AI.

This is simply untrue. The fact that it is possible to get back something close to an exact copy of the training data, as in the example @Darren_Garrison posted, shows the training data is encoded in the AI model…

That’s simply information theory. If the information representing the 0s and 1s of the Jaws poster JPG from the training data were not encoded in the AI model, it would not be possible to recreate the Jaws poster. There are no magical AI fairies at work here.

It’s not a perfect lossless representation of the Jaws poster, but the MP3 I rip from a CD is not a perfect lossless representation either.
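Since the analogy is just lossy compression, here’s a tiny self-contained demonstration of the same principle with JPEG. The filename is a hypothetical stand-in, and this makes no claim about how much any particular model memorizes:

```python
from io import BytesIO

import numpy as np
from PIL import Image

# Lossy round trip: compress aggressively, decompress, compare.
original = Image.open("jaws_poster.png").convert("RGB")  # hypothetical input file

buf = BytesIO()
original.save(buf, format="JPEG", quality=5)  # throw away most of the bits
roundtrip = Image.open(BytesIO(buf.getvalue())).convert("RGB")

a = np.asarray(original, dtype=np.int16)
b = np.asarray(roundtrip, dtype=np.int16)
print("bit-identical:", np.array_equal(a, b))         # False: information was lost
print("mean abs pixel error:", np.abs(a - b).mean())  # small: still clearly the poster
```

Not bit-identical, yet still recognizably the same poster, which is exactly the relationship the MP3 comparison is pointing at.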

No, I agree, but here is my point.

Let’s say that it’s 2045 and DeviantArt, a website where artists post their work and talk to each other about it, has been totally infested by a more advanced botswarm. Each botted account consists of an “artist” with a personality, a tragic backstory, and an art style. The AI has these bots talk to each other in the forums, create art based on these conversations, share it with each other, have more conversations based on that art, and then create more art based on those conversations. It even creates some accounts of “beginners”, for whom it intentionally generates amateurish art, and then it develops their style by tracking who they talk to and what art they view, generating new images accordingly.

In other words, it simulates what has been described as “art in conversation with prior art”.
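Purely to make the thought experiment concrete, here’s a toy sketch of that feedback loop. The class, the image_model.generate call, and every name in it are hypothetical stand-ins, not a claim about how such a system would actually be built:

```python
import random

class BotArtist:
    """A fake persona whose future output is shaped by the art it 'views'."""

    def __init__(self, name: str, style: str, backstory: str):
        self.name, self.style, self.backstory = name, style, backstory
        self.influences: list[str] = []  # styles absorbed from viewed art

    def view(self, other: "BotArtist") -> None:
        # "Art in conversation with prior art": viewing shifts future output.
        self.influences.append(other.style)

    def make_art(self, image_model):
        prompt = f"{self.style}, influenced by {', '.join(self.influences) or 'nothing yet'}"
        return image_model.generate(prompt)  # hypothetical image-model call

artists = [
    BotArtist(f"bot{i}", random.choice(["cubist", "ukiyo-e", "glitch"]), "tragic backstory")
    for i in range(100)
]
# Run the community for a simulated decade: bots view each other's work,
# then post art whose style has drifted accordingly.
# for _ in range(10_000):
#     a, b = random.sample(artists, 2)
#     a.view(b)
#     artwork = a.make_art(image_model)  # image_model left undefined here
```

The recursive part, styles drifting as influences accumulate, is the whole question: after ten years of that running, is the output art, and whose?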

Are the images that get posted to this site after the AI has totally dominated conversation for, say, 10 years considered “art”? What if, through recursive analysis of the art generated, they’ve evolved and developed into new styles that are still pleasing to humans but unlike any currently existing style of human made art (the way that AlphaGo developed totally new Go strategies)?

What if botted accounts only made up half of the traffic, and their art led to changes in the way that the human users make art, too?

And if it IS art, who is the artist? The AI? The generated persona? The person who put the whole system together?