The irony here is that you’re engaging in art. The point you’re trying to make can’t be made directly, so you’re looking for a roundabout way of making it. You’re requesting an act of subordination from us to engage with your argument, because it would be unsuccessful without that subordination.
You are assuming that I’m engaging in art. But what if I had asked ChatGPT to write those responses? In that case, they wouldn’t be art. Since AI can’t create art, you must know beforehand whether something involved AI in its creation before you can know whether it is art or not. Prepare to spend the rest of your life being Schrödingered.
I can quite obviously read the intent, because AI would never be so terrible as to generate, sui generis, such a banal intent so obviously driven by a human insecurity.
In a million years, if you had prompted ChatGPT to simply “respond to this post”, it would not have generated the response you produced.
You obviously care a great deal about the definition of art since you can’t stop making personal attacks in this thread. But I’ve already conceded: AI creations are something other than “art”. But, I could not care less about the distinction.
No, not if they take the approach the programmers of the amazing AlphaZero chess computer have taken. Let me explain. Traditionally, chess programs were created with large opening books and strong chess principles to which they would adhere. This is like teaching an A.I. a particular type of musical style and having it mimic and spin off of that. That crimps creativity and A.I. individuality. It has to.
What the creators of AlphaZero did, instead, was program the board layout and the geometric moves of the pieces and … nothing else. They had the computer play both sides, so that it would look for answers to ALL of the moves on BOTH sides. At first, it played like a blundering novice but, because its ability to calculate is so incredibly fast, it developed at a prodigious rate and has become one of the very top chess computers in the world. I’d like to see programmers take this approach with A.I. musical composition.
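For the curious, the tabula-rasa idea can be sketched in a few lines. This is my own toy illustration, not AlphaZero’s actual method (which uses a deep neural network and Monte Carlo tree search, not a lookup table): the program is given only the rules of a simple game (Nim: take 1-3 stones, taking the last stone wins) and learns entirely by playing both sides against itself.

```python
import random

PILE, MAX_TAKE = 12, 3   # toy game (Nim): take 1-3 stones; taking the last stone wins

def legal_moves(pile):
    return range(1, min(MAX_TAKE, pile) + 1)

def train(episodes=20000, eps=0.2, alpha=0.1, seed=0):
    """Tabula-rasa self-play: the only knowledge supplied is the rules above.
    V[p] estimates how good pile p is for the player about to move."""
    rng = random.Random(seed)
    V = {p: 0.5 for p in range(1, PILE + 1)}  # no prior knowledge at all
    V[0] = 0.0                                # facing an empty pile means you already lost
    for _ in range(episodes):
        pile = PILE
        while pile > 0:
            moves = list(legal_moves(pile))
            # learn from the best-looking move (leave the opponent worst off)...
            greedy = min(moves, key=lambda m: V[pile - m])
            V[pile] += alpha * ((1.0 - V[pile - greedy]) - V[pile])
            # ...but sometimes explore, so every position keeps getting visited
            move = rng.choice(moves) if rng.random() < eps else greedy
            pile -= move
    return V

def best_move(V, pile):
    """Pick the move that leaves the opponent the worst position."""
    return min(legal_moves(pile), key=lambda m: V[pile - m])
```

Starting from values that encode nothing (0.5 everywhere), the self-play loop rediscovers Nim’s known strategy on its own: leave your opponent a pile that is a multiple of 4.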
Granted, chess is a science and good music is subjective. How would an A.I. know what we consider “good” or “bad” music? Well, it could find out via human input categorizing and rating compositions.
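Here is a minimal sketch of how such ratings could be turned into a “goodness” score. The setup is hypothetical, not a claim about any real music system: listeners compare pairs of compositions, and we fit one quality number per composition from those judgements (a Bradley-Terry model, trained here by simple stochastic updates).

```python
import math
import random

def fit_scores(n_items, comparisons, steps=2000, lr=0.05, seed=0):
    """comparisons: list of (winner, loser) index pairs from listener ratings."""
    rng = random.Random(seed)
    s = [0.0] * n_items            # one quality score per composition
    for _ in range(steps):
        w, l = rng.choice(comparisons)
        # probability the current scores already agree with this judgement
        p = 1.0 / (1.0 + math.exp(s[l] - s[w]))
        # nudge the winner up and the loser down when the model is surprised
        s[w] += lr * (1.0 - p)
        s[l] -= lr * (1.0 - p)
    return s

# Toy data: composition 2 beats everything, composition 0 loses to everything.
data = [(2, 0), (2, 1), (1, 0), (2, 0), (1, 0), (2, 1)]
scores = fit_scores(3, data)
```

With enough comparisons, the fitted scores recover the listeners’ ranking, giving the composer A.I. a target for what “good” means.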
I’m just throwing this out there because the present method invites imitation and artistic incursion.
In exactly the same way, an artist can prove that their work was copied and that it contributed a similarly tiny amount to the final work that was produced. Just like in the theft analogy, you might not be able to show exactly which bit of the million dollars that ended up in the hacker’s account was your 0.005¢, but you can absolutely prove it was stolen.
The generated art only comes from the training data, so if your art was used as training data then you can absolutely prove that your art was copied and the end result was a derivative of your work. If copying two works of art results in a derivative work then so does copying 15,000, just like if stealing 5,000 dollars twice is theft then so is stealing 0.005¢ a million times. It’s nonsensical to suggest otherwise.
If they paid for the CDs or streams or whatever the source was, that’s no different than me buying the entire Beatles collection and listening to it hundreds of times until I understand their methods and try them myself. I don’t have to pay them any more than the price of the CDs/streams.
I don’t think it’s true to say that it’s no different. It’s obviously different, because it’s not a human listening and learning - it’s a machine that, whilst it has some similarity/analogy to the learning processes of a human mind, is not a human mind.
It is a different thing that is not exactly like other things that have come before it, therefore it may not be correct to just treat it the same way as we have treated other things.
No, not really. You could perhaps prove that your art is in the model. You wouldn’t be able to prove that it was used in that specific work, much less where.
But you COULD literally show the transaction where that money was skimmed off into another account. That’s where your analogy falls apart. There is an actual financial paper trail showing 0.005¢ leaving account A and going to account B. No such thing exists for your art scenario. Also, physical theft of goods is a different crime with different rules than intellectual property infringement, so it’s moot anyway. In the case of intellectual property infringement, you need to show how and where the infringement happened. Which was the basis of the dismissal (for that argument) – you’d need to actually show WHAT was copied, not just handwave “It’s in there somewhere!”
Not really. If I use an image gen model to make me a Belle Epoque sketch of a puppy in a hat, then you’re going to have a hard time proving that your comic superhero art is part of that image, and where, just because it’s in the overall model training data.
We’re just going to disagree there.
We still don’t know how the human mind works. So what sense does it make to insist that whatever AI does, it HAS to be a different thing? We don’t really know.
If we don’t know how it works, how can you be arguing that it’s the same? Same as what?
It seems highly unlikely that the two things would be the same by accident.
To be clear, whilst arguments of similarity can be useful (X is like Y, so we should treat X the same as Y), they have to be tempered with recognition of the ways in which X and Y are different: not only in their intrinsic properties, but in the effect they have.
For example: a digital camera is quite similar to a human eye, but taking a photograph of a copyrighted work is not the same as just looking at it. A camera creates a more persistent, exact, and faithful copy of the photographed work than the combination of eye and brain does (that’s an intrinsic difference), and it also makes it easy to reproduce copies of the work (that’s an effect).
“AI copies. It does not create.” is another one of those incorrect, valueless one-liners. Sufficiently-advanced copying is exactly what creating is. Take Shakespeare, for instance: Every single one of his stories was copied. He just told them better. And telling existing stories, better, was enough to cement his place as the greatest writer in the English language.
Now, of course, no AI is yet a match for Shakespeare, but that’s hardly a surprise, since no human is a match for him, either. But the current AIs are now better at writing than most humans.
All of which is proof that it’s not, in fact, art. His previous works and his artistic statement and so on might be art, but something that can’t convey anything on its own has failed to be art.
I could not disagree more strongly with this statement. Decoration can exist on its own. Art almost always exists in a context, or even in multiple contexts, that of the artist and of the viewer, which may overlap but are not the same.
Guernica may be something to look at for someone with no knowledge of its context. But its place as one of Picasso’s greatest works requires context and knowledge.
So are objects like ancient ibex atlatls, where almost all context is absent, formerly art that is now just decoration, since all of the important contextual information is irretrievably lost? Or maybe they were always just decoration (or “content”), since apparently they were mass-produced? Would the dried, shriveled remains of a fruit affixed to a wall, if discovered in Pompeii, be considered ancient art?
It once was art. Probably. The degree to which it still is art depends on our ability to at least minimally imagine a context.
Let’s use another: ancient so-called fertility sculptures, which seem to have been sized to be caressed more than seen. We call them art because we have made up a function for them. If we were sure they were something exclusively to be caressed while masturbating, early porn, would we still call them art? They are pretty.
I do not think current AI has a meaning, within a context, that it is trying to get across. It can create reasonable and new facsimiles of what artists create, and we viewers can imbue them with meaning from our own context. But without any artist intent, I think it stays outside my definition.
On the other hand, when Vanilla Ice sampled Under Pressure and used it in his song without getting permission, plenty of people were screaming, specifically the lawyers for Queen’s record company, who sued his arse and made him pay a huge pile of cash.
Neither situation is an exact analogy for training AI on artists’ songs, but I’d say the Vanilla Ice situation is much closer than Todd Rundgren’s.
Isn’t the artist intent in the prompting? I have a vision for what I want to see and convey that to the AI, get an image in return, refine my prompt, etc until I arrive at something that hopefully reflects what my intent was. Sure, it might be “Joe Biden on a Unicorn” or “Jesus Made out of Prawns” but it could also be an image I intend to be meaningful and convey some sort of artistic message.
If I think that a flower vase full of bullets has some artistic message I want to share, does it matter if I prompt for an AI to render that image versus filling a vase and taking a photo versus drawing it on a sketchpad? Any of those could potentially allow others to experience my message.
Damn, I WISH they were using Biden