I think that’s the best stab at an equivalence that we can manage at the moment, and I’m fascinated to see how both AI and the attempt to regulate it will progress.
However, I’m dubious that past regulatory examples offer much in the way of guidance. I just don’t think we’ve been here before, and it may not be so much like the leap from handwritten manuscripts to the printing press as like going straight from log-rollers to a Tesla. I’m not sure we can even fully comprehend the complexities of AI, let alone regulate them.
A lot of the stated deficiencies of AI in this thread are real and valid, but against every statement of the form “AI can’t do X” there is an unspoken “at the moment”, and given the rate of progress I think the horizon of real, practical usage is uncertain, to say the least.
I can see merit in guarding against direct plagiarism but the whole area of “in the style of” seems fraught with legislative difficulties.
Also, I don’t see a way of guarding against AI taking the ever-increasing public domain library of works as raw data. As the algorithms get better, there may well be an increased ability to do more with less, i.e. using only that data set as a learning resource.
And how do we guard against scenarios where the material is plausibly one stage removed from the original source? For example, a social media platform sets up a thread for people to post their own versions of movie dialogue “in the style of”, with a clear entry in the terms and conditions that allows it to dump the lot into a machine-learning mincer.
So would I. So do all people who have the ability to read works. Children would plagiarize constantly if adults didn’t tell them not to, and adult humans would do it just as readily if other adults didn’t tell them not to, over and over and over. So, yeah, it’s not exactly shocking that someone could make an AI plagiarize by asking it to.
If all you have on your side is nitpicking vocabulary, you’ve already lost. The AI takes text, picks it apart, generates rules of usage and compiles those rules into an algorithm. Assign whatever vocabulary you want to that. Your law wants to regulate the analysis of text, the creation of language rules, and the compilation of rules into a system that can generate text.
Since when does writing a thing mean you get to tell people whom you have invited to read that thing that they can’t take the text you’ve given them and study it closely? Before you get all nitpicky about vocabulary… people own ChatGPT, and have the right to study works they have legally accessed on the free market with the computer tools they developed.
Writers are going to lose jobs to AI. Just like everybody else who has lost jobs over the course of the industrial and information revolutions.
…let’s be clear here: entering a sentence from a novel into an AI is explicitly not asking the AI to plagiarize for you. All you did was enter a sentence of text. So no, the AI didn’t have to plagiarize here. The user didn’t ask it to. It’s just that there was nothing to prevent it from happening. And it didn’t tell the user what it did.
Despite all of the protestations in this and the other thread that the AI doesn’t plagiarize because it’s just “learning” or just “studying” from the dataset, both this and my other example show that this isn’t true. It might repeat, verbatim, the exact text from a novel if you input a sentence. However, it also might repeat the exact text from a novel if you do something else. With no safeguards in place, we don’t know what will happen.
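For what it’s worth, the kind of safeguard I mean doesn’t have to be exotic. Here is a minimal sketch, assuming Python and access to the source text you want to protect; the names longest_shared_run and flag_verbatim_overlap are hypothetical, and this is only an illustration of a crude verbatim-overlap check, not a description of how any actual AI product works.

```python
# Minimal sketch of a verbatim-overlap check: flag model output that
# reproduces a long contiguous run of text from a known source.
# Purely illustrative; assumes the source text is available for comparison.

from difflib import SequenceMatcher


def longest_shared_run(source: str, generated: str) -> str:
    """Return the longest contiguous stretch of text shared by both strings."""
    matcher = SequenceMatcher(None, source, generated, autojunk=False)
    match = matcher.find_longest_match(0, len(source), 0, len(generated))
    return source[match.a : match.a + match.size]


def flag_verbatim_overlap(source: str, generated: str, min_words: int = 8) -> bool:
    """Flag output that copies at least `min_words` consecutive words of the source."""
    run = longest_shared_run(source, generated)
    return len(run.split()) >= min_words


if __name__ == "__main__":
    # A famous opening line stands in for "a sentence from a novel".
    novel_sentence = "It was a bright cold day in April, and the clocks were striking thirteen."
    model_output = ("The opening goes: it was a bright cold day in April, and the "
                    "clocks were striking thirteen, which sets the tone immediately.")
    print(flag_verbatim_overlap(novel_sentence.lower(), model_output.lower()))  # True
```

The point isn’t that this toy check is adequate; it’s that even a crude filter like this is more than what the user in the example was given.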
Fortunately, we’ve got much more than that. And “our side” (whatever that means) isn’t manipulating language by trying to pretend that AI is alive when it’s not.
It’s really funny that people are predicting that this thing that hasn’t even written a single filmable script (despite numerous attempts) is going to be able to replace the writers’ room. I have no doubt that it will take some jobs. But it will never be good enough to do what a writer does in a movie or TV production.
That won’t mean that the studios won’t replace human writers if they get the chance. But it won’t be because the AI writer would be any good. It will be because the studios have fallen for the con.
That you call this a “nitpick” indicates that you don’t so much disagree with what I’ve said as completely failed to engage with what I said. Given how much time I’ve spent explaining the danger of this sort of equivocation, I’m afraid I need to direct you, like Darren, back to my posts.
“Your side” is the one producing the not-even-wrong straw man that, when people use the word “study” to describe a fundamental element of what machine learning does, they are claiming that an AI is alive or conscious.
…“my side” is the one that thinks creatives deserve to get paid for the work they create. The one that remembers that television and film are things I actually enjoy, and that doesn’t like the prospect of watching these institutions get bankrupted because a tiny handful of rich assholes don’t understand how the creative process works.
And as for straw men, I note that you still haven’t apologised for claiming incorrectly that LHOD’s stance was “basically that change is scary and might disrupt the status quo.”
…it isn’t part of any subtext. You are projecting.
The reasons that the WGA are fighting for change have been laid out extensively in this and the other thread. If the studios get their way (and I’m not just talking about AI), then the doors will be closed for diverse writers. It means there will be no pipeline to showrunner. It means writing opportunities will disappear. It means our entire media landscape will get more homogeneous. And ultimately it will bankrupt the studios.
This is not about “fear of change” itself. “Change” on its own can be fine. And the post you quoted from LHOD didn’t basically say “that change was scary.” It said:
- Writers don’t intend their works to be used to “train” AI tools
- There is nothing in law (that they are aware of) to prevent this
- The law could be better
- If we don’t address this, then already wealthy corporations and individuals will profit off the labour of creatives without credit or compensation

There was no subtext there. It was just text.
…the writers aren’t concerned that AI writers will be better than them. Everybody I went to film school with is a better writer than an AI writer would ever be. Any member of the WGA, even the absolute worst writer in the guild, would run circles around the AI writers.
The concern is that the studios don’t understand this. I’ll point to “pivot to video” as the exemplar of this. Almost every media outlet jumped on the pivot bandwagon, but it turned out the metrics were false and people weren’t actually watching as much video as claimed. That didn’t matter, though, because thousands of jobs were already gone by the time anyone figured it out.
They will jump on any new “shiny” thing, from NFTs to crypto. AI is just the latest thing. It doesn’t matter if it works or not.
Yes, and that is part of the problem in this conversation that I am aware of: you have been focusing on AI as it pertains to the WGA strike, and I have been thinking of all uses of generative AI in general. The writers’ strike is just one very small component of the disruption that will come about from cheap, easy-to-use, and widely available generative AI. Some career paths will disappear. Some new career paths will evolve. Some economic models will no longer be viable and will be replaced by others. There will be winners, there will be losers. That’s how disruptive technologies work. I’m just fascinated to have the privilege of being at the point in history where I can watch it happen. It is like being there when the first hunter-gatherer thought, “Hey, what if I pushed some of these seeds into the ground and waited for them to grow?”
I’m happy to be corrected on this, but my impression was that DG was basing their summary more on this post from very early in the thread.
Which, at the very least, reads as being very wary of the technology and its future use. Maybe “scary” is too emotive, but the general sentiment doesn’t seem to be hugely wide of the mark.
Then the problem is self-correcting, but again, you need to append the unspoken “at the moment” to your assertion.
Ultimately the brutal metric of whether something “works” or not is the financial bottom line. If real people are needed to create a product good enough to profit from then that is what will happen. If AI provides a competitive advantage in doing so then it’ll be used.
…but this thread and the other thread are about the writers strike. I don’t particularly want to get off topic.
Except it doesn’t work.
It can’t write a script yet, and I doubt that it ever will. It doesn’t understand structure. It can’t do a “beginning, middle and end.” It can’t do the basics. It can’t do romance. I dread to see what a sex scene would look like.
It can’t break a story. And I can’t emphasize enough how important breaking a story is to the process.
There are so many things that a human writer can do that an AI writer will never be able to do. In a rational world, that would be the end of it. But this isn’t a rational world, and disruption here will break the industry, just like pivot-to-video broke the media when that happened. Not all disruption is good. Uber drove down incomes, removed worker protections and introduced the “gig economy.”
I’m just fascinated to be at the point in history where we are destroying the fabric of society and pretending it’s great in the name of “progress.” What we are really seeing is that the people at the top, who already have all the money, are seeking new and innovative ways to get even more, plundering resources and making things harder and more miserable for everyone else. AI should be a great thing. But it’s in the hands of the robber barons, who want to use it to enrich themselves.
…AI will never be able to have a gut feeling that killing a character is wrong. It will never get into a passionate argument with other writers in the room about whether a couple should stay together or break up. An AI writer will never be able to have a whimsical encounter in the morning that will convert to a few pages of dialogue in the script that afternoon.
So I’m not going to append “at the moment” here. I stand with what I said.
NFTs failed. Crypto failed. AI scriptwriters will fail.