Writers Strike - AI demands unlikely to succeed

And history may prove you right. Or you may be Cliff Stoll.

…I mean, I’ve also got a computer and a keyboard, which obviously weren’t free, and which are also in the hands of the robber barons looking to enrich themselves. But that’s kinda beside the point.

…I like my odds.

I don’t have the same certainty that you have regarding the limits of AI’s creative capability. Given how far it has come in a short time I think it is a very risky bet to take.

Then my “self correcting problem” statement stands.

The writers aren’t the only ones in the media that have their jobs in jeopardy.

Many of these bean counters have their heads on the chopping block as well. Many executives and managers, pretty much anyone with a Master of Business Administration, will be completely superfluous and let go.

That’s not even getting into various crew, gaffers, camera operators, and such. Then there is post production, where AI can also be used to streamline the editing process, increasing efficiency and reducing jobs.

And that’s assuming that there are even actors to be filmed. AI generated images are improving to the point where soon you will be able to feed it a script and it will pop out a finished scene.

Is the WGA going to go to bat for each of those positions as they are threatened by AI? How about outside the industry, as virtually any job that uses a computer will be replaced by that computer, will they stand up against those job losses?

I also can’t take seriously the argument that, since someone played around a bit with ChatGPT 3.5, they know the fundamental limits of AI and know that it can never match a human’s capability, completely ignoring every other failed prediction that computers could never match or best humans at something. AI models are improving at an astonishing rate, and something that is purpose-made is going to be better than a general chatbot. Using ChatGPT 3.5 as a baseline is like saying that humans can’t do something because you tested a 5-year-old on it, and they couldn’t.

And the argument that this gives more power to the studios is also dubious at best, as the demand itself is that more power be given to the studios. If laws are passed that protect intellectual property from being used as training data, then the only ones able to use AI will be those who own a bunch of IP. Disney would love the gift of a rule that no one else is allowed to use any of the IP it owns to train an AI. I just don’t understand why those who complain about big greedy corporations are so eager to give it to them.

I look forward to the day when I can tell my computer, “Show me a movie.” And it will know my preferences and mood, know how much time I have to watch, and create one for me on the fly. It will monitor my reactions, and if it finds that I’m not responding well to a character or plotline, it will adjust those to meet my preferences. That’s the democratization of media.

And absent something catastrophic happening in the world or to my health, I fully expect to live to see that day.

And just to come back to this point, AI is a very different beast from those other two.

Ultimately there was nothing tangible propping them up, nothing different from any other financial investment bubble of the past.

AI is different. Ultimately it produces something that entertains, informs or assists. The value of that product will either aid profitability or it won’t.

Yeah, I don’t understand what point was being made there.

It’s like doubting horticulture because of the Dutch Tulip Mania bubble.

I don’t agree that those are sufficient characteristics to confer personhood. I’ve had dogs that exhibited both, and that didn’t mean they were legally or ethically people.

I think you’re being a bit cavalier here. You’re advocating a restriction on the traditional boundaries of fair use, and I think it’s valid to worry that new restrictions might be overbroad, particularly when it’s not entirely clear what criteria are being used to distinguish between acceptable and unacceptable public use. I think there’s a very good chance that attempting to regulate in this area can lead to limits on existing free speech protections if the regulations aren’t carefully drawn. And I think that risk goes up when you start using established concepts in ways that are completely novel, like using “plagiarism” to describe the creation of the data sets used to train AIs. A long-standing exception to copyright has been “transformative” use, and it’s hard to imagine a more transformative process than reading ~8 million books to develop a statistical model of how often the word “I” is followed by the word “was.” If that’s insufficiently transformative, what other work can possibly clear that bar? It’s hard to see how you could legislate here without severe knock-on effects on traditionally protected speech.

Yes–but they do legally and ethically have protections, whether you call them “rights” or something else. AIs do not have a self that needs any sort of protection. They’re bookshelves and typewriters, not dogs and horses.

I’ve tried to be really clear that I’m not saying it’s plagiarism. I’m saying that it’s roughly analogous to plagiarism; it’s a brand new thing that’s not covered by our existing conceptualization of intellectual property. I don’t think I’m being cavalier at all. On the contrary, folks who are blithely assuming our existing legal protections for intellectual property will suffice are being cavalier.

Current IP law focuses on the product. I think that’s going to become insufficient, for the reasons you describe. Instead, I think we’ll need IP law that focuses on process. When I create something, and you incorporate that into your tool, you’re using what I create in a way I may not have intended. We’ll need separate rights for using something for human consumption, and acquiring something for incorporation into a tool like an AI.

This is not even the real problem, as you are certainly going to see improvements in the AI to address this rather soonish. The real problem is that it CAN’T do the job at all without copying from real people. AI is not creating anything.

Okay, sorry, I missed that you were using plagiarism as an analogy.

That said, I’m not really sure what the problem is here with regards to intellectual property. I understand the problem from a labor perspective - speaking as someone who has no office skills that couldn’t be easily replaced by a chat bot, I’m very keenly aware of the issue there. And @Cervaise has explained really well how this specifically affects writers. And the whole issue with academic cheating is huge, and is going to require massive changes in how we educate.

But, “I didn’t give you permission to show my publicly available text to your AI,” just doesn’t register as a problem to me. If the goal is to hobble this entire field of AI research, well, I’m not necessarily opposed, but it seems there are better ways to do that than through IP law.

The basic point is that AI folks are profiting off of others’ intellectual labor in ways unanticipated. They should pay for the new use of that material.

I have a friend who’s spent decades building his artistic skills as a fantasy artist. He’s made connections, assembled portfolios, done trade shows, and now he makes a living illustrating D&D books and MTG cards and the like.

Folks with AI tools are taking his art (among others) and using what they got from his online portfolios to create derivative works to sell. He’s at risk of losing his income to people who are profiting off his work. They’re building their tools based on his labor, but not paying him for his labor.

I’m just saying, pay people for their work. Don’t use it in novel ways without finding novel ways to pay them for it.

Incidentally, “showing” stuff to an AI is still not the right way to describe it. It’s “incorporating” stuff into the tool. It’s really key to differentiate between a sentient being who learns naturally, and a tool that incorporates work. The latter is both more accurate, and clarifies why payment is appropriate.

I’m curious. Folks who think we don’t need new protections, please help me understand.

Let’s say you write a book, and publish it, and sell it for $15 a copy. I make an exact duplicate of it, credit you with writing it, and sell it for $10.

The law currently forbids my actions. Do you believe the law is, in this case, wise? If so, why? How is the world better for forbidding me from making the book available at a lower price?

That would be plagiarism. Which I thought you said you were not talking about. That’s what an exact duplicate is.

Let’s say you write a book and sell it for $15. I make a Cliff Notes version of your book, and sell it for $25.

Do laws need to be changed so that I cannot profit off your work? Is the world a better place if I cannot?

It’s not actually plagiarism, as it credits the original–but that’s beside the point. I’m not asking what it’s called. I’m asking why you think the law is wise, if it is. Do you think the law is wise? If so, why?

Let’s not say that. Let’s answer the question I asked.

I think that should be regulated to the point of prohibition.

However, if I buy that book, read that book, and create another work in the style of your book, then I should be free to sell that under my name and pay you no further money.

I don’t think I’m obliged to pay you anything more than the original price I paid for your book.

In that case “the playing field is level.” Whose AI production gets purchased? Yours, or the one produced by the studio itself? I’ll give you one guess.

The only way to get ahead is to be able to differentiate product. If you can’t, the house wins every time.

Yeah, okay, I see your point. The problem’s a lot more obvious once you bring up visual AI, where it’s actually directly replacing the original creators, than with chat AIs - ChatGPT isn’t going to give me the entire text of Harry Potter, so the original writer’s career isn’t really impacted, but using an AI to generate an illustration can completely cut the original creatives out of a job.

But unless the illustration is an exact replica, there is no “original creative,” only people who have inspired the illustration produced. I struggle to see the moral or ethical difference between a work “in the style of” created by a human and one created by AI.
There is certainly a practical and logistical difference, but is that a strong enough basis for a clear legislative definition? It seems like a debate that will enrich the legal profession but few others.

Why? The crucial part of the question is, why do you think such regulations are wise? In what way do they make the world better?