Writers Strike - AI demands unlikely to succeed

The same can be said of pretty much any other quantum leap. Things that an average human could never previously achieve by themselves, tasks that once involved the employment of scores of people, are now a mundane matter.

That is the tough challenge for those who seek to protect such jobs. I’m not confident that protectionist policies are the way to go. Hobbling the motor-car in order to protect the farrier’s trade was never going to work.

That’s not analogous, because the motor-car didn’t steal iron from the farrier.

On the other hand, railroads were built by mass appropriation of land that belonged to other people. They were a quantum leap forward, and plenty of people were displaced from their property and their livelihood by that leap.

We can do better.

Edit: Remember also that quantum leaps forward often require new laws and regulations to deal with the new problems they create. As far as I know, prior to the motor car, driver’s licenses weren’t a thing. Were they an attempt at hobbling innovation, or were they recognizing the consequences of the new technology and making reasonable regulations around it?

I’ve no problem with exploring reasonable regulations; if you can make it work and everyone’s happy, then go for it. But the lessons of history seem to suggest that trying to severely limit the new technology from usurping the old is highly unlikely to work in the long term.

Obviously not everyone’s going to be happy. There’s a potential transfer of power here, from the individual artists to the large corporations, and the corporations will fight tooth and nail for minimal protections for individual artists. I’m suggesting that we set things up to ensure that artists can enjoy the fruits of their labor, instead of having the profits from their labor go to the large corporations.

That would be me, and I am making that argument because we are in a thread about the writers’ strike, and writers’ compensation is a salient contributor to their labor action. Writers cannot go on strike because their employers require them to produce bad writing, or because their names are on bad movies or TV shows. Hollywood has made garbage entertainment product for over a century, and quality is not a basis for the union to object. Being properly paid for contributing to that garbage entertainment (hopefully making it less garbagey in the process), however, is.

That being said, completely aside from the strike, you are entirely correct about the horrifying direction of the entertainment industry, steered by soulless jackals pursuing financial growth to the exclusion of all other considerations, including — or perhaps especially — human concerns. The Zaslavs and Chapeks of the world would be deliriously happy if they could fully dispose of the creative element in their industry, starting with writers but hardly ending with them. There’s a reason the networks all leaped onto the reality-show trend, and it’s not just ratings: the idea that you can invent a high-concept premise, recruit a bunch of crazy volunteers, and point cameras at them to magically generate a show is an executive’s dream. Unfortunately for them, they discovered that wasn’t enough; the “reality” needs to be shaped and guided in a writing-ish process along the way. Not every reality show can be Cops, which literally needs nothing more than a camera crew, a cooperative police department, and an editor. But they will keep trying.

A ways up the thread, somebody asked, why don’t the executives just write the framing drafts themselves; and, again, believe me, if they could do that, they would. Even aside from the union rules that currently obstruct this, they actually cannot do this. They simply are not wired for this kind of creative work. That’s one of the fundamental tensions in the industry — the executives know how to yell at people and count money, and literally everything else about the art form is a mystery to them, which drives them crazy. They have no idea why one script is better than another, or why this director is consistently more successful than the others, or how to leverage an actor’s strengths. It’s all a black box to them. There are a couple of examples of when the executives truly and fully took the wheel, and they are quite instructive. My favorite for its educational value is the American Idol movie From Justin to Kelly, which I find to be one of the purest examples of Undiluted Executive Brain available. There is not one molecule of organic or spontaneous human artistic creation in that movie; it’s full-strength unadulterated studio hubris, an immaculate illustration of how the suits think a movie should function. And it failed, and everyone knew it, so they didn’t try it again.

So, yes, you are totally correct that we should all be terrified of what the entertainment landscape might look like in the near future, if the brain-dead bean-counters get their way. I mean, yes, there are some who are apparently looking forward to the experience of being unable to distinguish machine regurgitation from human artistic endeavor, but the rest of us should definitely be on guard. We must become smart consumers and cultural agitators, favoring the preservation of creativity and the aggressive, emphatic marginalization of “entertainment” which is nothing but robotic recycling of tropes and nostalgia.

Still, again — you can’t build a strike action around that. Money, yes. But vigilance about originality and quality? That’s on all of us.

…the answer to this is quite a simple one. Those “countless other examples of labor-saving devices throughout history” can at the very least do as good a job as a human could do, and often would do the job even better.

This isn’t the case here.

I haven’t seen a single example of an AI-written script of at least sitcom length (between 20 and 25 pages) that is even halfway filmable. One that understands story structure, that has a proper beginning, middle, and end. I haven’t seen an AI-written script that understands subtlety or nuance, that can do character development, that can consistently write something funny, that can do romance, that can do anything outside of very broad strokes.

The very first thing ChatGPT did when I gave it a logline prompt was to steal the title and all of the character names from Breaking Bad. It literally called the script “Breaking Bad”. And this isn’t isolated.

There is a new AI writing tool that was launched yesterday. On the front page of the website it says this:

https://ucff939249d082acb91f70526a82.previews.dropboxusercontent.com/p/thumb/AB7pe_6yO2dFZyZYOw7tCgB0Ag3mqdfX5xs1C7XlvgIVe0i0OJ1tQKJD_oRJy24oViuH-kBi2Bb89CGOfa_7ZsjRLG2MXXQz4T2EauhcZfSIZDjOsVMwWyHJ_jGATSbOq_DRKvw19rQPU8dALuJvYp7_Q048DEVEnjnOIv15hDm6tylZyZrBtYPJB00buyFltJDo_CzZZrHpPVmX1mByW9rHDUJUiyFRNKnH1RQG6BJAd3BhvMHX5D6vtGPWzSL_VIJiQWLDcLo5Yf8onguILpJuYiS9c0SGnzYxqwPry0I_SCZPd1PXFXG24amLIgaQqcIIlpKuKwy1dr2H2GA1Bn8ev6bJQW4q6rp6NskcVJdc7DIOMZjXXBpyU3r9W-wXhtMy1MHqYa7yMhWNOc2A1K6a/p.png

The disclaimer (don’t do this, plagiarism is against our terms of service) doesn’t cut it, because we don’t know what is happening “under the hood.” That means it’s probably easier to accidentally plagiarize than the pro-AI advocates want us to believe. Unlike ChatGPT, Sudowrite is dedicated creative writing software. It was built for this. But it’s plain to see that there are no safeguards against plagiarism, accidental or otherwise. It’s so obvious they had to be upfront about it on the front page.

It doesn’t work.

AI can’t write scripts. Not good enough to be filmable without having to be almost completely rewritten by a human writer.

And an AI can’t do everything else a human writer does on a show. It can’t break a story down outside of the broadest strokes. It can’t maintain continuity. (check out my “The Wire” f&ck script on the other thread) It can’t rewrite scripts on the fly on set when an actor sprains an ankle. It can’t get a “gut feeling” that a character death might be the wrong thing for a story. It doesn’t have an imagination. It doesn’t have life experiences.

It simply doesn’t work. And there are zero signs that it will suddenly start working any time soon. That makes it fundamentally different to the “labour-saving devices” you allude to.

…the WGA position on writers room minimums isn’t one that is predicated on “compensation.” It’s one based on the premise that an understaffed writers room cannot do the job. Writers need to break the story, then they need to write the story, and they can’t do that properly if there aren’t enough writers in the room.

This isn’t a “quality” thing. This is a “do the basics of the job” thing. Writers write. They tell stories. It’s what they are paid to do. Smaller writers rooms compromise their ability to do that job. So will AI.

And compensation for “rewriting” isn’t even on the table here. It isn’t up for discussion. The entire point of the AI provisions in the WGA negotiations is that they don’t want to get to that point. They want to “head it off at the pass.”

To get to the point where compensation for AI rewriters gets tabled, the WGA would have to yield on their AI position. And I just don’t see that happening. Not with what is at stake. Especially considering AI scriptwriting simply doesn’t work.

(Bolding mine)

Well, no, that isn’t on all of us. How can I, a random pleb living on the other side of the planet, be vigilant about this?

This is something that the WGA can both advocate for and fight against. Which is what they are doing. If you want film and television shows to be interesting and original and to tell stories we relate to, then it is “on us” to support them. But they are in a better position than anyone else to be “vigilant”.

…shall we recap?

I think it’s clear that your summary was not an accurate representation of Left_Hand_of_Dorkness’s position. LHOD’s position, summarised, is that:

  1. Writers don’t intend their works to be used to “train” AI tools
  2. There is nothing in law (that they are aware of) to prevent this
  3. The law could be better
  4. If we don’t address this, then already wealthy corporations and individuals will profit off the labour of creatives without credit or compensation

That position is not accurately summed up as “change is scary and might disrupt the status quo.” This isn’t what was “basically” said. This isn’t even on the same planet. You are projecting.

Let’s be clear, you’re talking about a law to prevent the studying of things. To prevent carefully analyzing a thing that is purposefully made available to the public, in order to understand how and why it works. With the goal of keeping how the thing works a secret, so that the people who do this thing today don’t have automated competition.

Terrible idea, and it won’t work anyway.

Exactly. To call that “plagiarism” and “theft” is the silly twisting of facts into pretzels.

…it wouldn’t be a law that would “prevent the studying of things.”

I’ve linked to an AI creative writing tool that admits on its front page that it will plagiarize on demand. And it will do that not because the AI had “studied the works of Harry Potter.” But because it can regurgitate it when asked to.

As I said in the other thread: it’s about usage. Copyright laws are invariably about how intellectual property can be used. If AI tools were only being used to “study” things, then that isn’t necessarily an issue.

But that isn’t all they are doing. In fact, we don’t even really know exactly what it is doing. But we know that it can be “forced” into presenting plagiarized works. Which means there are no guard-rails in place, and accidental plagiarism is incredibly likely.

The AI tool is being used to create new work, and that new work is entirely dependent on the creative works of others. Take away the dataset and you’ve got nothing. AI is worthless. (It’s worthless now, but that’s another point entirely.) That dataset has value. And the people who created that value deserve to be compensated.

https://ucff939249d082acb91f70526a82.previews.dropboxusercontent.com/p/thumb/AB7pe_6yO2dFZyZYOw7tCgB0Ag3mqdfX5xs1C7XlvgIVe0i0OJ1tQKJD_oRJy24oViuH-kBi2Bb89CGOfa_7ZsjRLG2MXXQz4T2EauhcZfSIZDjOsVMwWyHJ_jGATSbOq_DRKvw19rQPU8dALuJvYp7_Q048DEVEnjnOIv15hDm6tylZyZrBtYPJB00buyFltJDo_CzZZrHpPVmX1mByW9rHDUJUiyFRNKnH1RQG6BJAd3BhvMHX5D6vtGPWzSL_VIJiQWLDcLo5Yf8onguILpJuYiS9c0SGnzYxqwPry0I_SCZPd1PXFXG24amLIgaQqcIIlpKuKwy1dr2H2GA1Bn8ev6bJQW4q6rp6NskcVJdc7DIOMZjXXBpyU3r9W-wXhtMy1MHqYa7yMhWNOc2A1K6a/p.png

That is called “overfitting” and is a flaw that model creators attempt to avoid.

…it’s called plagiarism. It says it, right there, on the Sudowrite home page. This isn’t something that “model creators should attempt to avoid.” It’s something that needs to be avoided, full stop. If the AI writer is incapable of not accidentally (or otherwise) recreating creative works verbatim, then it isn’t fit for prime-time.

And you may be right. A dataset that small and specialized may never be able to avoid overfitting. Which is why LLMs use hundreds of millions or billions of samples instead of a few hundred or a few thousand.
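To make the overfitting point concrete (this is a generic toy illustration, not an example from the thread): any model with roughly as many parameters as training samples can memorize its data verbatim. Below, a degree-7 polynomial is fit to just 8 noisy samples of a sine curve; it reproduces the training points almost exactly while doing much worse on held-out points from the same curve, which is the small-dataset failure mode described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "dataset": 8 noisy samples of an underlying sine curve.
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=8)

# A degree-7 polynomial has 8 coefficients, one per training sample,
# so it can pass through every training point: pure memorization.
coeffs = np.polyfit(x_train, y_train, deg=7)

# Held-out points from the same underlying curve expose the overfit.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE={train_err:.2e}, test MSE={test_err:.2e}")
```

The train error is essentially zero while the test error is orders of magnitude larger; the same dynamic (scaled up enormously) is why a model trained on a small, specialized corpus tends to regurgitate that corpus.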

…and that LLM that used hundreds of millions or billions of samples instead of a few hundred or a few thousand literally stole the title of the TV show Breaking Bad, along with the characters Walter White, Skyler, Jesse Pinkman, and the DEA agent Hank, then wrote a plot outline about them, all from a single prompt that didn’t even mention any of their names.

It’s not ready for prime-time. And I doubt it will ever be ready.

Let’s be clear: AIs can’t study things, so that’s not what I’m talking about. They’re not people. They don’t do people things. If they were people, your use of them would be slavery.

You are extremely wrong.

Cool!

I’m not aware of any standard definition of “study” that specifies only humans can do it.

“Contemplation” and “mental faculties” are characteristics of sentient entities. If AIs can contemplate, if they have mental faculties, we need to be talking about what rights they have.

But clearly that’s not the conversation to have right now, because they can’t contemplate things, nor do they have mental faculties. They’re tools, and they should be regulated as such.

There’s a shitload of equivocation that happens around AIs, where boosters respond to concerns by saying stuff like, “you’re talking about a law to prevent the studying of things,” as though we’re talking about preventing the application of mental faculties to things, as if we’re talking about limiting a well-established human activity like studying. That’s pernicious and obfuscatory. We’re actually talking about a law to regulate the use of intellectual property through new tools, similar to how we’ve often regulated new tools.

Copyright laws themselves were barely existent before the invention of the printing press, because there was no real need for them. It’s only when copying a text became (almost) trivial that the need for copyright arose.

I suggest that AI is as significant a development as the printing press, and we need new laws.