The Open Letter to "Pause All AI Development"

AI-generated art is already, in my estimation, as good as “starving artist sale”, above-the-couch art.

At least some of the experts cited are less than impressed with this group’s conclusions.

Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.

Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT4”.

“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

Her co-authors Timnit Gebru and Emily M. Bender criticised the letter on Twitter, with the latter branding some of its claims “unhinged”.
[…]
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.
[…]
Dori-Hacohen said it was “pretty rich” for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by civil society group Common Cause and others. […] “That has directly impacted my lab’s work, and that done by others studying mis- and disinformation,” Dori-Hacohen said. “We’re operating with one hand tied behind our back.”

https://www.reuters.com/technology/ai-experts-disown-musk-backed-campaign-citing-their-research-2023-03-31/

Because taste is so personal, I’d say the best stuff is as good as some museum stuff. Of course, this is confounded because what two people choose as “stuff they really like” will differ greatly. If you were randomly shown a thousand different things, or more, there is a good chance you’d find one you quite like.

Of course, this is based on the image alone, and not on sincere (or somewhat suspect) blather about theme and the thousand other elements of art deemed important.

For instance, a few days ago I needed an image of a dirt trail through a field to use as a background image. I experimented with a few subject and style prompt elements, and I could absolutely see some of them selling as “couch art” if they were on canvas. Not museum-quality, but certainly Bob Ross-quality. Others were nearly indistinguishable from photographs. (Four examples.)

(The field was needed for the background of this character, the central part of which came from prompting for a robot Easter Bunny, but with outfilling and lots of infilling tweaks involved.)

Every year in November, I set aside a couple hundred hours to make Christmas cards for my clients. I take pictures of their dogs and put them into fun little winter scenes with Santa hats.

I do this for around 800 clients. I’ve got the process down about as far as it can go, but it still takes about 5-10 minutes per card, depending on what kind and how many dogs they have.

I am looking forward to using an AI system that will let me just say, “Take this picture of a dog and put it into a winter scene, and put a Santa hat on it.” Or even better, “Take this file full of pictures of dogs, cut the dogs out and put them into winter scenes.”
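The “cut the dogs out and put them into winter scenes” half of that is already close to scriptable. Here’s a minimal sketch, assuming the open-source rembg library for the cutouts and Pillow for the compositing; winter_scene.png, the dogs/ folder, and the placement logic are all placeholders I’ve made up, not anyone’s actual process:

```python
# Rough sketch of the batch job: cut each dog out and composite it onto a
# winter scene. Assumes: pip install rembg pillow
# "winter_scene.png" and the dogs/ folder are hypothetical placeholders.
from pathlib import Path

from PIL import Image
from rembg import remove  # ML-based background removal

background = Image.open("winter_scene.png").convert("RGBA")
out_dir = Path("cards")
out_dir.mkdir(exist_ok=True)

for photo in sorted(Path("dogs").glob("*.jpg")):
    dog = Image.open(photo).convert("RGBA")
    cutout = remove(dog)  # strips the background, keeps the dog with alpha
    card = background.copy()
    # Naive placement: bottom-centre. Real photos would need scaling first,
    # and the Santa hat still takes a human touch.
    x = (card.width - cutout.width) // 2
    y = card.height - cutout.height
    card.paste(cutout, (x, y), cutout)  # cutout's alpha doubles as the mask
    card.save(out_dir / f"{photo.stem}_card.png")
```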

That’s a good example of a well-defined, repetitive task that’s a great application for AI.

As for this previously mentioned Reuters report …

Critics accuse the Future of Life Institute, the group behind a letter Musk co-signed, of prioritising imagined apocalyptic scenarios.

Indeed. They claim that “Advanced AI could represent a profound change in the history of life on Earth”, without bothering to define what “advanced AI” even means. Meanwhile, as I mentioned earlier, Eliezer Yudkowsky declined to sign the letter because he doesn’t think it’s apocalyptic enough, claiming that if such an AI is developed (again, without defining it), “I expect that every single member of the human species and all biological life on Earth dies shortly thereafter”. He says such an AI may also cause “the Earth and Sun [to be] reshaped into computing elements”.

The FLI are claiming thousands of signatures to the open letter, but I notice that their website is explicitly designed to gather signatures from anyone who wants to send in their name, whether real or fake. They claim they are “vetting” the names, but I wouldn’t count on the thoroughness with which they’re doing so. The infamous Oregon Petition disparaging climate science also had tens of thousands of signatories following a similar campaign, only it turned out that virtually none of them were credible climate scientists, the vast majority weren’t scientists at all, and some of the distinguished signatories bore less-than-credible names like Donald Duck and Darth Vader.

Also notable is that Elon Musk’s unhinged ideology, currently running amok on Twitter, is one of the main reasons that he distanced himself from OpenAI and is now trying to hobble it while developing a rival:

In recent months Musk has repeatedly criticized OpenAI for installing safeguards that prevent ChatGPT from producing text that might offend users. Musk, who co-founded OpenAI in 2015 but has since cut ties with the startup, suggested last year that OpenAI’s technology was an example of “training AI to be woke.” His comments imply that a rival chatbot would have fewer restrictions on divisive subjects compared to ChatGPT and a related chatbot Microsoft recently launched.
https://www.theinformation.com/articles/fighting-woke-ai-musk-recruits-team-to-develop-openai-rival

Like this one?
Bob Marley, N/A, Musician, Musician

There are a lot more signatories now, including someone who was on the Dean’s List.

Now that’s getting into the spirit of the Oregon Petition! :smile:

I was going to add Darth Vader but I can’t be arsed to set up a fake DarthVader@deathstar.com email account just for that. But Donald Duck (d.duck@disney.com) would surely add at least as much gravitas as Elon Musk’s signing!

This was simpler with genetic engineering than it is with AI, because it was easier to define what it was. You could simply write a law that says no human cloning, and no altering the DNA of a human embryo. With AI it’s much more nebulous. How are you going to define what ChatGPT does that makes it problematic? Are you going to ban all neural networks, or only those that take natural language as input and output? If the latter, how are you going to define natural language? And even if you do manage to craft a legislative ban, how are you going to prevent me from running whatever programs I want on my computer in the privacy of my home?

Okay, here’s something I’ve been wondering about, and am still not sure of, that greatly affects my point of view on all this: Is there a way for AI art or writing to be completely divorced from existing work, not drawing from or “borrowing” from outside sources at all? I get the impression that the answer is no. If that’s the case, do the creators of the “data” the AI uses deserve any sort of credit or compensation? Given a large enough dataset, is it reasonable to fear AI supplanting further human creation entirely in any kind of context?

What would that even mean? Absolutely 0% of human creative output is completely divorced from existing work.

Copyright law draws a fuzzy threshold for what it means to be derivative. Ordinary lookalike work generally isn’t derivative as long as it doesn’t directly copy. We don’t necessarily have to set a different threshold for AI, but setting a different standard probably demands an argument for why.

Prohibited derivative work is identified as using specifically protected items of intellectual property such as named characters, fictional locations, novel ‘invented’ devices, proprietary language constructs, and anything that is explicitly trademarked and doesn’t fall under ‘fair use’. As you noted, all human-conceived works draw in some way from prior experience, and most intellectual product that falls into a genre is almost by definition derivative of some original defining work; e.g. all stories about vampires and vampire-like creatures are largely derivative of Bram Stoker’s work, which itself was a synthesis of folklore and the history of Transylvania. While that isn’t exactly the same process of synthesis that Bayesian generative engines use to ‘create’ products in response to prompting, it is close enough that I don’t think you can make a legal distinction based upon human versus ‘bot-generated works.

The real problem is that a generative AI can produce ‘new’ work at a prodigious rate that would make Asimov look like a slacker, and can essentially do so without limit or break. Why hire a staff of writers who take a week to create a script for a sitcom when you can have a ‘chatbot’ generate a script in minutes, and if it isn’t the greatest thing ever put to film, well, neither are most sitcoms. If you are a television exec looking to fill out a schedule with minimal costs and are indifferent to salient aspects of the creative process, why not have a ‘bot just generate a season’s worth of scripts in a couple of hours and then have a couple of script doctors run through and smooth out the dialogue?

Stranger

I agree that this is potentially a good argument, or at least the start of one. The current laws are calibrated around the fact that creating a new work, even if strongly inspired by an existing work, still requires a fair amount of effort. AI drastically reduces that threshold of effort. I think this is particularly evident in the case of style transfers. An artist needs a reasonable level of competency to ape another artist, to the point where it’s likely they’re bringing more to the table than just that. Perhaps not so much with AI.

I’m not saying we shouldn’t have different laws for AI, but we should be clear-headed about what we’re doing and why.

I think it is even more difficult than that. Current generative AI works by a user giving the tool ‘prompts’ to direct what it generates, and while the ‘bot is doing most of the ‘work’ of producing an image or text, one could argue that the person prompting it is really the creator insofar as they are doing the critical evaluation and correction. It is not dissimilar to how a modern word processor will correct spelling, grammar, and contextual errors while the writer produces the semantics. With a generative AI those lines are blurred, but the legal argument would still hold the person entering the prompts to be the person who ‘created’ the work, the owner of the intellectual property, and (presumably) liable for any misuse of unlicensed works. So you have a situation in which someone may ‘own’ work that is far beyond their technical skill to produce, and at the same time be unwittingly liable for misuse of unlicensed properties that the ‘bot may have integrated without identifying them as such.

How do you deal with that from a legal standpoint? I think there is going to be a lot of interesting caselaw regarding just how much liability an ‘owner’ has for generated content that borrows too explicitly from some protected source, and there will be such a tendency to use generative AI that this will be a frequent occurrence, at least until someone figures out how to put appropriate guardrails in place. Or else we just trash a lot of existing IP law as outmoded anyway, since any characteristic that makes a property appealing can be analyzed by an integrative AI system and used to algorithmically determine how to make a property with similar appeal. Instead of having just a few big choices for, say, a fantasy world like Middle Earth or Narnia, your generative AI can take the prompts and produce dozens or hundreds of fantasy settings and then whittle them down to the few with the greatest appeal. A logical end to this is that these systems end up ‘gamifying’ everything to maximize popular appeal, and whatever replaces Netflix will basically have a new The Witcher-type show every few weeks, which will rise in the popular consciousness and then disappear as the next new thing comes along, with virtually no enduring appeal or residual IP value.

Stranger

The “creator” is basically the gestalt of all human output on the publicly-viewable internet. For instance, in Stable Diffusion, if you generate an image of a cat, you are drawing on the neural net’s interpretation of “catness”, built from every image fed to it that carried the text tag “cat”. How do you propose to compensate 20 million people with 1/20-millionth of a credit for their image’s contribution to that dataset?
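For concreteness, the user-facing side of that is just a prompt against frozen weights. Here’s a minimal sketch using Hugging Face’s diffusers library (the checkpoint name and settings are common defaults I’m assuming, not anything specific to this thread):

```python
# Minimal user-side Stable Diffusion sketch using Hugging Face's diffusers.
# Assumes: pip install diffusers transformers torch (and a CUDA GPU).
import torch
from diffusers import StableDiffusionPipeline

# A commonly used public checkpoint; swap in whichever weights you have.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt taps the model's learned notion of "catness". Nothing in the
# output traces back to any individual training image.
image = pipe("a photograph of a cat").images[0]
image.save("cat.png")
```

By that point the training images have already been boiled down into weights; the pipeline keeps no record of which of those 20 million images shaped the result.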

I think the proper solution is to force people to suck it up and realize when they make something publicly viewable on the internet, they have made it publicly viewable on the internet.

Well, it didn’t stop the Avatar franchise from making a shitload of money…

We’ll have to see if there’s a race to the bottom in creating endless new “cinematic universes”. People still seem attracted to known properties. Disney spinning off new Star Wars and Marvel shows at a prodigious rate doesn’t seem to have damaged their popularity. Generative systems may lower the bar on creating new properties, say ones for which there isn’t quite enough current mindshare and which might have been too risky to attempt otherwise, but media companies which do nothing but churn out new universes will probably be outcompeted by those that have some persistence.

Is there a comparison to music? Plenty of successful singers have subsequently been sued by those who claim their work was essentially copied without credit. If AI produced a song and the result, though ostensibly borrowed from many sources, was similar to an existing work, might they still be on the hook even under current law? Of course, the laws will change greatly because of AI, in ways impossible to fathom.

…that isn’t my problem to figure out. If they wanted to use my work as part of the dataset to train their AI, then all they needed to do in the first place was ask permission.

The practice of insisting that it’s the artist’s fault that their work has been appropriated by tech-bros, so that the tech-bros can make billions of tech-bro money while the artists can literally starve, isn’t a new one, but it is getting old.

The proper solution isn’t to force artists to no longer display their work on the internet. It’s for the AI companies to recognize and respect the intellectual property rights of the original artist.

Nobody needs to have permission to look at something in public view and learn from it.

…but we aren’t talking about people here, are we? It isn’t “people looking at something in public view and learning from it.”

The analogy fails here, because we are talking about photos and images and artwork that are protected by intellectual property rights. And it isn’t just intellectual property rights; it’s a whole big can of worms. If photos and data are being scraped, collected, stored, and analysed, there are obligations that come with that.

Uber was never really “ride-sharing” or a “technology company.” It’s a taxi service.

And AI companies that are scraping, collecting, storing, analysing, and then profiting from artists’ work aren’t just “looking at images,” and it’s disingenuous to pretend otherwise.