The Open Letter to "Pause All AI Development"

In pretty much any forum where AI art is published or discussed, you have people jumping in declaring that it isn’t art, you aren’t “creating” anything, and it is ruthless theft by evil “techbros”. Many of them are very angry and offended about AI art even existing.

The Economist has published a cautionary piece by a noted techno-philosopher

Yeah, I’ve seen that quite a lot. I think some of it is simply anger at the fact AI has wandered into the marketplace for everyday consumer ‘art’, and has bitten off a big share. In the consumer space it matters not a jot whether it’s truly ‘art’, if people - the audience/customers - still consume it as though it is.

But some of it comes from concerns about the huge volume of copyright work that has been used as training data.
It is often argued that this material has been incorporated into the AI models in a way that violates copyright. Often this argument goes hand in hand with the assertion that AI image algorithms are just creating photo collages of existing work. That claim is technically false and, IMO, undermines the credibility of the copyright argument when the two are used together.

The counterpoint is that the algorithms only looked at the art during training and learned what it looked like, much like I could wander around an art gallery, study the works carefully, then attempt my own work in the same style (albeit with less exactness of execution than some AI algorithms can achieve).

I’m on the fence between those arguments. I don’t personally think that getting a machine to learn the look of a thing is very different from getting a human to do it, but on the other hand, the massive, bulk nature of the use of huge swathes of copyrighted material, without even asking for permission, seems like the sort of dick move that good copyright laws should be there to prevent.

Yeah. The ease of bulk copying / exploitation is a quantitative difference that delivers a qualitative change.

As a comparison …
Back in, say, the 1950s & before, nobody thought a telephone directory should be copyrightable. It was just names and addresses, “just data”. And they were not copyrightable as a matter of settled law.

Then, as primitive computers became available, it became possible for upstart companies to scrape the phone company’s data and publish their own competing books at next to no cost of data gathering, whereas the phone company had gone & continued to go to much effort to maintain & curate its data.

Soon enough there were court cases and “mere compendiums of simple facts” became copyrightable under the doctrine that they represented the culmination of great effort that deserved protected reward. Effort that others may not freeload off of by using power tools that made freeloading almost effortless.

The part about power tools was the innovative distinction that formed the legal difference. Prior to the existence of the power tools, the sheer difficulty, and therefore cost, of exploiting the freely available data was deterrent enough to protect the data creator’s interests. Not anymore.

AI & training data, whether it be visual art, music, books & magazines, or 'Dope posts, is the current battleground of the very same war. When newly created power tools make bulk exploitation for private profit possible where it had not been before, what should change in response?


As to the visual arts specifically, there’s a certain cynicism at the heart of it. Much of the world of art and artists is simply people producing a consumer product as a business venture while making pious noises about The Muse. It’s less industrialized / conglomeratized / commodified than the recorded music industry, but the same commercial logic is at the heart of much of it.

The romance of The Muse (and the high prices that successful Muse-channeling can command) sits awkwardly facing off against the cold computers slamming out Muse-worthy works by their thousands per day for pennies apiece.

I also think it is very often Dunning–Kruger people who know very little of how AI training works but who are eager to flail away at their paper tiger. I think they believe that someone specifically set out to deliberately copy the works of specific artists for the purpose of creating a program for copying artists. Which of course isn’t remotely the truth.

The AIs are trained on hundreds of millions to billions of tagged images, with ones listing the name of a specific artist in the tag being a (probably relatively minor) subset of them. The fact that plugging in the name of a specific artist happens to generate images broadly similar in style to the works of that artist is no more a specifically targeted result than the fact that plugging in “frog” generates an image that is broadly similar to a frog. “Flower”, “plaid”, “round”, “giraffe”, and “Rutkowski” are all equivalent data points for the neural nets to attempt to pattern-match. It is just that Greg Rutkowski is more likely to sue than the repeating color pattern “plaid” is.

For myself, having an AI that can create anything I can describe has been my dream for as long as I have grasped the concept. The one specific media reference that comes to mind is a scene in an episode of Star Trek: The Next Generation.

That is the kind of AI I want to have, and say to Hell with anyone trying to stand in it’s way.

Case in point:

From a legalistic standpoint, it is true that using legitimately sourced intellectual property to ‘train’ a generative AI is essentially no different from an aspiring human artist studying the style and technique of others and then imitating what they have learned until developing a unique style (or just being a hack). Similarly, there is really no restriction on taking voice samples from a paid vocal artist who has agreed to ‘reuse’ of the sample beyond the initial work, and then training a vocal language AI to reproduce an indistinguishable copy. But there is an enormous underlying problem: unrestricted use of AI to do these things will put most artists essentially out of work and generally suppress the creation of novel creative content, by virtue of the fact that any new style will be almost immediately duplicated at essentially no cost.

And this is the general attitude that is the root of the real problem with generative AI; it is so easy to use, and produces results at virtually no cost or effort, that it is already undermining real creative industries and may come to completely devastate many forms of creative content development. It’s like stealing out of the candy bin; if only a few people do it once in a while it’s just part of operating costs, but if everybody does it and nobody pays, it is going to go away.

Stranger

Just adding, I started using generative programs in the early 1990s with VistaPro for DOS. It used USGS Digital Elevation Maps (the program came packaged with more than one CD packed with them) or home-rolled TIFFs for creating landscapes, and used fractals to generate various types of trees. You could create individual images, or you could define a flight path (specifying not only the directions but also elevations, speeds, and camera orientations at each marked point) and create flythroughs. A few seconds of video would take days to render on my 486. Here is an example video (not by me).
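VistaPro’s actual tree algorithm isn’t public as far as I know, but the basic fractal-branching idea is easy to sketch. Here’s a toy Python illustration (the function name and parameters are my own, not VistaPro’s) that grows a 2D tree by recursively splitting each branch into two shorter, rotated children:

```python
import math

def fractal_tree(x, y, angle, length, depth, spread=0.5, shrink=0.7):
    """Recursively generate 2D line segments for a toy fractal tree.

    Each branch spawns two children rotated +/- `spread` radians and
    scaled by `shrink`, until `depth` runs out.  Returns a list of
    ((x1, y1), (x2, y2)) segments you could hand to any line plotter.
    """
    if depth == 0:
        return []
    # End point of this branch.
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments = [((x, y), (x2, y2))]
    # Two child branches, fanning left and right from the branch tip.
    for da in (-spread, spread):
        segments += fractal_tree(x2, y2, angle + da,
                                 length * shrink, depth - 1, spread, shrink)
    return segments

# Grow a tree straight up from the origin; a depth-d tree has 2**d - 1 branches.
tree = fractal_tree(0.0, 0.0, math.pi / 2, 10.0, depth=5)
```

Draw the returned segments with any plotting library and you get a recognizably tree-like shape; varying the spread, shrink, and recursion depth is roughly how you'd get “various types of trees” out of one tiny recursive rule.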

Bryce was another cool generative 3D program from back in the day (1994):

Yep, I’ve used Bryce. And Poser. And Blender. And Maya, 3DS for DOS before it became Max, POV-Ray, AutoCAD, I think several others that aren’t coming to mind…

And I just remembered Microlathe, a little program that just let you draw “lines” and rotate them about an axis into a 3D object. I even played with Pixar Typestry back before the whole “movie studio” thing.

I’m not sure I’d call most of those generative. POV-Ray had its Perlin noise function, which could be used for procedural texturing and such, but it wasn’t really capable of producing full environments like Bryce did. Pixar Typestry was cool, but really just a beefed-up version of Microsoft WordArt.

Anyway, all good times back in the day. I may even have some of these still floating around in my archives.

I’m sure I still have all of them. And my old 1990s CD-Rs are for the most part still readable. I found that out recently while shuffling through a pile of discs looking for a specific program that I needed for a Windows 98 laptop for someone (itself needed because he had a valued program that wouldn’t run on later versions of Windows).

This doesn’t make sense to me, and in fact seems to be stretching credibility just to shoehorn the dangers into this discussion.

I saw a lecture Max Tegmark gave a year or so ago about the dangers of AI and the need for a 6-month pause. A 6-month pause would do nothing except give China and other bad actors a leg up on us. Further, he predicted the doom of humankind. The authority with which these people speak is appalling. Without one shred of evidence, they breathlessly assure us of our doom.

Just this year Eliezer Yudkowsky proclaimed this:

“the AI does not love you, nor does it hate you, you are made of atoms it can use for something else.”

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Without a single fact to back any of that up. And there are many, many others.

Without even having created AGI, they speak as if they know how it will act. Sure, it’s possible to build a terminator (I suppose), but why would you? Why not build a super-intelligent Buddha? This whole ‘genie out of the bottle’ stuff regarding AI is scare-mongering of the highest order.

Explain to me please why an AI would be able to have a clear objective, be able to model reality, be able to react to changes in the world and adapt its actions, actively solve problems and overcome obstacles, and be configured to try to optimize solutions (and not build a huge expensive furnace to heat the water), and yet not be able to understand and carry out a simple command like “and don’t harm humans in the process”?

We aren’t talking about non-thinking robots without the ability to process and choose; the whole idea of AI is that, while they are bringing you your tea, they CAN decide not to step on babies. That’s the intelligent part of AI. In fact, AI is supposed to be smarter than humans. Why would it make that kind of stupid mistake?

“Then ‘not doing what we hoped’ leads naturally to becoming a threat to us” (Emphasis mine)
Sorry, no. There are no facts to support this conclusion.

As a side note, Max Tegmark has also claimed, quite confidently, that there is no other intelligent life in the observable universe - not just our galaxy, but in every other galaxy in the known universe. Should I take his word for that as well?

Another one of my fun toys back then: Elastic Reality. I was in college with a credit card, a guy I knew was in high school without one, and he had me use my card to spend several hundred of his dollars earned working at Electric Ave. and More to buy a copy (which I of course copied for myself). I always wanted an Amiga Video Toaster back then, but it was too rich for my blood.

I will never forgive my parents for not buying me an Amiga. (I kid, but I really, really wanted one when I was 10 and still stuck with a Commodore 128).

I have explained it. You seem to just be ignoring what I’m saying and talking past me. Nobody said anything about building the Terminator; that has nothing to do with anything I said.

If you make a machine that is very good at giving you exactly what you ask for, you’ll get exactly what you ask for.
The problem of making sure we completely understand the full consequences of what we want, and ask for it absolutely unambiguously, is not a solved problem.

Sure, you can try to specify ‘don’t harm humans’, but unless you have singlehandedly solved the alignment problem, you will inevitably specify it in a way that has potential consequences you haven’t thought of, but you’re specifying it to a machine that will give you exactly what you asked for, and when you realise that you’re not getting the fuzzy ‘what you think you actually wanted’ version that was in your head, what are you going to do?

“You’re racing toward the edge of the cliff!”
“Yeah, but if we stop someone might get ahead of us!”

You keep demanding ‘facts’ against statements that are by definition speculation about the future, and cherry-picking the most hyperbolic claims as evidence that these notions are unfounded. In fact, there are plenty of well-grounded and well-developed concerns about artificial intelligence from people who have worked in the field for decades, many of which have been reiterated here. It doesn’t even take an AGI to present a significant threat; just something that is widely distributed and upon which we have become dependent, such that no one will be willing to ‘pull the plug’ (even if that option is available), could present a substantial threat, and of course human users of generative AI can do great harm even though the technology itself has no volition.

A true AGI is not going to ‘think’ like a person does, or protect human interests just because it is intelligent, and there is no reason to believe that we can instill an emergent machine cognition system with anything like human ethics. There are plenty of reasons to proceed with caution, which of course enthusiasts dismiss with abandon. “…and say to Hell with anyone trying to stand in it’s[sic] way” is exactly the kind of refusal to consider any consequences or issues before the technology becomes intrinsic to our society. It is as if someone today were presented with a new energy source that is carbon-free but might dump massive amounts of chlorates into the environment, and then failed to even reflect upon whether that might be an issue we can deal with.

Stranger

An Amiga 500 setup was my first “real” computer. Bought used, it came with my first color printer–a tractor feed with a 7-color ribbon that was lifted into various positions as needed (I don’t remember if it was 9- or 24-pin). It also had a color scanner–a tube-based black-and-white security camera mounted on an adjustable sleeve on a pole on a board, with a pair of fluorescent ring lights mounted on the sides of the board. Beneath the camera lens was a rotating wheel with red, green, and blue plastic gels plus an open space. The camera plugged into the back of the Amiga, and to take a color “scan” of something you would choose the appropriate red, green, or blue menu item from the image editing program and capture three images from the video camera, rotating the gels for each color channel, which were then combined.
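That three-exposure trick is easy to reproduce in software today. Here’s a small Python/NumPy sketch of the idea (the function name and details are my own illustration, not the Amiga software’s): each gel-filtered capture is a grayscale array, and the color image is just the three stacked as channels:

```python
import numpy as np

def combine_gel_captures(red, green, blue):
    """Combine three grayscale captures, shot through red, green, and
    blue gels, into one H x W x 3 color image, the way the old
    camera-on-a-pole "scanner" workflow did."""
    channels = [np.asarray(c, dtype=np.uint8) for c in (red, green, blue)]
    if not (channels[0].shape == channels[1].shape == channels[2].shape):
        raise ValueError("all three captures must be the same size")
    # Stack along a new trailing axis: (H, W) x 3 -> (H, W, 3) RGB.
    return np.stack(channels, axis=-1)
```

Since the subject and camera had to stay perfectly still across the three exposures, anything that moved between captures would show up as color fringing in the combined image, which is exactly the artifact those old scans were notorious for.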

That we are racing towards a cliff’s edge is not a given. We could be racing towards a utopia. I could just as easily level the criticism of “cherry-picking the most hyperbolic claims as evidence”, as you did, at this line of reasoning.

And yet, those people haven’t been killed off yet.

There is also no reason to believe we can’t, either. Until AGI is actually realized, the truth is no one knows the extent of what its cognition will be. Further, I didn’t claim it could “protect human interests just because it is intelligent”. Responding to Magetout’s litany of the things that could be programmed into AI, I said it shouldn’t be that hard to also program it not to kill humans. I stand by that.

None of that applies to me. I am not an enthusiast. I’m actually kind of ambivalent about how fast we develop AI. My main involvement in this thread is addressing the attitude of “of course AI will kill all humans”. History is full of dire predictions that didn’t materialize, and I doubt we’ve gotten better at making them. It’s all speculation at this point, with no evidence to date of AI turning on humans and killing them, intentionally or not.

I guess it depends on your definition, but I think true AGI is many decades off.