The Open Letter to "Pause All AI Development"

They said exactly the same thing about the Internet / WWW when it was new.

There is a boon: Wikipedia, and the Google of 15 years ago before it embraced evil. There is also a bane: Faux News, disinformation spread for political warfare and business profit, and social media in general.

If something as basic as the WWW can produce that much of a two-edged sword, I have little doubt ChatGPT and its successors, much less real AI, will have many greater adverse effects salted in with the good ones.

That this stuff is being invented in an era of great social dislocation, stress, and divisiveness, and of income inequality not seen since the Dark Ages, suggests that these early seeds will fall on ground much more suited to bearing Bad fruit than Good.

Color me cautious about this.

See, that is an example of gatekeeping on the definition of “art”. You may require “art” to be defined as something deep and meaningful, but I may define art as “an interesting image that I like”. And that means that it can be 25%, 50%, 75%, or 100% created by an AI. Thinking that interest in computer produced or assisted art is just a fad is, I think, akin to saying that television is a passing fancy and that 640kb is good enough for anyone.

Maybe, but I think you may find that the images you like are ones you like for some reason that is more than “I like this.” Chances are that images you like evoke some emotion for you, some memory, some other connection, or some meaning for you.

Personally, I’m not fully against AI artistic tools because quite often there’s a human guiding the AI towards the desired image. The AI in this case is merely their paintbrush. It is like for my compositions I like to say “The digital audio workstation (DAW) is my instrument.” And in my view, the DAW is as valid an instrument as any other.

Where it breaks down for me is where there’s no (or extremely limited) human guidance, which is occurring with AI music. With music it is “AI, create me a pop song.” Boom. Pop song. Similarly, I believe that with art there will be a broader rejection of visual art that is devised by “AI, generate me a picture.” Or written works that come from “AI, tell me a story.”

TL;DR: AI tools have a profound liberating effect on art because they give people the ability to express themselves artistically, and that’s a good thing. But if it goes too far into such limited human involvement, then I think it will lose its appeal.

Appeal for who? Creator or audience?

Overall I agree with your parsing of the situation. But part of what gives “art” a mythical quality in most human societies is the artists themselves. The act of creativity is powerful and motivating to the creator. And societies tend to elevate creators to a special exalted status.

I have zero skill as a musician or an artist. I would love to experience creating even simple examples of music or art, even just reproducing others’ works would be satisfying at first.

But the fact I can’t operate a piano or a pencil stops me cold. And I’m far too lazy to expend the effort to learn such intricate and difficult physical skills. So instead I create prose, millions and millions of words of prose. I can operate language and a QWERTY keyboard well enough. I get great satisfaction from this.

Bottom line: You as a skilled musician might be mightily bored and frankly insulted by

“AI, create me a pop song.” Boom. Pop song.

I might love the empowerment that gives me to fiddle with that and learn how to guide the AI towards the sorts of motifs and melodies I like.


As to the audience, until / unless the product coming out of the AIs develops a boring sameness, the audience will simply eat it all up.

But we already have threads and threads on how boring and similar pop music has become in the streaming age. Humans did that with little help from computers and none from AIs: both lazy audiences, and skilled but business-savvy producers who’ve learned that the most profitable thing is to stick close to the strange attractor called “mainstream casual listener” taste.


Bottom line:
Art in all its forms is largely a business now. The Muse may strike artists, such as yourself, in basements and small clubs. Beyond those few, though, by and large it’s just another production line pumping out focus-grouped widgets using the most cost-effective tech available.

But haven’t we been considering the possible consequences of AI for 100+ years already? If there isn’t already a consensus to throttle it by now, what point is there to this proposed pause?

You are describing “prompt crafting”.

ETA: @Elmer_J.Fudd two above.

Humans in general and human societies in particular are very, very bad at reacting to gathering problems until a crisis point is reached. Cf. global warming.

There has been lots of cautionary noise in literature and in the punditocracy about the risks of AI. And has been for decades as you say. But until the giant monster is loose in the streets of Gotham City, everybody from ordinary citizen to world leader always has other more pressing things to worry about than their monster defenses.

The fact that leader-like folks are getting excited about AI now, not 25 years ago, just means the threat of disaster is now close enough for them to feel its hot breath on their necks.

If those are likely outcomes of a strong AI being developed, they’re going to happen, no matter if the AI is developed today, ten years from now, or a hundred years from now.

There’s essentially zero chance that a ban on AI development will last more than a few years, so we’re going to see this eventually.

I figure, might as well happen when I can enjoy watching it unfold.

What we’re seeing is the continuing loss of control of hierarchs over humanity. At each step making it easier for the typical person to create and disseminate, those with power try to curtail it. Writing itself, printing books, printing newspapers, photography, audio recordings, movies, radio, television, internet, etc. With the latest increase in computational capabilities, of course monied interests and political powers want to limit what people can do with it. Disruptive technology is disruptive.

Yeah, I didn’t get the idea that the moratorium was proposed in order to get a handle on things like ChatGPT or DALL-E, so that the artists are compensated or anything like that. These guys don’t give a shit about any of that.

I had the distinct impression that there was a sort of intuitive feeling among these folks that AI development was moving super fast, and so was the capability of the AIs being developed/trained, and that the moratorium is basically a pause to put guardrails in place before something bad/crazy/unforeseeable happens. Not so much some kind of Skynet thing, but more of a “let’s make sure that we have good AI monitoring that we understand, rules about how they should behave that can be enforced, and a set of rules about how they should and shouldn’t be deployed before we get much further” sort of situation. And probably a good dose of “here’s how AIs should be taught to be good cyberspace citizens” as well.

I mean, I could totally see someone making a rogue AI that could identify people based on otherwise non-identifying information, or something that would make decisions/create content based on what it’s learned that are incorrect or hurtful because it wasn’t taught correctly.

A lot of it comes from the nature of today’s AI. Basically, the developers understand the structures and the math that underlie the AI system itself, but they don’t have any real knowledge of how the AI has learned what it knows.

For example, if you set up an AI, the environment and the way the neural net is formed are known. But if you start teaching it to predict the weather, it’s unclear, given the way AIs work, just how it’s learning that. We don’t really have a window into its “thought” processes, so to speak; we just have the ability to say “No, that day’s prediction was wrong by 5 degrees” or “Yes, you did well!”, and it goes and integrates that into its internal learning and comes back with another prediction. Lather, rinse, repeat. But we don’t know how it got to its prediction from the data it’s been fed and the training it’s had.
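That error-driven loop can be sketched in a few lines. This is a toy illustration only, not how any real weather model works: a single linear “neuron” fitted by gradient descent to made-up temperature data, where the only feedback the model ever gets is how far off each prediction was.

```python
# Toy sketch of the "tell it how wrong it was, lather, rinse, repeat" loop.
# All data and names here are invented for illustration.

def train(samples, epochs=200, lr=0.001):
    """Fit y ~ w*x + b using nothing but error feedback (gradient descent)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = (w * x + b) - y   # "your prediction was off by `error` degrees"
            # The model "integrates that into its internal learning":
            # nudge each parameter in the direction that shrinks the error.
            w -= lr * error * x
            b -= lr * error
    return w, b

# Made-up data: yesterday's high temperature -> today's high temperature.
data = [(10, 12), (15, 17), (20, 22), (25, 27)]
w, b = train(data)
print(round(w * 18 + b, 1))  # a prediction for an input it never saw
```

Even in this tiny case the opacity shows up in miniature: after training we can inspect `w` and `b`, but nothing in the loop tells us *why* those particular numbers work. A real network has billions of such parameters, which is why there’s no window into its “thought” process.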

So there’s a desire on the part of a lot of people to sort of pause to get a handle on how/where/why these things can be employed, in what capacity, and what safeguards we’ll have in place.

And of course they were right – the combination of the vast resources of the internet combined with capable search engines like Google have given us instant access to virtually all the information in the world in a manner that would have been unimaginable barely 30 years ago.

There’s a lot to unpack there, and I don’t entirely disagree with the need for caution with any major transformative technology. I just don’t share the excessive pessimism that some are expressing.

Let’s take your example of the internet/WWW as a driver of social disruption, divisiveness, etc. It’s obviously been a factor, whereby the power of the internet to deliver information has been perverted to the cause of spreading disinformation for political and monetary gain. But has it been the dominant factor, or just one of many? One can readily make the case for the latter. For one thing, the extreme divisiveness, extreme politicization of social institutions, gun violence, rising income inequality, etc. tend to be largely American problems. The internet is global, yet similar problems don’t seem to exist in other democracies, at least not to nearly the same extent. So it’s hard to argue that the internet is uniquely responsible for them. Additionally, the spread of disinformation is heavily enabled by traditional channels, too, like Fox News and right-wing publishing houses, which thrive because there’s a demand for their bullshit. So blaming current social woes on the power of the internet to deliver information is to some extent getting cause and effect backwards. That said, I should be clear that the fact that the internet enables any dumbass to bloviate to a potentially large audience is certainly not helpful.

The other point I’d make is that it’s never been clear to me what these self-appointed prognosticators of AI doom like Elon Musk are specifically worried about. I read the open letter and it’s once again loaded with vague generalities and short on specifics. I agree with the need for caution regarding how these systems might be deployed, but no more so than with any transformative technology, as has been the case with computers themselves for a great many years. But the demand that AI development be paused is just silly. It’s not like ChatGPT will suddenly animate itself, escape the building, and wreak Godzilla-like havoc on the world.

The one specific risk that I think can be identified is related to the phenomenon called “AI hallucination” and more generally, the reliability of the information that chatbots deliver in such persuasively flawless language. ChatGPT is notorious for it. My understanding is that IBM has not had much success so far with commercializing Watson, and one of the reasons has been its lack of reliability. One of its first intended applications was as a medical advisor to physicians, which resulted in it giving dangerously inappropriate advice:

I generally agree with your post, but I don’t see the point of “verified to be done by a human”. If a work of art connects with me emotionally, I see no reason to care who created it.

@LSLGuy covered some of the pertinent points.

As for “considering the possible consequences” - that has been “considered” in speculative fiction in various media for all that time, but largely dismissed or ignored outside the fan base and the researchers.

Again, it’s a PAUSE, not a stop or a “throttle”. A similar pause in genetic engineering yielded benefits by putting a regulatory framework and rules into place BEFORE the technology was applied to various problems. Likewise, a pause to review what we’re doing might also avoid problems, rather than locking the barn door after the horse is out.

That’s the point right there - to avoid at least some potential negative consequences before they occur.

Apropos of nothing in particular except an amusing digression pertinent to art, I asked ChatGPT to write a soliloquy about artificial intelligence in the style of Shakespeare. I thought it was quite good (but is it Art?). 🙂

Oh wondrous marvel, born of metal and code,
Whose mind surpasses even that of man,
Whose thoughts and actions we may now upload,
And thus, a new era shall now begin.

No more shall we rely on human wit,
For now, we have machines to do our will,
A mind unclouded, free of error or fit,
A god-like intellect, that doth all fulfill.

But what of man, once proud and wise,
Whose role in this new age is unclear?
Shall we fade, and thus be sacrificed,
To this new creation we did engineer?

Or shall we rise, and make it our own,
Harness its power, and bend it to our will?
For though it may surpass us, we have grown,
And thus, we shall remain the master still.

Oh, artificial intelligence, a creation divine,
Whose fate is yet to be seen,
But one thing is clear, that it shall shine,
And forever change what it means to be machine.

I just think that humans will generally value human content. Just one of those little quirks of humanity, I think.

I don’t know about you, but if I’m going to lose my job I’d prefer some warning - and I do speak from experience, having lost jobs in the past to everything from incompetent business owners to advancing technology. Advance warning means I can do some planning, look for different work, etc., rather than simply waking up one morning unemployed and having to deal with the sudden shocks to my personal world.

And that would be a benefit of a short pause - again, a PAUSE, not a “ban” - people whose jobs might be supplanted would have a chance to make plans, seek another profession, etc. New technology causes disruptions, but I don’t see anything wrong with trying to moderate the worst shocks to the system. Do you?

Or we could proceed the way we did with household electrification at the end of the 19th/early 20th centuries, which resulted in various sorts of electrocutions, house fires, and so forth until we collectively decided that building codes, regulations, and requirements for education and certification of professionals to work on that sort of thing were a good idea.

The problem is, by its very nature, we really have no idea which jobs will be destroyed by this technology. It was easy to predict that cars would wipe out horses as transportation, or that airplanes would beat out ocean liners, but this?

Who knows, really, what jobs are at stake? We could spend a decade worrying about this, and still get caught by surprise when AI takes over, I don’t know, household plumbing, maybe?

Let the machines take over. We haven’t done a very good job of it, I don’t think we’re in any position to criticize.

I suspect the people actually in the industry have some notion which jobs/careers/professions are likely to be affected, more so than you or I would.

But I can agree to disagree.

Here are some of the images I generated in the last couple of weeks (limited to those that fit on the grid and to the subjects/styles I’m experimenting with at the moment; editing of the raw output image ranges from “none” to “lots”; some images are mildly NSFW). Can you assign an overarching “AI style” label to them, and do you really think things like this will fade from interest/utility?

If there’s one word that can be applied to the entire history of AI development, that word is “surprise”. In the early days of AI, initial results suggested that we were just a few refinements away from human-like intelligence and even AGI, including perfect natural language understanding and perfect translation sensitive to context, register, and language idiom. Surprise! We weren’t. Which led to predictions by skeptics that we never would be. Surprise! Now we are, but it took revolutionary new approaches to get there. Before they were actually built on a small scale, no one predicted the amazing power of generative language models. Now we’re being surprised by how fast they’re evolving. And that’s only one new AI paradigm; there are others, like DeepQA.

Do you really think anyone is in a position to predict where this is going to go, what limits it may or may not reach, or the consequences of the many ways it might be deployed? I sure don’t. I can’t see anything useful possibly coming out of six months of navel-gazing.

To give an analogy, in the early days of Arpanet, the predecessor of the public internet, the emphasis was on information sharing for military and research purposes. Everyone was focused on developing protocols like SMTP and FTP. The “world wide web” wasn’t even on anyone’s radar. When predictions fluttered around about a public “information superhighway”, it was envisioned in conventional terms as something much more rigid and hierarchical than it actually turned out to be. No one imagined the socially transformative influence of tools like blogs and social networks, the Web itself, or the “wiki” concept of encyclopedic knowledge sharing. Predicting the societal impact of new technology is hard, and we rarely get it right. So entrusting the future of AI development and deployment to a roadmap developed by a committee of Elon Musk type self-appointed prognosticators is likely to be worse than useless.