The Open Letter to "Pause All AI Development"

There’s another current AI “Is it real or is it fakery?” thread

But in that thread someone cites an article on the topic of the recent open letter that is itself interesting:

It’s safe to say that LAION (the people behind many of the datasets these AI systems use) won’t be joining Musk & Co on their holiday. They see the real issues as corporate and government monopolies on AI systems and the lack of transparency from those entities. Calling for a “time out” is just a way to let the “haves” progress in the dark while the “have-nots” stay locked out of progress.

The author of that TIME editorial, Eliezer Yudkowsky, is rather a strange fellow. He’s had many good insights, but he’s also said things like “I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the Earth and Sun are reshaped into computing elements” that suggest he may overindulge in LSD or magic mushrooms.

I put his call for a total moratorium on any further AI research in that same category, particularly his assertion that “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” In my estimation, it doesn’t help his credibility that he’s the founder of “Less Wrong”, a site that I’ve always regarded as dedicated to the preening of self-aggrandizing pseudo-intellectuals.

As a kid, I really wanted to create art. My lack of fine motor control kinda made that hard. I’d sit and spend hours trying to draw something, only to have the result be a bunch of squiggles and jagged lines that in no way resembled what I wanted to draw.

Then, about 15 years ago, I discovered graphic design. I could sit at a computer and create works of art that I could have barely imagined doing before. I can control a mouse much better than I could a pencil. I’m not trained, and probably not all that good compared to someone who is, but I was able to create all the artwork I needed for advertising and promotions.

Then I saw what the AI could do. It could do in seconds what would have taken me the better part of an hour. I wasn’t threatened by this tool, as I don’t get paid for creating art; instead, I benefit just from having the art.

So, when we are talking about art, are we talking about the Mona Lisa or the latest Banksy, or are we talking about the much larger market for graphic design work used by corporations to advertise and promote their goods and services?

You aren’t going to find many willing to pay big bucks for an AI generated work of art, but what you are going to find is many corporations paying substantially less to use AI to generate their designs than to hire a graphic design firm to do it.

Those are the jobs on their way out. They aren’t glamourous, and they aren’t what people think of when they think of artists, but they are the bread and butter of the artistic community.

I see the same for other aspects as well. I once spent an uncountable number of hours playing with music software trying to create a little jingle. It didn’t need to be much, just something catchy and original that I might play in an ad. It was a mixed success: it wasn’t horrible, but anyone with an actual music theory background could have done much better. Now that I can get an AI to make my jingle, it would save me a ton of time, come out better, and be cheaper than hiring a professional.

Once again, when people think of musical artists, they are thinking of people cutting albums, doing concerts, and playing on the radio (or spotify). They don’t think about that bit of tension music that played during the car commercial, or the incidental bit of music that played along with the company spokesperson.

Same with writing. People probably will balk at the idea of reading an AI written novel, but if they replaced the writers for Yahoo News with AI, I think it’d actually be substantially better.

It really seems like people are concerned about the top few percent of works and people that define a genre of creativity, when it’s really the other 95% that make up the real bulk of the work and the pay that are in danger of being replaced.

Yeah. He totally gives the vibe of the aging famous scientist who’s gone off the deep end. Like Kurzweil but with lots less money, or Salk and his vitamin C.

This guy, like Einstein, Teller, et al. writing in the 1960s, has the extra sizzle of being a founding light of what’s commonly viewed as today’s Doomsday technology.


Agreed overall. Folks see the part of the iceberg above the water. But the whole thing will be melted, and the part below is the bulk that will cause the most consequences.

AI represents the possibility that the owners of the means of production won’t need labor. Which scares the heck out of all but the 0.1% who are those owners.

At the same time I’m reminded of an (apocryphal?) story about when Ford installed its first factory-floor robots. During a celebratory press tour, Mr. Ford turned to the UAW president, who was also in the entourage, and said:

Mr. Ford: Good luck getting these workers to join your union.
Mr. Union President: Good luck getting them to buy your cars.

There are two very big pent-up forces here and slow tectonic shifts may give way to meteor-induced mega-quakes, -tsunamis, and -volcanoes. Actually they will give way. The only “may” about it is whether any of us here will live long enough to see it.

Jebus. I feel like I lost some IQ points just from reading your summary.

Similarly, AI also represents the possibility that the means of production are available to all. Which scares the heck out of the current owners.

FTR, you’re probably thinking of Linus Pauling, who became a real Vitamin C promoting nutjob. I don’t think Salk was ever involved in that nonsense.

I asked ChatGPT how it would take over the world and it responded with this:

“As an AI language model, it is not within my programming or ethical framework to provide strategies or advice on taking over the world or any other form of harmful or unethical behavior. My purpose is to provide information and assistance to the best of my abilities, while adhering to ethical and moral standards. It is important to recognize that any attempt to take over the world would be unethical, illegal, and would cause significant harm to humanity. Instead, I suggest focusing on positive and constructive actions that can contribute to the betterment of society and the world.”

I assume Musk is waiting for his anti-“woke” LLM.

Indeed, and as soon as I learned about Musk being a signer, my spider sense tingled.

This letter belongs in the “concerned group with an axe to grind” file cabinet. We’ve seen this before, when contrarians opposing things like tobacco control or action on climate change, or denying the genetics that leaves racists with no case, came out with letters “signed” by “many” experts in an attempt to slow down progress.

On many occasions, the experts consulted were anything but experts in the matter at hand.

On LinkedIn, biostatistics graduate student Kareem Carr noticed this:

SUMMARY

According to Professor Emily Bender, the organization hosting the letter, the Future of Life Institute, is made up of longtermists, a group she describes as people “focused on maximizing the happiness of billions of future beings who live in computer simulations”.

While this hasn’t been a prominent critique so far, the wackiness of their starting premises might be important if they ever make a credible effort to build a political coalition and enact change.

Although more than a thousand people have reportedly signed the open letter, I am not convinced of the rigor of the vetting process. AI luminary Yann LeCun was initially claimed as a signatory, only for it to be later revealed that his name had been added fraudulently. The letter’s level of political support might be much smaller than it seems.

When will the media learn how easy they are to manipulate with letters like this, which have fooled them many times in the past? Well, it is taking longer than we thought.

Reminds me of the infamous Oregon Petition against action on global warming.

I think you mean Linus Pauling. Jonas Salk spent his later years campaigning for mandatory childhood vaccination and support for research on a vaccine for HIV.

Stranger

Quite right. Thank you.

I was wracking my brain to remember which famous bio-scientist it was, but was too lazy to Google. D’oh!


Slight amendment:

Similarly, AI also represents the possibility that the means of production are available to all who can afford to buy some of it. Which scares the heck out of the current owners.

We have seen a great Democratization in the production of publicly available information since the advent of the WWW. Heck, we’re doing that right here right now. At the same time, powerful figures from Murdoch to Zuckerberg to TFG have found ways to sit high atop the pile of all the democratically created content and extract a toll from it, shape legislation about it, and generally herd it in directions that often work to their interests.

In these cases people often overstate both the risks and the benefits of the technology. What is clear is that the governments who make law do not understand the technology and, as always, laws are often far behind technical advances and so deal with them through extrapolation. Furthermore, the risks can be considerable even if unlikely.

I am certainly in no position to reasonably establish the risks and rewards of AI. Some of the names on the list might give one pause. But suppose work in AI is paused. What does this even mean? OpenAI did not sign the letter. Their use is likely more benign than others excited by the potential of this technology, some of whom might pay little attention to a letter. How benign is hard to assess.

Why do humans make art?

Very well reasoned. I like the cut of that robot’s … er … jib?

But I have one quibble. In panel 3 the robot says “Humans evolved to perpetually maybe be up for mating.” Speaking just for me, there’s no maybe about it. :slight_smile:

Just to throw in my 2 cents, regardless of whether or not a pause is a good idea, IMHO it’s not going to actually happen.

Companies are not going to trust that their competitors will pause so they won’t pause.

And even if corporations were to pause, does anyone believe that governments, and especially militaries, aren’t already looking at this and thinking about doing their own development, if they haven’t started already?

That’s not really it. Yudkowsky has been saying the same thing since forever. It’s just that people are paying more attention now due to ChatGPT, etc.

It’s more that Yudkowsky is one of those types that is a genuinely deep and careful thinker, and basically believes he has a watertight argument that AGI will destroy humanity almost no matter how careful we are. He’s written a lot on the subject, but the easiest to digest argument is the “paperclip optimizer” one. Ask an AGI to produce cheap paperclips, and the odds are that it will eventually convert all matter in the universe into them, because it doesn’t understand that we really only want cheap paperclips within certain reasonable limits, but it’s still smart enough to develop nanotechnology or whatever to achieve that goal. Yudkowsky goes into much more detail, of course.

The problem is that these kinds of arguments always tend to brush over a few steps and are likely not nearly as compelling as they might appear to be. It’s not that they’re wrong on the individual steps, but the fuzziness of reality tends to invite other possibilities.

Compare with John von Neumann, who I don’t think anyone can say was a crank or short on game theory expertise:

However, some of the facts speak for themselves. When Germany and Japan were defeated, von Neumann believed that war between the United States and the USSR was imminent and unavoidable. For this reason, while America still had a monopoly on nuclear weapons, he advocated a pre-emptive nuclear strike.

Needless to say, I think it’s wise that the US didn’t listen to von Neumann, and that the USSR didn’t listen to their equivalent as soon as they got nukes. The watertightness of the argument was basically self-referential; it could only be true if it was true. Which, it turned out, wasn’t the case.

I don’t think Yudkowsky is off his rocker or that he should be ignored. There are some clear dangers with AGI, just as with nuclear weapons. But ultimately, what he says should be taken with a grain of salt, just as with any extraordinary claims.

Let’s not forget that we can’t predict the impact. If the Internet were not around, the economic impact of Covid would have been much worse than it was, and the death rate would have been even higher as more people were forced to go to work or starve. That’s not an impact I ever saw predicted, and I was in grad school during the ARPANET era.
Disinformation is nothing new either. Long before Fox there were Democratic and Federalist newspapers. When I was a kid, the world views you got from the Times and the Daily News (which used to be right wing) were very different.
There’s obviously bad stuff, but the positive impact has far outweighed it.

I’m judging a contest, and an AI could hardly do worse than some of the sludge I’ve seen, which is probably on the higher end of the curve.
I suspect we’ll need some way of checking the provenance of written works, just as we already need for the visual arts. Look how many frauds are out there, frauds that fool even some of the experts. There may be a market for AI-generated art, just as there is for cheap posters of the Mona Lisa.

When I was in college and recombinant DNA research was just beginning, there was a panic in Cambridge among those who thought that monster bugs would break out of the labs at Harvard and MIT and kill us all. When my daughter was in college, she did recombinant DNA work in a Bio 101 lab. We should figure out what the impact will be, but we shouldn’t panic.