The Open Letter to "Pause All AI Development"

A bunch of IT industry fat-cats, techno-luminaries, and, BTW, Elon Musk have released an open letter calling for an immediate moratorium on further AI development:

I’m not sure yet what to think. Are they spooked? Have they seen something real in the lab that we on the outside haven’t seen yet? Are they angry because their own efforts are running behind the leaders and they just want to hobble the competition until they can catch up? Have they considered how an AI working for e.g. Putin might upset the apple cart that their great wealth depends on? I know I do not know. Yet.

I can’t say that I can take an informed position. As a matter of Board arcana, I can’t even say whether this is a Great Debate or an IMHO. But since I did not see anyone talking about it yet, I wanted to throw it out as a topic of its own in one category or another.

Almost two weeks ago I pooh-poohed the idea that any sort of singularity was imminent here:

Was I more wrong than I knew?

The whole gamut of intellectual property rights (and responsibilities), e.g. patents, copyright, defamation, plagiarism, and the legal framework which held it all together in what was considered a fair and equitable manner, is now a free-for-all.

A tech-savvy primary schooler can now produce a novel in the style of Hemingway and claim it as their own work. Or a “new” manuscript of prize-winning quality is published and nobody knows who the author is.
In the same manner this can apply equally in the corporate, legal and political spheres.

This is just performative hysteria. The risks to society from automation are not specifically from AI but from the same places the risks have always come from – the incompetent implementations of automation.

The fact that recent large language models are (as their name suggests) exceptionally adept at parsing and producing natural language is perhaps a challenge for school administrators insofar as students could use it to cheat on certain kinds of homework assignments, but it’s hardly an existential threat to society. I’m sure that if equipped with voice recognition and speech synthesis these sorts of systems would freak out the doomsayers even more. It’s all about misguided perceptions.

Their tremendous benefits to all facets of society as informational assistants far outweigh those issues. It seems to me that these capabilities take us into an unprecedented new world of accessible knowledge analogous to the revolution created by the Gutenberg press.

There are some very impressive names on that list (and of course some less impressive names like Elon Musk). For that reason, I’m not really sure what is driving it. Certainly somebody like the CEO of Stability AI might like to slow down his competitors so he can catch up. The CEO of Getty Images would very much like to see people stop producing AI art so easily. But there are some serious scientists on there who are not given to quackery, and for whom I have great respect. I’m just not sure what they’re seeing. ChatGPT isn’t a threat to humanity in any significant way.

That’s what a killer robot would say.

Students have been able to do that since 1926. And they’d have been about as likely to get away with it then, too.

Fortunately the early models have rubber skin and are easy to spot.

Faux News has already flooded us with propaganda. Russia runs disinformation campaigns. There are already deep fakes out there.

Trying to cut off the source is misguided. Need to teach sheeple to have a critical mind.

Most people don’t even have a toe in the AI water.
I’m thinking that these cheaters are Early Adopters of AI.

I just had a student come to me this morning, showing me “his” speech script, so he can avoid sitting in detention from noon to 12:30. I glanced at it and said, “This is not your work. We’ll work on it in class (11:15 to noon)”. He insisted that it really was his work. That is, he insisted until I said, “Do you really want me to show you where you copied that from? I can find it quicker than you did.” Then he fessed up.

The administration at my school is making a big deal out of ChatGPT. We’ve only been able to get a couple of such programs to work on this side of The Great Firewall, so we can see what all the hubbub is about and how it will affect our students and thus our teaching.

“Never let your sense of morals prevent you from doing what is right.” ~Salvor Hardin

I like the quote, but looking around I don’t think many are taking the advice.

That’s why it’s in fiction.

Well yes. There have been literary fraudsters from way before 1926.
But they would at minimum have needed to type the manuscript out, and to have read at least one of the originals.

Now they can produce an entire career’s worth of output before breakfast, unseen and unread.

The stuff in long form (novel/essay) gives those more familiar with the original body of work a chance to detect inconsistencies. The 800-word faux infotorial, much less so.

Look Darren, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.

Stranger

If they make it easier to fake creativity and can make false works of art and literature, that will trivialize art and literature and make the real thing worth less. The arts actually do create new things, they are much more important to the human spirit than any “accessible knowledge” of dull facts, and diminishing their value or compromising their integrity is one of the worst things the technology segment of humanity could do to the rest of us.

If it is good enough to satisfy the audience, what makes it fake or false, and the other “the real thing”?

People have long said that you can’t gatekeep what is art. Random splashes of paint on a tarp? Art. A bunch of squares? Art. Poop in a can? Art. But now that creating interesting and complex visual imagery is being democratized and made available to everyone, the “what is art” gatekeepers are pouring out of the woodwork. It is the fear of the guy selling ice cubes on the beach when he notices the mile-high iceberg looming up on the horizon.

Back when genetic engineering - which has yielded great benefits to society and civilization - first became viable, the people involved instituted a brief moratorium to consider the ramifications and propose ethics and safeguards. While not perfect, I do think the pause and the structures put in place have also been of benefit.

I see this as somewhat comparable - a pause to consider possible negative effects and provide some structure does not seem unreasonable to me. Like any new, powerful technology there are potential downsides as well as upsides.

And yes, there is fear in some quarters. Fear of loss of jobs, loss of income, loss of reputations for deep fakes, fear of undue influence in certain areas. I think there is some merit in a pause to address those concerns as well.

I think it’s exactly the opposite. The invention of photography fundamentally transformed the art world, and for the better. Photography took on the task of humdrum realism, while artists moved on to new creative styles like Impressionism, Cubism, and completely non-representational abstraction. AI is no threat to art. Art will thrive as long as there is humanity.

A lot of AI art has no personality. That’s one of the main ways I can identify AI writing, and AI music (AI visual art can be harder for me to recognize unless there are notable flaws which are common in AI art such as extra fingers). This is pure speculation on my part, but I think what we might end up seeing is a higher demand for verified human art. While of course there is some fascination with AI art, it is the novelty of “Really? A machine did this?” more so than the creation itself. I think eventually that novelty will wear off. Art is something that connects humans together with our shared experience that is difficult for current AI to duplicate (and I think it will continue to struggle with this until it is more alive in a sense). So music, writing, artwork that is verified to be done by a human will have an extra appeal.

Beep