The Open Letter to "Pause All AI Development"

Each trained AI comes to unique conclusions and produces unique output. The number names are deceptive in implying that they are revisions to existing code (like going from Android 11 to Android 12, or Windows 3.0 to Windows 3.1) when in reality each new “version” is a completely different thing. In image-generating AIs, for example, the output you get for a prompt in Stable Diffusion 1.5 is extremely different from the output you get in Stable Diffusion 2.1, which is extremely different from the output you get in Stable Diffusion XL. Each has strengths, each has weaknesses. None “replaces” the others the way traditional software releases do; they are used side by side. So a judge declaring a specific AI to be infringing is in fact outlawing a specific, non-fungible thing.

I’m skeptical about how well they’ll be regulated. You can already run LLMs and art AIs locally on your home PC if you have even a moderately up-to-date computer and an Nvidia GPU. Technically, you can even run them on a different brand of GPU, or on the CPU, depending on your patience for results. As GPU tech gets better, models that currently require someone else’s equipment will become more widely accessible from home. You might not get 100% of the experience, but you can get pretty close.
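As a rough sketch of that GPU-vs-CPU tradeoff: local runners typically let you offload some number of the model’s layers to the GPU and run the rest (slowly) on the CPU. The function and every number below are illustrative assumptions, not measurements of any real model or tool.

```python
# Toy sketch: decide how many transformer layers to offload to the GPU,
# given free VRAM. All numbers here are illustrative assumptions.

def layers_on_gpu(free_vram_gb: float, total_layers: int = 32,
                  gb_per_layer: float = 0.35) -> int:
    """Offload as many layers as fit in VRAM; the rest run on the CPU."""
    fit = int(free_vram_gb // gb_per_layer)
    return max(0, min(total_layers, fit))

print(layers_on_gpu(8.0))   # a midrange gaming GPU: most layers fit
print(layers_on_gpu(0.0))   # no GPU at all: everything runs on the CPU
```

The point being, there’s a smooth slider between “100% of the experience” and “slow but working,” not a hard cutoff.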

Not to be too blunt about it, but a driving factor in the local AI community (local as in “on your PC,” not as in the tri-county area) is porn. Every day the “updates” feed on a Discord channel I’m on has another new LLM that’s “unfiltered” or “no holds barred.” Go on Civitai to look at Stable Diffusion models and they’re 90% for making porn. These aren’t models being produced by Google or whoever; they’re being trained by hobbyists in the AI community. Again, as the tech gets better, it becomes easier and easier to train the models you want rather than relying on the corporate models. And easier to share those models with anyone who wants them. Ideas like AI being forced to “respect intellectual property” are a year behind the times. Not long ago, I wanted to emulate an art style not in the Stable Diffusion models I was using. I went online, gathered a collection of the artist’s work, spent maybe an hour or so training a LoRA (sort of a mini-model that overlays a full model), and was in business, able to bang out as many renders as I wanted.
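For the curious, the LoRA trick is basically just low-rank arithmetic: instead of storing a whole new weight matrix, you store two small matrices whose product gets added on top of the frozen base weights. A toy sketch, with tiny made-up dimensions and values:

```python
# Toy illustration of the LoRA idea: the "overlay" is the product of two
# small matrices added to the frozen base weights. Dimensions and values
# here are invented for illustration.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

d, r = 8, 2                                    # model width 8, LoRA rank 2
base   = [[0.0] * d for _ in range(d)]         # frozen base weights (d x d)
lora_b = [[1.0] * r for _ in range(d)]         # trained low-rank pair:
lora_a = [[1.0] * d for _ in range(r)]         #   (d x r) and (r x d)

delta = matmul(lora_b, lora_a)                 # full d x d update, rank r
effective = [[base[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d                            # 64 values for a full update
lora_params = d * r + r * d                    # 32 values for the LoRA pair
print(full_params, lora_params)
```

At realistic sizes (say, 4096×4096 weight matrices with rank 8) the pair is roughly 250 times smaller than a full update, which is why an hour on a home GPU is enough.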

Obviously the big corps have the advantage in intellectual resources and server farms, so they’re always going to be the go-to for most people. But the cat’s well out of the bag for meaningful regulation in a broad sense. In fact, IIRC, Stability AI’s response to people bitching about most adult content being cut from the 2.0 training set was (paraphrased) “Whatever, like you’re not going to train a bunch of porn into this anyway.” But substitute whatever you want for “porn,” be it artists, writers, directions for making bombs, etc.

To survive, most people are going to have to play nice and cooperate with regulated corporate AI. Off-the-grid AI is going to be a trifle.

That goes back to Darren_Garrison’s comment about it. If all you want is a chatbot to write you three paragraphs on how awesome your cookware is, then, sure, who cares if it’s regulated? If you want to go outside the guardrails, it’ll be trivially easy to do so. Regulation in name only.

Is that what the companies want? Because in practice they will likely be blamed for misapplications.

Does it matter? To use an existing example, I can go find myself a 3D-printable full-auto conversion kit for a firearm right now. If I used that weapon in a crime, some people are going to blame the gun manufacturer but there isn’t much the company can do about it.

AI is basically the same, except that it’s not illegal and there’s a lot less chance of catastrophic failure on the individual learning curve.

That’s an illegal action though. I am more interested in AI being used through legal means.

So a law is passed that the people making AI must pay for content that the AI trains on. So the AI is available for a fee (it’s all going to go this way anyway) and the makers of the AI pass a share on to the content providers.

Sure, someone will still provide AI that doesn’t compensate the content providers. But the results will only be sold on the gray or black market, which is tiny for a product that simply mimics or knocks off legitimately legal products. Gray and black markets only thrive when they provide things not available on the legal market. Which won’t be the case here.

The internet is the content provider. How do you propose to “compensate” the hundred million people who typed the word “the” after the word “in”, thus showing an AI that “in the” is a valid English sentence fragment? Or the hundred million people who posted a photograph of their cat, thus showing an AI that cats have two round things embedded in a big lumpy thing on one end?
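To make that concrete, here’s a toy bigram counter over an invented corpus. The model “knows” that “the” follows “in” only as an aggregate statistic over everyone’s text at once; there is no individual contribution left to attach a royalty to.

```python
# Toy sketch of the point above: a statistical model learns that
# "in the" is valid English purely from how often the pair occurs.
# The corpus is invented for illustration.

from collections import Counter

corpus = "the cat sat in the hat and the cat slept in the sun".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word_prob(w1, w2):
    """Probability the next word is w2 given w1, from raw counts."""
    follows = [b for b in bigrams if b[0] == w1]
    total = sum(bigrams[b] for b in follows)
    return bigrams[(w1, w2)] / total if total else 0.0

print(next_word_prob("in", "the"))   # every "in" is followed by "the" here
```

Real LLMs are vastly more sophisticated than counting pairs, but the compensation problem is the same: the signal is the aggregate, not any one person’s sentence.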

Why? If I take a Stable Diffusion model and train it on 500 contemporary artworks (in addition to its existing model) and generate renders with it, you’re never going to know. If I train it on 100 old Sears catalogs for generating product copy (in addition to its existing model) and hand you an AI-written catalog, you’ll be none the wiser. Especially if the material has been gone over by a human editor or graphic artist, who’s to say, or prove, which parts came from an “illegal” AI model and which parts I tweaked because I loved reading the Christmas Wish Book as a kid?

Getty is making their own image generator.

I’m seeing these hidden image/text style images spread all over the place after being copied from AI groups, passed along as real photos that coincidentally look like something else or as hidden messages found in actual mass media. Many people fall for both explanations.

More on AI-powered mass surveillance from the Cryptography Engineering author:

However, he makes this sound futuristic or new, when these technologies (sorting through conversations, classifying documents, social-graph analysis; you name it and intelligence agencies have named it previously, and moreover been using it) have been developed and deployed for decades. They are continuing to evolve.

The newest bot in town:

https://venturebeat.com/ai/mysterious-gpt2-chatbot-ai-model-baffles-experts-a-breakthrough-or-mere-hype/

I noticed this:
(screenshot, hosted on Imgur)

Admission that gpt2-chatbot is theirs, or disinfo… who can say?

The thing hit Hacker News and is now being rate limited at the lmsys.org leaderboard, so I wasn’t able to experiment myself.

I am skeptical, because the big corporations, in their letter, failed to include the legal phrases “pretty please”, “no touchbacks” or “no backsies”.

Really interesting infomercial on the newest version of Chat Jeepy Tea. They are really pausing the hell out of that development…

I used to think we were close to AGI after seeing ChatGPT, but after using it for some time it’s clear to me it’s just good at faking intelligence. Observing other companies invest billions of dollars and all independently reach similar limitations has also convinced me that LLMs won’t get us to AGI; throwing more data and compute at the problem seems to have reached its limits. It’s not the shortcut that most people think it is. Next-word prediction won’t get us there.

LLMs aren’t good at reasoning, and this is easily seen when you try to play a game with one. Try to play tic-tac-toe, Connect 4, sudoku, or Wordle with it. Its performance is worse than that of a child.
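For contrast, the exact reasoning those games demand is mechanically trivial. A complete tic-tac-toe win check fits in a few lines, which is what makes an LLM fumbling it so telling:

```python
# A complete tic-tac-toe win check: the kind of exact, rule-bound
# reasoning the post argues LLMs reliably fumble. The board is a list
# of 9 cells, each 'X', 'O', or ' '.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

print(winner(['X', 'X', 'X',
              'O', 'O', ' ',
              ' ', ' ', ' ']))   # X wins on the top row
```

A next-word predictor has no such board state inside it; it only has text that resembles games, which is why it happily declares wins that never happened.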

I’d say we’re decades from AGI, and the fears of AI wiping out humanity as though it’s an urgent threat are unfounded. People scared of the intelligence of LLMs remind me of those in the past century who (foolishly) thought the ELIZA chatbot was intelligent.

Microsoft wants to put a keylogger on your PC and call it AI.

https://www.axios.com/2024/05/21/microsoft-windows-11-ai-recall-copilot-pc

Sorry if this has already been asked and/or covered in this long thread, but I still have no idea what a “pause in AI development” would look like. Everyone in the world says, okay, we won’t work on AI? And anyone who did would be arrested or something? Has anything like that worked in the history of ever?

I don’t believe it was ever a serious proposal. Elmo wanted everyone else to stop working on AI so he could get his to market first and started going on about how ChatGPT was going to go Skynet in like two or three weeks to try and scare people into going along with it.