Yeah, I didn’t get the idea that the moratorium was proposed to get a handle on things like ChatGPT or DALL-E so that artists get compensated, or anything like that. These guys don’t give a shit about any of that.
I had the distinct impression that there was a sort of intuitive feeling among these folks that AI development is moving super fast, and so is the capability of the models being trained, and that the moratorium is basically a pause to put guardrails in place before something bad/crazy/unforeseeable happens. Not so much a Skynet kind of thing, but more of a “let’s make sure we have good AI monitoring that we understand, enforceable rules about how these systems should behave, and rules about how they should and shouldn’t be deployed before we get much further” sort of situation. And probably a good dose of “here’s how AIs should be taught to be good cyberspace citizens” as well.
I mean, I could totally see someone making a rogue AI that identifies people from otherwise non-identifying information, or one that makes decisions or creates content that’s incorrect or hurtful because it wasn’t trained correctly.
A lot of it comes from the nature of today’s AI. Basically, the developers understand the structures and the math that underlie the AI system itself, but they don’t have any real knowledge of what the AI has actually learned, or how.
For example, if you set up an AI, the architecture and how the neural net is wired up are known. But if you start teaching it to predict the weather, it’s unclear, given the way AIs work, exactly how it’s learning that. We don’t really have a window into its “thought” processes, so to speak. All we can do is say “No, that day’s prediction was off by 5 degrees” or “Yes, you did well!”, and it folds that feedback into its internal weights and comes back with another prediction. Lather, rinse, repeat. But we don’t know how it got from the data it’s been fed, and the training it’s had, to its prediction.
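To make that loop concrete, here’s a minimal sketch of that train-by-feedback cycle in Python with numpy. The “weather” data, the variable names, and the tiny linear model are all made up for illustration; real systems use vastly bigger networks, but the opacity problem is the same.

```python
# Hypothetical sketch of the feedback loop described above: the model
# predicts, we tell it how far off it was, and it folds that error
# back into its weights. The data here is fake, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: yesterday's readings -> today's temperature.
X = rng.normal(size=(200, 3))             # e.g. pressure, humidity, wind
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 15.0 + rng.normal(scale=0.5, size=200)

# The "known" part: the architecture. One linear layer, random init.
w = rng.normal(size=3)
b = 0.0
lr = 0.01

for epoch in range(500):
    pred = X @ w + b                       # the model's prediction
    error = pred - y                       # "you were off by N degrees"
    # Gradient descent: fold the error back into the weights.
    w -= lr * (X.T @ error) / len(y)
    b -= lr * error.mean()

# After training, the predictions are good -- but the learned behavior
# is just an opaque array of numbers. Nothing in here explains *why*
# humidity ended up mattering more than wind.
print("learned weights:", w, "bias:", b)
```

The point being: everything about the setup is known and inspectable, but after training, what the model has “learned” lives entirely in those bare numbers, with no attached explanation.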
So there’s a desire on the part of a lot of people to sort of pause and get a handle on how/where/why these things can be employed, in what capacity, and what safeguards we’ll have in place.