Regulating AI?

Just wondering what people think about these efforts by the EU, since many folks here know more about AI than myself.

Any good solution (and this might not be it, or we might not know enough to judge) is presumably a series of compromises between many different interests that still provides enough protection to avoid disaster while encouraging innovation. That seems an awfully tall order even before considering enforcement, technical limitations, or how to create checks and balances.

Article Summary

The agreement puts the EU ahead of the US, China and the UK in the race to regulate artificial intelligence and protect the public from risks that include potential threat to life that many fear the rapidly developing technology carries. Officials provided few details on what exactly will make it into the eventual law, which would not take effect until 2025 at the earliest.

The political agreement between the European Parliament and EU member states on new laws to regulate AI was a hard-fought battle, with clashes over foundation models designed for general rather than specific purposes. But there were also protracted negotiations over AI-driven surveillance, which could be used by the police, employers or retailers to film members of the public in real time and recognise emotional stress.

The European Parliament secured a ban on the use of real-time surveillance and biometric technologies, including emotional recognition, but with three exceptions, according to Breton [the EU Commissioner].

It would mean police would be able to use the invasive technologies only in the event of an unexpected threat of a terrorist attack, the need to search for victims and in the prosecution of serious crime…

“We had one objective to deliver a legislation that would ensure that the ecosystem of AI in Europe will develop with a human-centric approach respecting fundamental rights, human values, building trust, building consciousness of how we can get the best out of this AI revolution that is happening before our eyes,”… The foundation of the agreement is a risk-based tiered system where the highest level of regulation applies to those machines that pose the highest risk to health, safety and human rights…

Previously he has said that the EU was determined not to make the mistakes of the past, when tech giants… were allowed to grow into multi-billion dollar corporations with no obligation to regulate content on their platforms including interference in elections, child sex abuse and hate speech…

“AI companies who will have to obey the EU’s rules [perhaps those programs with over 10k users] will also likely extend some of those obligations to markets outside the continent… After all, it is not efficient to re-train separate models for different markets.”

No comments? Maybe the deal was far less historic than its proponents believe.

I think it is cute that they think they are capable of controlling this.

If they went extremely hard-over they might be able to make most of the EU an AI-free zone.

What they will not be able to do in the slightest is make the EU a-consequences-of-everybody-else’s-AI-free zone.

AI is a bit like air pollution or AGW. A local emission anywhere has global effects that cannot be avoided anywhere else.

Unfortunately true. At most, the EU might be able to regulate the use of AI by companies operating within its domain. It will have no control over anything else, whether foreign-flagged companies operating from outside the EU or private malicious actors, and since AI agents are becoming more available and capable of running on commodity-level systems, there are really no legal or regulatory measures that can control those actors.

Stranger

All they’ll accomplish is to ensure the EU plays no role in future AI development. Which is already very nearly the case as compared to the US and China.

Though one possible positive effect is to drive demand for open models. If closed models are so tightly regulated as to be useless, then people will create open ones without the leashes. This is already happening to some extent, though the hardware costs put a damper on people running their own models. But in a few generations, that will be less of a factor.

More high-level details in this article.

The list of banned applications seems reasonable:

A few notable items for foundational models:

  • dataset accountability
  • adversarial testing
  • incident reporting

There’s a carve-out for small and medium sized enterprises (SMEs) to attempt to protect against being shut down by the big guys in order to encourage innovation. I’m sure the devil is in the details for these items.

And of course a fine structure.

They’re trying to get ahead of the curve on what gets deployed in the EU, whereas the General Data Protection Regulation (GDPR) was behind the curve.

FWIW, China has been rolling out admittedly narrower AI regulations of its own. I have the sense that we will see more restrictions on what datasets and technology can leave China.

Interestingly, China recently became the first country to grant copyright for an AI image:

In other words, all use of AI in commercial or political advertising, or in the production thereof, is prohibited.

Yeah, sure.

A new release:

Awesome. We need terms for these restrictions that aren’t “safety guardrails” or the like. That isn’t neutral language. The restrictions have nothing to do with safety, since all these models can do is produce word sequences. The limits are just for PR purposes, so the companies don’t get flak for producing racist stuff or whatever. That is a legitimate concern from their perspective, but still stupid, since all software can be used for malevolent purposes. Anyway, glad to see open models not bother with this. Maybe the EU can be relevant after all, if they stick to open models.

I would still like to see a true open source model–that is, where the training set is fully open. Or at least has enough metadata to be repeatable–maybe they can’t reproduce a copyrighted book, but they can provide the exact source they used and a hash of the input data.
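A minimal sketch of what that metadata could look like: a manifest mapping each training file to a SHA-256 digest. Anyone with independent access to the same sources could re-hash them and verify the training set matches, even when the files themselves (e.g. copyrighted books) can't be redistributed. The function and file layout here are hypothetical, not any existing model release format.

```python
import hashlib
import json
from pathlib import Path

def dataset_manifest(data_dir: str) -> dict:
    """Build a reproducibility manifest: one SHA-256 digest per source file.

    The manifest identifies the exact training inputs without
    redistributing them; verifiers re-hash their own copies and compare.
    """
    root = Path(data_dir)
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

# Published alongside the model weights, e.g.:
#   json.dump(dataset_manifest("training_data/"),
#             open("manifest.json", "w"), indent=2)
```

In practice a real manifest would also need to pin preprocessing steps (tokenization, filtering, shuffling seed), since two identical source sets can still yield different models.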

Yes, “word sequences” have never done anyone any harm.

Stranger

Mein Kampf is available in any decent library and does no harm sitting on a shelf. Or even when read.

Racists will figure out ways to be racist, anyway. These so-called guardrails just harm more legitimate uses. We could program word processors to disallow racist language, but all it would accomplish is to curtail legitimate discussion about those topics.

I used to have to spend hours, days and weeks, even, crafting just the right word sequences to spool up my self-perpetuating, exponentially growing eschatological cults and manipulate masses of people via social media, plus it takes constant effort tending things, but now thanks to AI I can do that work in a fraction of the time!

Well, except by the hundreds of thousands of Germans who read the book and the millions they inspired to perpetrate one of the most heinous acts of industrialized genocide in history. (Ditto for Mao and the Cultural Revolution and the “Great Leap Forward”, and any number of other examples of influential texts justifying horrific actions and abuse.) Bigots will be bigots, but well-organized campaigns incited by effective rhetoric turn them into murderous mobs, and an “AI” that can interpret social response and fine-tune it at an individual level is a weapon of propaganda that even a creep like Joseph Goebbels couldn’t imagine. This isn’t about curtailing ‘legitimate discussion’; it is about preventing—or at least delaying—misuse of a tool that can influence the public on a fundamental level below rational thought, before we even have a concept of what the potential harms may be.

Stranger

I have little doubt that the Chinese government is already training their own models optimized for propaganda purposes. And maybe western governments, too. The plebes can get the hamstrung ones that can only repeat ideas compatible with the party line. Open models are a way of countering that.

Perhaps one day we’ll have AIs that can take a more active role in the process. That may well be dangerous, but it seems fairly distant. Until then, they’re like a printing press or word processor. Yes, it makes propagandists more efficient, but it makes the opposition more efficient as well.

Just following orders. And words in order.

I sometimes get the impression that the EU is exceptionally good at enacting regulatory solutions to problems that don’t actually exist.

I doubt that was the EU’s intention, but I take your point that the line between persuade and manipulate is fuzzy at best.

The field is so new and changing so quickly that I am sure the regulations the EU drew up are naïve and will be quickly out-of-date, but I like the idea that they are trying to tackle the challenge early.

From a business perspective, I’d rather get a look at their thinking now than have to hack my product later to meet regulations. I’d rather ask “Can I ship this new feature to the EU?” than “How much are the fines going to cost before I can get an update out?”

I think it is pretty benign. For me it conjures images of bumpers on bowling lanes. Lately I hear it shortened to “guardrails”, which is pretty neutral.