Regulating AI?

Maybe the problem isn’t what we think it is. All those “Accept cookies” buttons that the EU mandated surely caused computer mouse buttons to wear out that much faster… and Logitech is a Swiss company. Coincidence?

I’d argue that the EU isn’t in the business of enacting regs against problems that don’t exist. What it’s good at is enacting regulations that are expensive and difficult for profits-first-and-only businesses to comply with, so those regulations tend to be ignored, or worked around into uselessness. Essentially, business is saying “We refuse to comply, and you can’t make us.”

The intent of the cookie regs was to stop websites from tracking anyone, anywhere. By the time the lobbyists were done, we ended up with regs that provide little real benefit to citizens, add inconvenience, and hand anti-regulatory types a convenient laughing stock to point at.

All because honoring the intent of the regs would be too hard on profits. So the intent is utterly ignored, and malicious, letter-of-the-law compliance is substituted in its place.

I think the fatal flaw here is the Arms Race. The military of every major nation is striving for a big enough advantage in AI research and application to dominate the world scene, much as the United States has for so long. They are not going to pause in their efforts, and that research will obviously affect far more than purely military applications; it will spill into every other aspect of the human condition. One can only hope that we remain roughly equal, so that, as with nuclear war, no one will want it to happen.

Not all of the prohibited uses are specific to AI, and they’ve been ongoing for years. The cat’s out of the bag. Like nuclear weapons, anyone who wants one already has one, treaty or no treaty (e.g. North Korea and Israel). Remember Newspeak in 1984? We already have it. One day some government will have the ability to massively rewrite internet content, and history along with it. So while I agree that ideally there should be limits, in practice it’s (1) too late, and (2) not enforceable. We’re doomed.

Now California is protecting “brain data”. Surely that was already part of one’s “biographical core”, but it’s probably a small step in the right direction.

Will it make any difference?

How far away are we from public, personalized “Blade Runner” ads?

Sorry, I meant to post the NYT (gift) link that inspired this post. The original link is below as well.

https://www.nytimes.com/2024/09/29/science/california-neurorights-tech-law.html?unlocked_article_code=1.OU4.vkb7.oAM5DQmHHwtM&smid=url-share

https://www.nytimes.com/2024/09/29/science/california-neurorights-tech-law.html