Is it unethical (for AI) to invent fake people?

Wondering about Dopers' views on this article. Since my track record of posting paywalled stuff is pretty good, a limited excerpt follows in the next post.

Excerpt

…from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people. The penalties for either offense should be extremely severe, given that civilization itself is at risk.

It is a terrible irony that the current infatuation with fooling people into thinking they are interacting with a real person grew out of Alan Turing’s innocent proposal in 1950 to use what he called “the imitation game” (now known as the Turing Test) as the benchmark of real thinking. This has engendered not just a cottage industry but a munificently funded high-tech industry engaged in making products that will trick even the most skeptical of interlocutors. Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.

What is the harm that has not been with us all along from counterfeit people? What difference does it make if the voice of a counterfeit person is produced by a machine or an author, philosopher, or a fraud? It’s easy to get people to pay attention to any story that refers to AI no matter how nonsensical it is. I expect to see headlines about AIs impregnating women soon.

I dunno, man. Somebody posted a link to "This Person Doesn't Exist" or some such thing, showing pictures of AI-generated people.

Got me to thinking: I haven't seen 99.9999999996278 percent of the world's population. How the fuck would I know? Why the fuck should I care?

I went and bought a car today. I met the guy, shook his hand, talked to him. Did not get butt-fucked by AI today. He looked just like somebody on that website, but had a silly hat. This all seems like Y2K hysteria.

Most people follow the crowd, or what they think of as “their” crowd.

Now a single mastermind or corporation can conjure a crowd of a hundred thousand fake people who each seem, on examination, to be 100% real. And who will all think and act exactly as the Bog Boss wants. Thereby causing the real people to think, act, buy, and vote as the Big Boss wants.

This seems … dangerous … to true democracy.

This already happens; the Big Boss just hires a bunch of kids in Romania to emulate a hundred thousand fake people instead. Look into the 2020 election Twitter-bot farms and all that nonsense.

From the article:

As Harari says, we must “make it mandatory for AI to disclose that it is an AI.” How could we do that? By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning.

This feels like working backwards. Watermarks on cash work because it’s hard to fake the watermark and thus make it look like real cash. But this would be like all real humans needing a watermark so AI couldn’t pass as them, not the opposite. If “all” AI are marked, then all you need is to make an AI without the mark to not only act like a human but now bear the “evidence” of being human by not being watermarked. It’s pretty easy to develop an AI that doesn’t mark itself (we do that now).
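The loophole described above can be made concrete with a toy sketch. This is not any real watermarking scheme; the marker string and function names are invented purely to illustrate the logic: if the rule is "watermarked means AI, unwatermarked means human," then a bot that simply omits the mark gets classified as human.

```python
# Toy illustration of the inverted-watermark flaw. The marker and the
# detector below are hypothetical, made up for this sketch; real
# watermarking schemes embed signals in the text statistics, not a tag.
WATERMARK = "[AI-GENERATED]"  # hypothetical tag that honest generators append

def classify(message: str) -> str:
    """Naive detector: trusts presence/absence of the watermark."""
    return "AI" if WATERMARK in message else "human"

honest_ai = "Lovely weather we're having." + WATERMARK
rogue_ai = "Lovely weather we're having."  # same bot, tag stripped

print(classify(honest_ai))  # classified as AI
print(classify(rogue_ai))   # classified as human: the loophole
```

The absence of a mark is only evidence of humanity if no one can generate unmarked output, which is exactly the condition the post argues cannot be enforced.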

Granted. The difference is one of cost vs scale. Right now a thousand Romanian teenagers can badly impersonate 10,000 Americans for a bunch of money. In the near future one PC can accurately impersonate 500K Americans for negligible money.

That will alter the terms of the tradeoff a bunch.

Here’s a comparison.
Fake, or at least exaggerated or one-sided, news reporting has been with us since the “yellow press” of the 1800s. What Newsmax & Faux have done is orders of magnitude more capable and damaging.

Partisan propaganda AI will make Newsmax seem quaintly twee.


Not that I have any expectation that anything can be done to deliver the regulation the OP and their cites call for. Due to the inherently international and ungovernable nature of the internet, we have re-entered a bygone era of general lawlessness in heretofore civilized countries.

Hollywood had that covered way back in 1977:

To answer the titular question: If a fake person is not clearly marked as a fake person then, yes, it’s unethical. But if it is clearly marked that it’s a fake person then it’s okay.

What to do, to police it? No idea. Probably the answer is to encourage people to get off their butts and off their phones and start spending time with real humans, again.

This would also be my answer for teachers who are afraid of kids using AI to cheat. Take away their devices and have them do their work, in person.

For certain values of “a bunch of money”.

It might be cheaper on AI – you can rent an A100 for around $3-$4 an hour, but you'll also have to pay someone to set it up, and you'll need to keep it running and pace its output so it's more convincing than 100,000 Americans all suddenly replying at once on Facebook. But cost isn't really the barrier at present; plenty of people in poorer nations are willing to do the work for very cheap.
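For what it's worth, the rental figure above works out as follows; the midpoint rate is my assumption, and this ignores setup labor, storage, and bandwidth:

```python
# Rough monthly cost of one continuously rented A100, using the
# $3-$4/hour range from the post (the $3.50 midpoint is assumed).
hourly_rate = 3.50
hours_per_month = 24 * 30          # ~720 hours in a month
monthly_cost = hourly_rate * hours_per_month
print(monthly_cost)                # 2520.0 dollars per month, before labor
```

So a few thousand dollars a month of compute, versus wages for a troll farm, which is the cost-vs-scale tradeoff the earlier posts are arguing about.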

By my calculation, that works out to you having seen just a glimpse of one person’s big toe.
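The big-toe figure checks out, give or take, under the assumption (not stated in the thread) of a world population of roughly 8 billion:

```python
# Back-of-envelope check of the "0.03 of a person" claim, using the
# percentage quoted earlier in the thread. The 8 billion world
# population is an assumption; the posts never state one.
world_pop = 8_000_000_000
unseen_pct = 99.9999999996278              # percent of humanity claimed unseen
seen_people = world_pop * (100 - unseen_pct) / 100
print(seen_people)                         # roughly 0.03 of one person
```

About three hundredths of a person, which is big-toe territory.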

I was intrigued by this “Bog Boss” fellow, so I plugged “all think and act exactly as the Bog Boss wants” into a few AI configurations. He seems pretty bad-ass.

More versions.

Great typo of mine. Thanks for the kidding. Although considering the quagmire that Ukraine has become for one notable Big Boss, renaming him the Bog Boss might be appropriate.

I have no knowing exposure to AIs yet. Just not my hobby. Based on your samples, I don't think they'll fool too many Americans until they learn to spell and use grammar a little better.

Those pics show that the big boss is clearly out standing in his field. Except when he’s out standing in his swamp.

Well, someone has to drain the swamp.

Trump, despite his wrestling career, didn’t really do that. Maybe The Bog Boss Man can?

This quote from the article is completely wrong. It’s not at all difficult to counterfeit the EURion Constellation, which is just a bunch of circles in a particular orientation. The purpose of the EURion Constellation is that almost all photocopiers recognize it and refuse to copy anything that contains it. (Try photocopying a banknote and you’ll see.) It’s not used to recognize legitimate banknotes. If you successfully counterfeit a Constellation (which is pretty trivial), all you’ve done is produce something that can’t be photocopied.

Is that what that was? That’s a relief.

I mean, if you saw your 0.03 of a person through a hole in a bathroom stall, probably not. But I thought I'd give the benefit of the doubt.

I don’t really want to see what AI might do with “Schrödinger’s Stall”.

Of course you completely misunderstood the point of the quote.

It’s not about the EURion as a bunch of printed circles. It’s about the EURion and the copiers / scanners and the laws as a collective system that can reliably prevent creating counterfeit money via electronic duplication.

Harari is arguing for some conceptually similar system that can watermark either the real world or the fake world and is coupled to a practically unhackable discrimination system we can all rely on, plus a reliable enforcement mechanism to ensure watermarks are always applied where needed and never elsewhere. That’s the model EURion supports.