What to do about Deepfake technology?

The BBC have an article on Deepfake here.

I’ve not seen any, so can’t comment on the quality or lack thereof, but I do remember the furore over the bad superimposition of Lena Headey’s face on top of a body double in GoT. (NOTE: I have yet to watch GoT, so don’t spoil it for me!)

Now the primary current use seems to be porn. But on the positive side it will have a beneficial impact on films, especially with stunt people. On the negative side, if video can be faked well, why not fake a speech or interview of some potentate with malicious intent? Perhaps something like a video of Kim Jong Un appearing to say how much he hates communism? And if this could be done in real time… imagine a live broadcast of some famous scientist predicting a meteor impact or the finding of life elsewhere - except, of course, it’s an actor Deepfaked to look like the famous scientist.

So what should we do?

Develop better critical-thinking skills and be less obsessed with celebrities.

Try not to be gullible … if a serpent offers you an apple to eat, think about the consequences first … before you eat the apple …

This is not a new problem …

Similar to the thing in John Brunner’s Stand On Zanzibar. It’s a gadget that fits on your TV and changes the ethnic identity of the characters portrayed to fit the viewer’s preferences. So you tune in to Friends, and they’re all black if you are.

Probably find a use to substitute oneself for any given movie character. Like, the cable install guy goes to the sorority house for Kappa Delta Slut, with predictable consequences, and he looks just like you! Even accurately depicts one’s…massive endowments. Also, as close as you’re ever going to get to being knob-gobbled by a voracious pack of teenage hotties.

There would be many other applications, I’m sure. That one just sort of leaped to mind. No particular reason. Nothing perv about that. Maybe a little.

You mean you didn’t know they air brush out all Cindy Crawford’s stretch marks?

Even if it’s not to create video that you’re supposed to believe, it can be used to create videos which cause you to doubt the real thing. For example, say that someone has made a kompromising video of a powerful person and is threatening to release it. This powerful person could kommission the production of many deepfake videos which are similar to the real video and flood the internet with them. Then if the real video ever really does come out, it will get lost in the sea of fakes or be dismissed as a fake itself.

Very common on internet message boards …

There’s already been plenty of talk about the difficulty ordinary people have in distinguishing between real and fake news during the past few years. If anyone with a computer and average skills can create, say, a video in which Donald Trump and Nancy Pelosi announce their plan for distributing heroin in public schools, and if it looks indistinguishable from a real video, it’s easy to imagine that the problem will continue to grow worse.

Please note: this is not a personal slam, Bryan, but I’m trying to show why I don’t think “developing better critical thinking skills” is necessarily an option.

Take a look at these pictures. I can tell you that one of them is fake. Can you tell me which one just by looking at them?

https://video-images.vice.com/articles/5a25ae0af1258e32737bace5/lede/1512419093807-Screen-Shot-2017-12-04-at-32425-PM.png?crop=0.6603174603174603xw%3A1xh%3Bcenter%2Ccenter&resize=1250%3A*
ETA: I have no way to link to the video without a spoiler appearing, but I’ll do it anyway: https://www.youtube.com/watch?v=9VC0c3pndbI so y’all can see the early stages of this tech. The video was released not quite 2 months ago.

Here’s another that shows day and night: https://www.youtube.com/watch?time_continue=9&v=Z_Rxf0TfBJE

It depends on how good the fake is. If it’s just a video on the internet, and it’s just amateurs looking at it, then it comes down to critical thinking skills to evaluate the probability of its veracity.

I do think it would be very hard to produce such a video without detectable artifacts that an expert with the right tools could find fairly easily.
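
To make “the right tools” a little more concrete, here is a rough sketch of the kind of check that could be automated (not a real forensic pipeline, just an illustration). Early face swaps often paste in a face that is smoother than the surrounding frame, so one crude heuristic is to compare sharpness inside and outside the detected face region. The OpenCV calls are real; the 0.5 ratio threshold and the assumption that this catches anything reliably are purely for the sake of the example.

# Crude deepfake-artifact heuristic: flag frames whose face region is much
# blurrier (lower Laplacian variance) than the frame as a whole.
# Requires opencv-python. The 0.5 threshold is arbitrary, for illustration only.
import cv2

def suspicious_frames(video_path, ratio_threshold=0.5):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    flagged, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        whole_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            face_sharpness = cv2.Laplacian(gray[y:y+h, x:x+w], cv2.CV_64F).var()
            # A face noticeably blurrier than its surroundings deserves a closer look.
            if whole_sharpness > 0 and face_sharpness / whole_sharpness < ratio_threshold:
                flagged.append(frame_no)
        frame_no += 1
    cap.release()
    return flagged

Real detectors are far more sophisticated, but it gives a flavour of why artifacts that fool the eye can still show up to the right tools.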

How diabolikal.

I’m going with “not a new problem”.

Every era has had to deal with the issue of unfakeable authentication. Royal seals, signet rings, expensive stationery (like vellum) all made the process of authenticating the genuine will of the King sufficiently secure for functional purposes, even though the possibility of forgery existed. Part of the proof of authenticity was not just the bells and whistles attached to the document itself in isolation, but the context of the document. For any Royal proclamation, there are likely to have been discussions and prior correspondence in the history of which the decree in question is embedded.

And there was always the medieval equivalent of blockchain: if you receive an apparently odd decree, you can send a trusted person to ask the King if this is real or not.

Transpose those ideas to the modern era. In the arms race of authentication technology, we have for a short period lived in an era where video did not seem to be convincingly fakeable. If they become so, then we will have to resort to older skills in assessing authenticity: source, inherent probability, checking surrounding info to authenticate, etc.

Thus, if we have a video of The Donald and Chuck Schumer announcing to the media that they were running away together, then we can check diaries to see what they were doing when the video was supposedly made; we can check with those who were supposedly there, etc.
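
To put the “modern royal seal” idea in concrete terms, here is a toy sketch: the original source publishes a cryptographic digest of the genuine clip over a channel you already trust (their own site, a wire service), and anyone can check their copy against it. Nothing like this is standard practice for broadcast video today; the function names and the whole workflow are just an illustration using Python’s standard hashlib.

# Toy "seal" check: compare a local copy of a clip against a digest the
# purported source published through a trusted channel. Uses only the
# standard library; the workflow itself is hypothetical.
import hashlib

def file_digest(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def matches_published_seal(path, published_digest):
    # A match only proves this copy is the one the source vouched for;
    # it says nothing about whether the source itself is honest.
    return file_digest(path) == published_digest

It is the same logic as sending a trusted messenger to ask the King: you are not judging the document in isolation, you are checking it against a channel you already trust.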

For a long time now it has been possible to make convincing fake still photos, but the sky hasn’t fallen. There was a transition period prior to which fake photos had been unthinkable. Naivety about the supposed unfakeability of photos led to the embarrassment of Arthur Conan Doyle over the Cottingley Fairies. But if photos were to emerge of The Donald blowing Prince Harry to get a wedding invite, no scandal would arise. No-one would believe it. It would fail context tests - when did this happen? What do we know those two to have been doing at the time? Who was the photographer? Why was he allowed to be there? And so on.

Re the prospect of drawing the teeth of scandalous real videos by flooding the world with fakes - did that happen with photos? I don’t recall such a thing.

As I say, we have been briefly spoiled by the illusion of unfakeable video. But the arms race goes on. World keeps turning.

There is a new aspect to this problem in that until very recently, it simply was not possible to “flood the world” with something without massive amounts of manpower and money and physical product. That is no longer even remotely true: one person can “flood the world” with something for no cost at all.

Here’s the problem … I assume they’re both fake … it’s videos on the internet, why on Earth should I believe either one? … don’t you remember the controversy some years ago when ABC News paid a bunch of Palestinian kids to throw rocks at an IDF position … THAT was some amazingly dramatic footage that led the following night’s newscast, big rating jump, all completely staged …

“Believe none of what you hear and half of what you see.”

Some truths from yesteryear are still true today … and will always be true … better critical thinking is the only option … having Congress pass a law regulating it would only be worse

It’s simple to just check the source … never heard of them, then don’t trust them … this is why good journalists try to always make sure they got the story right … they live and die on their reputation … The New York Times tries to be reputable, they endeavour to never publish bullshit stories for this very reason … they need to be believed, and believed just because they say so … they have to be right every time or they lose that trust, and subscriptions …

This fake crap proliferates simply because there are stupid people everyplace …

It’s concerning to me, but I don’t know what we can do about it, other than try to get people to have better critical thinking skills and support good journalism and fact-checking that would call out fakes. The technology is still relatively new, but I can imagine it getting a lot better. This article has two short non-explicit clips of Jessica Alba and Daisy Ridley with their faces on porn performers, and they look completely real. Of course those are just very short clips, and if I watched the full videos it would be easy to tell that they weren’t real, maybe from glitches but definitely from knowing that neither actress would be in a porn clip. But there are also a clip and a screenshot where the face-swapping went terribly wrong, so the technology still isn’t effortless and flawless.

I don’t know much about video editing or examining videos, so I tried to find more about detecting fakes. This page has some tips, and notes that it’s not that hard to spot most of the fakes now, but also that the technology will get better and the fakes will become harder to detect. And from the Vice article:

I could definitely see it being used as a propaganda tool in the future. Even if it makes something that could be debunked without too much trouble it could still cause damage.

Also, it’s mainly used on actors and actresses now because there is a lot more footage of them to work from, but in the future it could be used on regular people. Someone wouldn’t need real nude pictures of an ex to put on a revenge porn site; they could take ordinary videos from their iPhone and fabricate something.

What? … just one person can flood the internet with fake crap … but there’s a hell of a lot more to the world than cyberspace … I agree that what we see on the internet isn’t reality … people can fake all the time here on …

… oh …

Say, kinda hate asking this, but you don’t honestly think I’m a wolf typing this in … do you? …

You’re talking about something completely different than what so-called Deepfake does. Yes, anyone can lie about the context of a shot.

Deepfake lies about the shot itself.

In your example, I assume the video really showed kids and they were really throwing something that looked like rocks at what appeared to be vehicles; is that an accurate description of the video? A Deepfake video wouldn’t require the kids or the rocks or even the vehicles.

Your assertion “I assume they’re both fake” is a cop-out: I told you one of them was real. Your assumption has no basis here.

So go ahead and tell us which pic in my first link is real and which is done by a bot, then.

The whole point is that it won’t matter how smart you are; there won’t be any way to tell the difference between a real picture (or video) and one completely or mostly fictional.

And yes, they’ll be adding spoken words soon enough (Adobe’s Voco is getting very close to being able to put words in your mouth without most people noticing and there are other similar pieces of software in development, like Lyrebird).

Again, I assume they’re both fake until proven otherwise … let’s look at the source … someone who wants me to believe his parents named him “Snowboarder” …

People have been photo-chopping still images for years now … it’s not really news that it can be done with video now …

You said critical thinking wasn’t an option … pray tell … what do you suggest … a government regulatory agency to monitor Adobe’s software to ensure nobody will buy it … every generation has their own Pandora’s box, King Richard I of England had Pope Gregory VIII, LBJ had Westmoreland, today we have Photochop … if we can find the next big thing, we’ll be rich …

For a system that is, as far as I can tell, only weeks old, the results are incredible. This is absolutely SFW: someone put their wife in a scene of Get Smart with Steve Carell.
