Deepfake Nudes: Everybody's naked (if you're a woman)

In the ’70s, ads in comic books offered to sell you glasses that made people appear to be naked.

I see I’ve been ninja’d.
Anyway, I’m not especially worried about the nudes. Yes, we’ll have an awkward overlap between everyone knowing it could have been faked and the fakes looking plausible, but that will be short-lived; soon, we will all know nudes are easy to fake. And… you know, every one of us is nude under our clothes. There’s nothing innately wrong with having a nude body.

I’m much more worried about deepfakes of people saying embarrassing or damaging things, or doing them.

I’m not too worried about that long-term because I was serious about the digital signing thing.

Security cameras and smartphones will soon digitally sign their recordings, possibly including some kind of encrypted handshake with a remote server.
The system won’t be perfect, of course, but breaches of it would be taken as seriously as a hole being found in a major bank’s security. That is: a big deal, one that would cause organizations worldwide to improve their systems.
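Conceptually, the device side could be as simple as hashing the recording together with the capture metadata the camera already knows, then signing that digest with a per-device key. Here’s a minimal sketch in Python using the `cryptography` package’s Ed25519 support; the metadata fields, device ID, and key handling are illustrative assumptions, not any real camera’s design:

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical per-device key; real hardware would keep this in a secure element.
device_key = Ed25519PrivateKey.generate()

def sign_recording(video_bytes: bytes, metadata: dict) -> bytes:
    """Hash the recording plus its capture metadata, then sign the digest."""
    digest = hashlib.sha256(
        video_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    return device_key.sign(digest)

# Capture metadata the camera itself knows at recording time (illustrative).
meta = {
    "device_id": "CAM-1234",
    "timestamp": "2024-07-01T12:00:00Z",
    "gps": [51.5, -0.1],
    "focal_distance_m": 3.2,
}
signature = sign_recording(b"...raw video bytes...", meta)
```

Anyone holding the device’s public key could then confirm that neither the video nor the metadata has been altered since capture.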

--------------------------------------------------------

I think the everyday implications are still the bigger impact.
Our faces are still incredibly important, not just for attaching us to things we’ve done or said, but for our position in society generally. Currently, we can change someone’s age, race, or even gender, put them on a different body, or just generate entirely artificial people… in still images.

Once we can do all of this seamlessly in video, it will mean a big shake-up for society.
It’s too hard to say what all the effects will be, only that they will be yuge.

I think this is what everyone, anyone, needs to be encouraged to do. I almost think there needs to be a campaign for it, because, as with any topic, there will be victims of this sort of thing who aren’t aware that it can just be waved off like this.

@Monty 's link was less about generating artificial dirt on politicians than about teenagers using fake pornographic videos as one aspect of a concerted campaign of loathsome bullying/harassment, a supplement to more traditional slander (i.e., instead of scrawling “XYZ sucks cocks”, why not render it more graphically). This is, for sure, an issue that needs to be dealt with beyond just telling young, vulnerable victims to shrug off harassment, but I do not think the underlying problem is with the use of neural networks or the Internet.

I doubt a random smartphone camera is capable, but one can absolutely order cameras that image at the right wavelengths to see through clothes but not penetrate skin. These have scientific uses but are also marketed for airport security scanners, for instance.

We’ve always needed to rely on trust networks for “this stuff”. Even with perfect deepfake technology, photographic evidence is still far, far stronger than what we’ve managed to make do with for almost all of human history.

Humans are already good at creating fake testimony, and yet human testimony is still admissible evidence.

There are multiple technologies that can do that, but all have significant limitations, and none produces an image that would be mistaken for what the naked eye would see of a naked body.

By “trust networks” I mean formal cryptographic signing authorities, as is already the case for banking and SSL connections, among other uses. This is not routinely done for images, video, and sound recordings at this time, but it will be.
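As a sketch of what such a trust network might look like for recordings: a manufacturer key certifies each device key, and the device key signs the recording, so a verifier walks the chain much like a browser validates an SSL certificate. Again in Python with the `cryptography` package; all names here are assumptions for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Hypothetical signing authority: the manufacturer certifies each device key.
manufacturer_key = Ed25519PrivateKey.generate()
device_key = Ed25519PrivateKey.generate()

device_pub = device_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
device_cert = manufacturer_key.sign(device_pub)  # stand-in for a real certificate

def verify_recording(manufacturer_pub: Ed25519PublicKey, device_pub: bytes,
                     device_cert: bytes, clip: bytes, clip_sig: bytes) -> bool:
    """Walk the chain: manufacturer vouches for device, device vouches for clip."""
    try:
        manufacturer_pub.verify(device_cert, device_pub)
        Ed25519PublicKey.from_public_bytes(device_pub).verify(clip_sig, clip)
        return True
    except InvalidSignature:
        return False

clip = b"...video bytes..."
print(verify_recording(manufacturer_key.public_key(), device_pub, device_cert,
                       clip, device_key.sign(clip)))  # True
```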

In terms of the importance of images: I don’t doubt that whatsoever. I’m saying that images will continue to be important, even though deepfakes exist, because they will typically be associated with additional authentication.

Sure, but there can be a difference of degree with this stuff.
Copying and sharing music was possible long before the MP3 era. But the fact that it became so much easier was a game-changer. CDs died overnight, even before there were good streaming sites.

I remember people talking about this kind of thing when Forrest Gump came out in 1994. If you haven’t seen the movie, there are several scenes where Tom Hanks is digitally placed in archival footage, interacting with famous people including John F. Kennedy, Lyndon B. Johnson, and Dick Cavett. At least in 1994 it looked rather convincing, much the same way my Nintendo 64’s graphics blew me away in 1997.

As we’ve seen since 2016, it’s hard enough to convince people of the truth when you have actual video evidence of it. I think deepfake nudes are going to be the least of our worries.

This is sort of a different thing. The super-realistic celebrity Deepfakes you see are made with thousands of high-resolution images of the celebrity’s face for the AI to map from hundreds of angles as the actor in the film moves around, changes expression, etc. For Tom Cruise, it’s trivial to get this raw image data; you can just fire up a high-def movie and start stripping stills from it. For a co-worker or neighbor or local barista, you probably don’t have these images available just by trawling their Facebook page. You can still try to feed the AI ten or fifty images, but you won’t get very convincing results. Also, it can take anywhere from hours to days to train the AI to accurately put Tom Cruise’s face on things, and it requires high-end equipment (especially a high-memory GPU).
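For what it’s worth, the classic face-swap layout being described here can be sketched as a shared encoder with one decoder per identity: each decoder learns to reconstruct its own person’s faces, and the swap comes from decoding person A’s encoding with person B’s decoder. This is a toy Python/PyTorch illustration with made-up shapes and random tensors standing in for the thousands of real stills, not any particular tool’s code:

```python
import torch
import torch.nn as nn

# Shared encoder, one decoder per identity (toy sizes for illustration).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())

opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()],
    lr=1e-3,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, 3, 64, 64)  # stand-ins for person A's stills
faces_b = torch.rand(32, 3, 64, 64)  # stand-ins for person B's stills

for step in range(100):  # real training runs for hours or days on a big GPU
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a.flatten(1))
            + loss_fn(decoder_b(encoder(faces_b)), faces_b.flatten(1)))
    loss.backward()
    opt.step()

# The swap: encode A's face, then decode it with B's decoder.
swapped = decoder_b(encoder(faces_a)).view(-1, 3, 64, 64)
```

With only ten or fifty training images, the decoders never see enough angles and expressions to generalize, which is why the barista-next-door version comes out unconvincing.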

The software in the OP takes a still photo and uses AI training based on thousands of clothed/unclothed models to guess what’s under the subject’s clothing. Rather than wrapping some actor’s face onto another face, it determines how to draw a fictitious body. As others pointed out, it’s not going to be especially accurate, since it has no way of knowing about your moles or scars or muscle tone, and things like a padded bra or baggy sweater will cause it to estimate poorly, but it’s not really intended to create a fake that would fool the courts. Its intent is just to fulfill a fantasy slightly more realistically than copy/pasting a head onto a nude photo.

It also, unlike the training required for a Deepfake video, spits an image out in 30 seconds using some other computer online, because it’s not restricted to being “accurate” beyond producing a moderately plausible “that looks like a naked chest” image. That’s not to defend the ethics, just to say that the technology and implications of a “real” Deepfake video and these AI-generated photos are two different things.

I’m just glad that now, the whole world can see me naked. It’s the ultimate revenge on all of you!

I wrote an article about X-Ray Specs that turned into a chapter in my book Sandbows and Black Lights. I spoke to the people who currently manufacture them. They no longer use feathers, as they originally did (although there’s a company in China that still does it that way). They now use diffractive filters that make the words “X-Ray” appear when you look at a distant point source (or if you shine a laser pointer through the sheet). This way they can honestly, if misleadingly, say that you can look at something and see “X-Rays”. (When you look at an extended object, the diffracted lettering all merges together, giving the classic “X-Ray” illusion.) It works better and more reliably than using feathers as inexpensive diffraction gratings.

Obviously, this varies by jurisdiction, and I’m sure one of our resident lawyers will provide more authoritative info, but I was under the impression that, in general, audio recordings are not admissible, perhaps because they have always been easier to fake. I’m not as sure about the admissibility of video or stills as evidence.

This was my initial reaction, too, but then I thought about it a bit more.

You could film a screen or mod the camera, but you’d also have to do it in the same location, or the signed location data would show you weren’t where the video claims to be. So that would only really be doable if the screen is inside the building where the person is. For filming a screen, you’d also have to align the sensor perfectly to avoid moiré effects, and never move it. And the screen would need to be the same or higher resolution than the camera, when the reverse is more often the case these days. And, to prevent opening the camera, there could be anti-tamper systems that could simply make the camera fail, or change how it signs things to indicate that it has been tampered with.

As for keys getting leaked: we deal with encryption getting cracked all the time on PCs. We just change it. The camera would get new signed firmware, and that firmware could be put on the manufacturer’s website, served over an encrypted connection, so you know the provenance of the firmware. And, of course, the firmware itself would have a key that also winds up in the images/videos.

So I do think it’s possible we could do this. It wouldn’t be easy and it might not be perfect, but all it has to do is be more reliable than eyewitness testimony, which is indeed fairly unreliable.

I would say that a bigger one is that the camera is focused on a single 2D plane the whole time.

So a standard authenticity check can include the camera’s digital signature, a handshake with a remote server at the time and location of filming, and a consistency check on things like focal distances.

As with many things, it may still be possible to hack all this, the requirement is always just that hacking it be hard enough to discourage the vast majority of people from even thinking about trying.
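Pulling those layers together, the verifier’s decision might be structured like this sketch (the check names and report format are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class AuthenticityReport:
    signature_valid: bool      # device signature over the video and metadata
    attestation_valid: bool    # remote-server handshake matches time/location
    metadata_consistent: bool  # e.g., focal distances plausible for the scene

    @property
    def authentic(self) -> bool:
        # Every layer must pass; a single failure flags the clip for scrutiny.
        return (self.signature_valid and self.attestation_valid
                and self.metadata_consistent)

report = AuthenticityReport(True, True, False)
print(report.authentic)  # False: the consistency check failed
```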

Audio recordings aren’t inadmissible, but nearly every demonstrative exhibit (such as a picture or recording) must be “authenticated” before it can be admitted as evidence. That is, a person must testify that they produced it, or, at the least, that they can personally attest that it is what it purports to be. This, like anything being admitted, is subject to cross examination. It’s only once the evidence has been sufficiently authenticated that it can be admitted and presented as evidence.

(There are exceptions, so-called “self-authenticating” documents, which are traditionally viewed as inherently reliable. This tends to be rare, but an example might be a certified (e.g., stamped as true and correct) copy of a public record.)

The point, as it relates to the OP, is that the legal system has always tried to ensure some measure of reliability before pictures or video could be used. Their authenticity is not considered “self evident”.


How can you tell the difference between something that appears out-of-focus on the authenticated camera because it was at a distance other than the focal distance, vs. something that appears out-of-focus because it’s an in-focus picture of a screen showing an image of an out-of-focus object?

It’s not whether you know the difference, it’s that the camera itself knows the difference at recording time. (Perhaps the difference is detectable from the video data itself, too… I’m not a video expert, but the focal distance is certainly something the camera knows.) Same with lighting.

And again, as with anything, it will be possible to hack the camera software to fake such a consistency check. But remember, you’d also need to hack other systems, including remote systems, and generate fake data. Done right, it could be plenty secure.

Yup. Remember when people used to say the camera doesn’t lie? Now we have some people who still believe that, and some who won’t believe any photo, and sometimes it takes more tech knowledge and time than the average person is willing to put in to tell the difference between real and fake.