New Jersey Passed A Law Against Deep Fakes

First, thanks for clarifying that.

For a few months, I had severe anxiety unless I responded to a post immediately. This led to a lot of problems. I am making a conscious effort to think before I post. Other things take up my time (calls come in because I work at a virtual call center. I chat with my Gobhi throughout the day on messenger. I read various other websites.) It has been a busy week. But, once again you have a point.

I no longer think it is a terrible law. But I do still think it is a bad one. Other than seeing if voters approve of such a law, I’m not even sure it will accomplish anything. Unless you create and/or use a deep fake as part of another crime already on the books, you are in the clear? Whether you support the law or not, it does not seem to do much. Except that it may be the start of a slippery slope. Sometimes that happens. Sometimes it does not. For example, the ban on assault weapons and sawed-off shotguns has not resulted in a ban on all guns.

That actually concerns me a lot more than the law. If people believe everything they see or hear is true and factual, that is a huge problem. You said “teens”. Yes, your average teen does and says all kinds of stupid things. But accepting computer generated images as reality without any testing or even any doubt? I find the thought genuinely frightening. It seems everybody would be better off with some kind of deep fake education course in school. I’m not joking. The class would not present anything that could be deemed pornography. It would teach students to ask a lot of questions, get the facts, etc. Heck, when I was a kid either HBO or Cinemax did a half-hour special for kids all about the many ways companies lie to you in ads. The course would be like that, but updated for the age of the internet and aimed at teenagers.

I don’t really disagree. One place where it would help would be in cases where someone is using AI to extort or harass someone, or situations like a recent case where a cheerleader’s mom made fake media of rival cheerleaders smoking, drinking and posing nude to get them kicked off the team (which briefly worked). In these cases, an add-on charge feels okay to me and might have a deterrent effect. I could also see its value in election law cases (faking media of opponents, etc.).

It’s a little weird to me to see Francesca Mani name dropped into the governor’s press release. She is a woman who, a couple years ago at age 14, was called to the principal’s office with a group of other girls and informed that nude images of them were circulating the school. The boys involved were lightly punished, if at all, and it was a chore to get the images erased from social media. She has since become an advocate for laws regarding this stuff.

She’s quoted in the release as being happy about the law and I’m sure she’s legitimately gratified to see forward movement, but this law wouldn’t even apply to her situation by my reading. The images weren’t created to harass her and weren’t sent to her or part of some attempt to get people talking about her. Perhaps there could be an underage component, though state laws on what constitutes CSAM when it comes to AI are largely untested at best. But, again, I’m sure she’s thankful to see someone giving a shit and trying to work towards a goal even if this doesn’t single-handedly fix the issue.

But morally?

Because it is.

Sexual violence means that someone forces or manipulates someone else into unwanted sexual activity without their consent.[snip]
Forms of sexual violence[snip]

  • Showing one’s genitals or naked body to other(s) without consent

Cite

If you actually understood it, you would be for this law and all other laws like it. So no, I don’t think you understand my side at all.

Just a quick reply as my ride will be here any moment-

That doesn’t follow. If somebody genuinely thinks that life begins at conception and that abortion is murder, it is simple to understand their side. That does not mean I agree with their side. It just means I understand it.

That part you could format a long reply to, but not the plain question at the start of the post?

And just FYI, I have zero interest in why you do or don’t post when you do, so the running report of your daily activities is wasted verbiage (in replies to me, it may fascinate your other respondents).

I think we’re using “understand” in different terms here - you’re using it in a “know the meaning of the words” sense, and I’m using it in a “be sympathetically aware of the nature of” sense. I will never understand pro-lifers. They may as well be aliens to me.

So just to be clear - you understand (in your usage) that I think deepfake porn is sexual violation, but you don’t agree it is? Despite that being exactly what victims of the violation say?

Yes, exactly. Though, I wouldn’t describe my last post as a “long reply”.

I wasn’t attempting to fascinate anyone. I have seen posters accused of stalling and avoiding replying. I was proactively addressing any such claims, by anybody, that I was doing so.

Yes, that is essentially the situation.

In the case of deep fakes, it isn’t anyone’s actual genitals or naked body. It is a computer generated image. Showing an actual image of a person’s genitals or naked body would be ‘revenge porn’, not a deep fake.

In the case of the National Sexual Violence Resource Center, they also say

We use “Deepfake pornography” as a term to describe a form of image-based sexual abuse. Not all forms of pornography are consensually made, and all non-consensually made/distributed pornography is a sexual violation that has no place on the internet.

So “But it’s not their real bodies” doesn’t seem to make the difference to them. Sharing non-consensual intimate images is the same in their view regardless of origin.

So - I want to be absolutely clear here - you do not, in fact, believe the victims?

So, how, exactly, are you

Did you actually read that woman’s account in the article (or any other deepfake victim’s account) as to how that didn’t actually matter to how violated they felt?

And you replied at length in detail again, but still didn’t answer the initial question in that post:

I’m not quite sure what you mean. I believe deep fakes of them were distributed or posted online. I believe they were harmed by this.

How exactly am I not?

Yes, I did. If that is what you meant, why did you cite a definition that didn’t include that?

There are a vast number of things I personally hold to be morally wrong. There are a vast number of things I feel should be against the law. For me, ‘Is it wrong?’ and ‘Should it be illegal?’ are the only two useful or meaningful questions. We all agree that deep fakes are immoral. We all agree that they can do harm. I do not believe they should be illegal. If the study Jophiel posted earlier is accurate, the public, and especially minors, quite badly need to be taught to question what they see and hear, and to tell truth from scams, phishing, etc. This would address multiple issues without the need to criminalize anything.

If the answer to ‘should it be illegal?’ is no, I don’t see the point in asking ‘Just how immoral is it?’

But you don’t agree the harm was sexual violation?
I mean, you said this when I asked you that:

What the hell kind of harm is it, if not sexual violation?

Because you don’t agree that victims of sexual violation were sexually violated.

But it did.

What the hell other criteria do you use to decide whether something should be illegal? If we all agree on the immorality, and it does harm, why not make it illegal?

Wait, you decide whether or not it should be illegal before you decide whether or not it’s immoral? That’s quite ass-backwards.

As I also said, it is technically sexual violation.

Again, I agree that technically they were.

No, it did not.

Unless we stretch the definition of “one’s genitals or naked body” to mean ‘computer generated images purported to be one’s genitals or naked body’.

Jophiel provided a definition that covers deep fakes.

For a few reasons. The main ones here are that I don’t see a law actually doing much if any good to fix the problem. I also see education as a better approach that would fix the problem and several other problems at the same time. Some of those are already illegal.

Ah, it appears I was unclear. I ask the question of morality first and then ask the question of legality. If I have decided, for whatever reasons, that something should be legal then as I said, I don’t see the point in asking just how immoral it is.

So when I asked you

and you said

You actually meant to answer “No, I do agree it is sexual violation”? Even with the ‘technically’ weaseling?

Can you maybe just give straight answers to questions?

It’s not a stretch, it’s acknowledging the awful reality of it. It’s real to the victims, so we treat it as real.

How would criminalizing it not reduce the problem, even if it didn’t completely eliminate it? At least some of the perpetrators would not be offending, then.

And when has education ever worked to fix this kind of rape culture behaviour? Especially since it’s not just schoolkids doing it, it’s grown-ass adults.

Sorry, I do not mean this as an insult, we seem to be having trouble understanding each other. This includes when we use the word “understand”. What I meant was, we both agree that distribution of deep fakes is “sexual violation”. However, I strongly suspect we are using different meanings for “sexual violation” as well. Before anybody quotes a dictionary, we agree on the denotation. I doubt we agree on the connotation.

It is a stretch because it is not plainly stated in the quote you posted, and was not implied.

Regardless of anybody’s views or positions on whether deep fakes should be on that list, deep fakes are not on that list.

Deep fakes can now be made quickly and easily. Unless you e-mail them from an e-mail address that can be traced to you (I genuinely have no idea how e-mail providers treat this. If content is illegal in the US, I would think they would cooperate with authorities but still require a warrant to keep from losing customers. If only a state law is being broken, I don’t know what they would do), or use your smart phone, it is a simple matter to post deep fakes on public sites anonymously.

Again, we have a failure to communicate. The education I proposed would not be ‘deep fakes are bad and here is why you should not make or distribute them’. It would focus on ‘Here is how to know if a text, e-mail, web site, image or video is real.’ This would not make the creation or distribution of deep fakes any more difficult. It would make passing them off as the real thing more difficult. Instead of people saying ‘That Jenny is such a slut! Look at this picture of her!’ and snowballing from there, the kid claiming the deep fake is real and starting the rumor would be met with ‘This is not real! How could you expect me to fall for this? Why are you trying to start a rumor? What’s wrong with you?’ and so on.

If everybody knows deep fakes are fake, they lose a considerable amount of power. I suggested educating minors for several reasons-

Just a gut feeling that teen boys are the most likely to make and distribute deep fakes of teen girls they know.

Most of them are already attending public schools for education anyway. It would be logistically simple to add another course.

They can still be reached. I am unsure at what age or why so many people seem to accept that every scam attempt is real. But, it does happen. Ideally we should teach critical thinking in public schools. That seems impossible. We can at least teach skepticism and how to spot scams and fakes.

My definition was from the same site as MrDibble used. Given that pornography is pretty much defined as the sexual display of genitals and/or sexual behavior, it’s safe to think that the National Sexual Violence Resource Center includes AI depictions under “Showing one’s genitals or naked body”.

You suggested this in your OP and I still strongly disagree.

Thorn did research on teens regarding deepfakes and found that:

Yet, misconceptions persist—some still believe these images are “not real.” 16% of teens either dismissed the threat outright or believed it was context-dependent. Among those who saw no harm, the most common justification (28%) was based on a perception that the imagery was “not real,” revealing a dangerous misconception that overlooks the emotional and psychological toll on victims.

So 28% of 16%, or around 5%, of teens thought that deepfakes weren’t a problem because they weren’t real photos anyway. That’s a very small number. Even if you want to take the full 16%, it’s still a small number. It’s a small number because most people realize that the trauma, shame, bullying, ridicule and fear associated with being targeted by this isn’t dependent upon people believing that it’s a bona fide photo. Sure, if everyone thinks it’s real, you might somehow suffer worse (being thrown off a team, social issues, etc) but “Well, it’s not real” isn’t remotely the panacea you present it as.

And, again, the point of making and sharing these images isn’t necessarily (probably not even often) to try to “prove” that the victim was in that photo. It’s to objectify and mock. “Here’s what Cindy looks like naked” doesn’t have to be “This is a legitimate photo of Cindy naked” for guys to process it as “Yeah, that’s gotta be pretty much what she looks like”.

In terms of mocking and taunting, sex-based rumors have been used as a weapon for ages and validity has never really been part of it. I assume most schools have seen some variation of “She put peanut butter on herself and had her dog lick it off” or “She had to go to the ER because a hotdog broke off inside her” (I once had a conversation that somehow veered into this and everyone was “Oh, shit, we had that in our school…”). Few people actually believed that the target had to be admitted to the hospital, it was just a tool to sexually mock them and make them feel like shit. When has “proving a rumor couldn’t be true” EVER worked for a bullied target in school? It doesn’t work because fake validity isn’t the goal.

More sinister, there’s a current of “I can use my programs to make Cindy do whatever I want” as a perverse objectification. I can make her naked, or in lingerie, or in various sexual positions or whatever. Stuff where you’re faking the victim into a professional video isn’t because you think the world will believe that the victim was in a professional porn shoot, it’s you wanting to put her depiction into that without her consent. Shit, probably getting additional jollies off the fact that you CAN “make her” do those things.

Whether or not it’s “real” often has little to do with it.

Yes, it was.

So, they mention deep fakes elsewhere explicitly, but not in the list he quoted.

I disagree. If they meant (including deep fakes) they could have and should have said so. They did not.

Then you are correct and I have missed something.

This just leads to more questions why a law against deep fakes is needed. The only significant difference between a deep fake and a really crappy cut and paste image that could have been done in Photoshop thirty years ago is that deep fakes can pass as actual photographs. If the concern is not that they can pass as real, why criminalize them? How is a deep fake that is distributed as a deep fake different from a drawing that is recognizably the victim? How is it different from a bad cut and paste job in Photoshop?

Both in a moral sense and a legal sense- how is it different?

I have no problem seeing a continuum of harm between “Stick figure” and “Photorealistic depiction” so I don’t know what to tell you if you can’t.

Oh, I can see a continuum. But it’s largely a matter of art and aesthetics. If a crudely drawn image with an arrow and the words “Amy from math class” is legal, even though only the intended audience will know exactly who the victim is, at what point and why does an image need to be criminalized?

A bad cut-and-paste job done in Photoshop would show the victim’s face accurately, as it was taken from a real photo. Anybody who had seen the image would recognize the victim’s face. Why isn’t that being criminalized?

An analogy (I admit it is not a perfect analogy, nor do I want to hijack the thread into a Second Amendment debate): I can legally own and possess a shotgun. I cannot legally own or possess a sawed-off shotgun. It has been explained to me that while a shotgun has many legal uses, the only reason to saw down the barrel is to kill human beings.

So in the eyes of the law- how and why are deep fakes different?

Morally speaking, if the intention is not to deceive viewers, but simply to mock the victim and cause them harm, how exactly is a deep fake different from a drawing done in crayon, or a cut and paste in Photoshop? OTTOMH, a crayon drawing, with or without names written on it, is recognizable to only a small group of people. If you post it on the internet, nobody will know who the victim is or be able to recognize them on the street. With badly done Photoshop, people viewing the image on the internet will not know the victim’s name (unless text in the image or the image title gives it), but they will be able to recognize the victim if they pass them on the street. There are differences between bad Photoshop and a deep fake: now, with the right software, anybody can make a very convincing deep fake. It takes significantly less time and effort to create a convincing image.

But if the intent is not to convince people that the image is real, why does it matter how convincing the image is? I can see a badly done Photoshop being more hurtful than a crude drawing. But, how is a deep fake more hurtful than that? For that matter, if the goal is to harm the victim and not to claim the image is real, how is a deep fake more hurtful than a skillfully executed drawing?

Agree to disagree.

Again, agree to disagree - as far as I’m concerned (and as far as the organization who made the list is concerned) deepfakes are on the list.

What does that matter as to whether criminalizing it would reduce it?

Wait, so you’re talking about educating third parties? How is that going to stop the perpetrators?

I think you’re mistaken as to what the point of deepfakes is. It’s certainly not verisimilitude. It’s sexual violence, plain and simple.

You need to get past this “but it’s not real” thing.

That’s so completely besides the point that I feel you’re just not listening to the victims at all, at this point.

Everybody knows they’re fake. Absolutely no one thinks Taylor Swift posed for poorly lit pornographic photos, for example. That’s absolutely not the point of them.

Probably. BUT 94% of deepfakes are NOT “teen girls [some little shit] knows”. So those teen boys are nowhere near the major offenders here, and focusing efforts there is pissing in the wind.

Education is not going to reach the people hanging out in deepfake user groups making videos of “<K-Pop singer du jour> getting violated”. They fucking know what they’re doing is wrong, and no, they can’t be reached.

But if found, they can be punished. And should be.

At the point where it makes the victim feel genuinely violated in the way Helen Mort did.

Again, I don’t know how to explain to you that, yes, a photorealistic depiction of a victim in sexual situations is more traumatic to the victim than a drawing or gluing a face on a photo. I’m just literally at a loss. Fortunately, I don’t really need to since there’s broad movement towards penalties for that behavior.

As a general rule, the less time, effort and resources a crime requires, the more often it is committed.

It’s not. I never said it would. It was intended as prevention.

I am past it. You missed a post or two.

I don’t see that as workable. The point at which somebody feels “genuinely violated” is entirely subjective. For any law to work it needs to be carefully worded with clear and defined terms.