Some guidelines on this topic, please. I searched already, and there are several GD threads regarding hate speech, but none specifically about how they affect, or apply to, the major social media platforms such as YouTube, Twitter, and Facebook et al.
So, please, my request is stay on topic. No going off on huge hijacks regarding interpretations of The Koran vs The Bible, or stuff like Christianity has more to answer for than Islam blah blah blah.
This thread is specifically about the new legislation passed in Australia a few days ago, which exposes the senior executives of the major social media platforms to criminal prosecution if they fail to remove “abhorrent violent material” from their platforms in an “expeditious” manner.
Normally, legislation passed within Australia wouldn’t be world news. However, it’s a reaction to the Christchurch mass shootings, and it appears the United Kingdom is proposing very similar legislation too. It’s reasonable to suggest that if anti-hate-speech social media laws are passed in Britain, the European Union would probably follow suit.
I’m interested in your collective views on how this will affect the major social media platforms. For example…
[ul]
[li]What will be the likely outcomes of this legislation?[/li]
[li]How many countries will follow Australia’s lead?[/li]
[li]Will social media platforms handball user details to police to avoid prosecution themselves?[/li]
[li]What sort of Mission Creep will happen next?[/li][/ul]
Here’s the problem with this legislation, as I see it. While it is obviously well intentioned, it represents the thin end of the wedge in forcing social media platforms to be responsible for the content being shared on those platforms. And that’s basically an impossible task, in the context of how the major platforms currently operate. There are literally billions of users, and the platforms simply don’t have enough employees to cover all the possible contingencies.
Further, the problem with describing anything as hate speech (it seems to me) is that the very concept of hate speech is one based on opinion. Something which is acceptable to one group could well be highly offensive to another group. And that, in turn, raises further questions, such as…
[ul]
[li]When does legitimate criticism morph into Hate Speech?[/li]
[li]Where is the line? Who decides what the line is?[/li]
[li]**How do we counter false positives?**[/li]
[li]Will Hate Speech legislation empower a new form of censorship?[/li]
[li]Is there a solution? Are these increased restrictions inevitable?[/li][/ul]
I am in favor of anti-hate-speech legislation, but I see this as a bit of an overreaction. So much has been made about how easily accessible the live stream of the murders was, but it wasn’t really that easy, as I understand it. It’s not as if some teenager just opened his Facebook or Twitter page and saw it on his feed - it required some effort. Moreover, the social media platforms appear to have tried very hard to remove the content, but failed because it kept getting re-shared, which is not hard to do. I don’t see what else Facebook or the other platforms could have done except design better algorithms, but algorithms intended to remove violent content will also automatically freeze and remove movies and other content that is completely harmless. I think this is an example of people who have no idea how the internet works making laws about the internet.
I predict it will ultimately get watered down so that the only crime is if an executive knowingly allows the content and deliberately fails or expressly refuses to remove it. Perhaps this is a case of countries outside the United States making a legislative effort to deliberately draw a contrast between their free speech interpretations and our First Amendment protections, and making it clear that there will be different standards for interpreting speech. But I predict that if this actually ends up becoming law, social media companies will have to rethink their entire business model outside the United States, and social media might look completely different going forward.
I’m confused why this law is described as being about hate speech. The part quoted says that the law covers depictions of “acts of terrorism, murder, attempted murder, torture, rape and kidnapping” and there is nothing in there about speech. The fact that the perpetrator of that particular horror was spouting hate speech was surely not the reason people wanted the recording removed quickly.
I agree with asahi and Roderick Femm. I don’t see a workable way for content-sharing entities to act as perfect content filters in real time, much less internet service providers.
You may as well have told Ma Bell that they are criminally liable if anyone curses over the telephone wire.
The proposed UK law doesn’t affect ISPs. Social media platforms are liable for the content they host in the US, too. There will likely be a safe harbor provision that protects them so long as they take reasonable steps to delete such content.
Social media platforms have a responsibility to be better and more responsible publishers. The reason that they don’t is that no one forces them to, and they are rewarded with venture capital for expanding, regardless of the nature or content of that expansion. If the stumbling block is that they have to hire people, and create content filtering software to prevent terrorists from using their platforms to broadcast terror, so be it.
What you’re missing is an ability to appreciate how the use of such material contributes to the use of hate speech. Put another way, it seems your definition of “hate speech” is limited to stuff which exists only in the form of “the written word”. The perpetrator in the Christchurch massacre filmed the shootings in real time as part of his manifesto. Only the most obtuse person would argue his manifesto is NOT hate speech. And the video footage, the bit that loads of people wanted to share, is part of that hate speech.
Make no mistake, this Australian legislation is designed specifically to prevent hateful material from entering Australian airwaves and being used as hate speech - regardless of who has the agenda. It applies equally to the Christchurch shootings as it does to filmed beheadings by ISIS.
The problem I can see with the legislation is it only applies to mass social media. Stuff like the dark web for example remains untouchable by the look of things.
Social media is something people are still trying to wrap their heads around. Even the Silicon Valley engineers who create the stuff don’t quite know yet what to make of its impact, but as at the dawn of the age of broadcast media, we’re going through a phase of disorientation, which over time will probably get worked out. We’ll probably see a number of calls for regulating social media specifically.
I’m with Roderick Femm on this one. Violent depictions of real crimes (“true threats”) and direct incitement to commit crime are not protected speech even in the US.
There is a debate to be had here about the extent to which content providers are responsible for the content they host, and how quickly they can reasonably be expected to remove improper content (non-protected speech).
But talking about hate speech, and a supposed slippery slope, seems to me a tangent - an attempt to find something to be outraged about.
If I say a bunch of ethnic slurs within a video outing current US spies, it’s not the government infringing on my right to be a racist bigot if that video is removed.
Really? So the federal government in the United States could prohibit the nightly news on TV from showing the video taken by the Christchurch shooter? It’s a violent depiction of real crimes.
They could under obscenity laws, but it’s unlikely to be pushed, lest obscenity laws get even weaker, as there is at least some purported reason for showing said videos.
Though, personally, I think showing these things is actually causing more harm. They’re making the killer famous. They should stop doing that.
According to the Sydney Morning Herald article in the original post[1], the law in question does not seem to prescribe the exact method to be used to take down offensive content. It only says it is a crime if the content is not removed “expeditiously”. The problem I have with this law is that I don’t see how Facebook could realistically have done a much better job. Facebook says they took down the video within minutes of being notified by the New Zealand police. Apparently the viewing audience did not report the video until twelve minutes after it had been taken down[2].
I’m not sure how much faster you can go without real-time censors. “Within a few minutes” might be how long it takes to pick up the phone, search a video by name/location, type a video ID number, confirm that this is in fact the terrorist video described by a policeman on the other end of the phone, and propagate the kill signal.
If you are suggesting real-time censors, do you imagine one Facebook employee per livestream with their pointer finger dangling over the kill switch? Two years ago Facebook’s then vice-president of video Fidji Simo said one out of every five videos is a live video[3]. Last year she said there were about 3.5 billion Facebook Live videos uploaded since the feature was introduced in 2016, with the average daily upload rate roughly doubling each year[4]. This gives me an estimate of about 12.8 million Facebook Live videos a day today[5]. If each video were one minute long, that would take about 26,667 employees monitoring videos in real time for 8 hours a day, 365 days a year. Facebook recommends broadcasting for at least ten minutes, and will let you broadcast for up to four hours[6]. So Facebook would need to employ at least 266,667 human censors for eight hours every day of the year. Let them watch three videos at once and we are down to 88,889 full-time censors. Facebook had 35,587 employees as of December 31, 2018[7].
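For anyone who wants to check my arithmetic, here it is as a few lines of Python. To be clear, every input is my own assumption or extrapolation - the 12.8 million figure is my estimate above, and the ten-minute average length and three-streams-per-censor numbers are just things I picked, not anything Facebook has published.

[code]
# Rough headcount estimate for real-time human review of Facebook Live.
# All inputs are assumptions/extrapolations, not official Facebook figures.
videos_per_day = 12_800_000      # my extrapolation from the growth figures above
avg_minutes_per_video = 10       # Facebook's recommended minimum broadcast length
streams_per_censor = 3           # generously assume one person watches three at once
shift_minutes = 8 * 60           # one eight-hour shift

viewing_minutes = videos_per_day * avg_minutes_per_video
censors_needed = viewing_minutes / (streams_per_censor * shift_minutes)
print(round(censors_needed))     # ~88,889 full-time censors, every day of the year
[/code]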
You mention content filtering software, but I take Facebook and Google at their word when they say the technology doesn’t exist[8][9]. Hours before the Christchurch massacre, a piece by Fortune showed Facebook’s chief technology officer bragging about how cutting-edge artificial intelligence technology could differentiate between ambiguous images of broccoli and marijuana with 88% confidence[10]. They achieved this by training the artificial intelligence on big data - millions of pictures of broccoli and marijuana[8]. They don’t have millions of examples of terrorism to train artificial intelligence on, and I hope they never will.
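To spell out what "training on big data" actually involves, here is a rough sketch of the standard recipe: take a model pretrained on millions of generic images, bolt on a two-class output, and fine-tune it on labelled examples. I'm using PyTorch/torchvision purely as an example stack, and the folder layout and class names are made up - the point is just that you need a mountain of labelled examples of the thing you want to detect.

[code]
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: data/train/broccoli/*.jpg and data/train/marijuana/*.jpg
prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=prep)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a model pretrained on millions of generic images, then replace
# its final layer with a two-class head and fine-tune on the labelled data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:    # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# At prediction time, a softmax over the two outputs is where a number like
# "88% confidence" comes from.
[/code]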
During the actual crisis, Facebook blocked any videos with a matching “content hash”[8]. A content hash, also known as a digest or checksum, is the product of a one-way algorithm that consistently converts a video into a unique identifying number. The concept is at least 40 years old[11]. It is not sequential, and any little change in the video makes for a completely different content hash. This means a re-recorded or re-encoded copy of the massacre video gets a totally different content hash than the original upload, on account of the pixel-by-pixel differences in each frame. So derivative videos slipped through until a user or censor reported them, at which point Facebook added the new content hash to its block list.
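To make that concrete, here is a toy illustration in Python using SHA-256 as a stand-in (the actual digest algorithm Facebook uses isn't public): flip a single byte of the input and you get a completely unrelated hash, which is why re-encoded or cropped copies sail straight past an exact-match block list.

[code]
import hashlib

original   = b"pretend these are the bytes of the original video"
derivative = b"Pretend these are the bytes of the original video"  # one byte changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(derivative).hexdigest())
# The two digests have nothing in common, so a block list of exact hashes only
# catches byte-identical copies, not re-encoded, trimmed, or re-filmed ones.
[/code]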
So now Australia seems to think a mere law will cause Facebook to do better than 17 minutes? Unless just about every country passes a similar law, I don’t see Facebook hiring 90,000 human censors. It is more likely that Facebook will say, “sorry, we don’t have the tech” and stop offering videos in Australia.
Video filters aren’t the only filters that can be used. Text filters can also help, assuming the video is described by either the person making it (which I believe it was in this case) or by people commenting. And, once they identify the video and block it, they can do better than hash algorithms by analyzing the actual frames as images.
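For what it's worth, one way to "analyze the actual frames" is a perceptual hash, which fingerprints what a frame looks like rather than its exact bytes, so near-duplicate frames get near-identical fingerprints. I'm not claiming this is what Facebook actually runs - it's just a sketch of the idea, using the Pillow imaging library.

[code]
from PIL import Image

def average_hash(frame_path, size=8):
    """Perceptual 'average hash': shrink the frame, grayscale it, then set one bit
    per pixel that is brighter than the mean. Similar frames -> similar bit patterns."""
    img = Image.open(frame_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def distance(h1, h2):
    """Hamming distance between two hashes; a small number means a probable match."""
    return bin(h1 ^ h2).count("1")

# Frames from a re-encoded or slightly cropped copy should land within a few bits
# of the original's hash, unlike a cryptographic digest, which changes entirely.
[/code]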
There’s also something mentioned in the articles you linked: the ability to report is hidden behind a three-dot menu that isn’t always visible. They could put it front and center. It’s rather strange that the police were called and Facebook had time to block the video before anyone reported it. Someone had to have seen it and called the police, but didn’t figure out how to report it.
I find it unlikely that Facebook would stop offering livestreams in Australia. Most likely, they will make changes like I described, and show they are doing what they can to make the process as expeditious as possible. The law doesn’t set some limit, so even if the result isn’t much better, it still shows they are following the law.
If YouTube can find a way to make sure their automated commercials, for, say, Coca-Cola or Nike, aren’t promoted in the middle of a racist screed from some Oklahoma basement troll, I think we can find a way to solve this.
There just needs to be a financial incentive for the business that runs the platform: carrot…stick…both…either.