Text filters are a good idea, but a bad actor could circumvent them by forgoing the text description and either recording a printed manifesto or reciting one. Analyzing the comments on a broadcast would certainly help flag it, though written speech is difficult to analyze, especially when dealing with in-jokes or abstract ideas.
Remember that there were bad actors aside from the perpetrator. Facebook says “we saw a core community of bad actors working together to continually re-upload edited versions of this video in ways designed to defeat our detection.”
And I don’t know if technology is fast enough to analyze live videos frame-by-frame. Facebook CTO Mike Schroepfer says they can distinguish two images in hundredths of a millisecond, or billions of times per day. If Facebook had 12.8 million Live broadcasts each lasting 10 minutes, that would be 7.68 billion seconds of video to process in real time. Let’s say the videos have 24 frames per second. That’s 184.32 billion frames to process per day. By comparison, there are 8.64 billion “hundredths of a millisecond” in a day. If Facebook’s best technology needs a hundredth of a millisecond to distinguish two images, that is only about 5% of the speed required. This is assuming you wanted to take down videos by checking for one specific frame. The time required would grow linearly with the number of frames you needed to check against.
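The back-of-the-envelope arithmetic above can be checked directly. All inputs here are the hypothetical figures stated in the text, not Facebook’s actual numbers:

```python
# Hypothetical inputs from the estimate above (not real Facebook figures).
broadcasts = 12_800_000           # Live broadcasts per day (assumed)
seconds_each = 10 * 60            # 10 minutes per broadcast
fps = 24                          # assumed frame rate

total_seconds = broadcasts * seconds_each   # 7.68 billion seconds of video
total_frames = total_seconds * fps          # 184.32 billion frames per day

# "Hundredths of a millisecond" in one day: one check per 0.01 ms = 1e-5 s.
seconds_per_day = 24 * 60 * 60
checks_per_day = seconds_per_day / 1e-5     # 8.64 billion checks per day

print(total_frames)                  # 184320000000
print(checks_per_day)                # 8640000000.0
print(checks_per_day / total_frames) # 0.046875 -> about 5% of what's needed
```

So one check per hundredth of a millisecond covers roughly 4.7% of the frames, matching the “about 5%” figure in the text.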
I agree with you wholeheartedly about the user interface, though I think Facebook will streamline their “reporting experience” without regard to the Australian law. And on second thought, you are probably right about Facebook’s response to the Australian law. But then all we have is a law that doesn’t address the problem it was explicitly designed to address.
My impression from this Washington Post article is that YouTube/Google, with all of their technology, could not handle the crisis.
The article explains that YouTube primarily blocked videos with content hashes; they reduced exposure by recommending videos from established news sources during the crisis; their artificial intelligence was confused because the shooter’s name was not publicized; and in the end it wasn’t enough.
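Hash-based blocking can be sketched as a blocklist of digests. This is a minimal illustration using exact SHA-256 hashes (real systems likely use perceptual or robust hashes rather than exact ones); it shows why the “edited versions designed to defeat detection” in the Facebook quote above work: changing even one byte of the file yields a different digest, so an exact-match filter misses the re-upload.

```python
# Sketch of exact content-hash blocking, assuming a simple digest blocklist.
# Real platforms likely use perceptual hashing; this illustrates the brittleness
# of exact matching against edited re-uploads.
import hashlib

blocked_hashes: set[str] = set()

def block(video_bytes: bytes) -> None:
    """Add a video's SHA-256 digest to the blocklist."""
    blocked_hashes.add(hashlib.sha256(video_bytes).hexdigest())

def is_blocked(video_bytes: bytes) -> bool:
    """Check an upload against the blocklist by exact digest match."""
    return hashlib.sha256(video_bytes).hexdigest() in blocked_hashes

original = b"...stand-in for the original video bytes..."
block(original)

# Trimming, re-encoding, or appending even one byte changes the digest,
# so the edited copy passes the exact-match filter.
edited = original + b"\x00"
print(is_blocked(original))  # True
print(is_blocked(edited))    # False
```

This is why uploaders could stay ahead of the filter simply by re-editing the footage faster than new hashes were added.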
Wait a minute. I’m sure I could go on YouTube right now and find a whole bunch of videos containing footage of the airplanes flying into the World Trade Center. Under this law, would YouTube be required to take down all of those videos?