“Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically,” said Guy Rosen, Facebook VP of integrity, in a blog post published today. “AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect.”
In the aftermath of the horrific Christchurch terrorist attack that resulted in the death of 50 people, many are questioning the role the internet played. One writer opined that this felt like the first internet-native mass shooting, given that the perpetrator announced their intentions on 8chan and streamed the attack to Facebook — before the broader online community stepped in to share the footage endlessly across YouTube, Twitter, and Reddit.
As the attacker’s streaming platform of choice, Facebook has attracted particular scrutiny for its part in the tragedy — including how long it took to react and whether it should have done more. Facebook subsequently published a timeline, noting that the video was viewed fewer than 200 times during the live broadcast and 4,000 times before moderators removed it. In an effort to shift some of the blame, it pointed out that “no users reported the video during the live broadcast” and that the “first user report on the original video came in 29 minutes after the video started and 12 minutes after the live broadcast ended.”
In short, Facebook is saying it couldn’t have intervened during the livestream, since it wasn’t even aware of the attack.
But what about Facebook’s automated detection smarts that can identify content that “violates our community standards”? Well, following the incident, Facebook did manage to hash the original footage and set its algorithms to find and remove visually similar videos on both Facebook and Instagram. “Some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems, including the use of audio technology,” noted Chris Sonderby, Facebook VP and deputy general counsel, in a separate blog post at the time.
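Facebook hasn't published the details of its matching pipeline, but the general technique it describes — hashing known footage and flagging visually similar uploads — is typically done with perceptual hashes compared by Hamming distance rather than exact matches. Below is a minimal sketch of that idea using the open-source imagehash library; the frame-sampling approach, distance threshold, and match ratio are illustrative assumptions, not Facebook's values.

```python
# Sketch: flag a candidate video whose keyframes perceptually match a banned video.
# Assumes keyframes have already been extracted to image files.
from PIL import Image
import imagehash

def frame_hashes(frame_paths):
    """Compute a perceptual hash (pHash) for each extracted keyframe."""
    return [imagehash.phash(Image.open(p)) for p in frame_paths]

def matches_known_video(candidate_hashes, known_hashes, max_distance=10):
    """Count candidate frames that sit within a small Hamming distance of any
    frame from the known (banned) video; flag if most frames match.
    The threshold and 0.5 ratio are illustrative, not production values."""
    matched = sum(
        1 for c in candidate_hashes
        if any(c - k <= max_distance for k in known_hashes)  # '-' gives Hamming distance
    )
    return matched / max(len(candidate_hashes), 1) > 0.5
```

Because a perceptual hash tolerates small differences, this kind of matcher can catch lightly compressed or resized copies, but screen recordings, crops, and re-edits can push frames past any fixed distance threshold — which is part of why Facebook says it had to layer on additional systems, including audio-based ones.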
Facebook said that in the first 24 hours it removed 1.2 million videos of the attack at the point of upload, with another 300,000 copies removed afterwards. If nothing else, this highlights the sheer scale of what Facebook is up against. With more than 2 billion users, moderation is bound to be an uphill battle, one that all the AI in the world can’t eliminate. Not completely, at least.
Limitations
One of Facebook’s thornier challenges is Facebook Live, its service that allows people to livestream their lives to the world — at times including acts of the utmost depravity. AI requires vast amounts of training data to make informed decisions, such as whether to remove a particular photo or video. Not only does Facebook Live surface unpredictable content that hasn’t been seen before; in the case of severe acts of violence, Facebook said there is (“thankfully”) not enough similar content to train its AI systems on.
Moreover, Facebook has increasingly embraced live video game streaming, which poses a particular problem for AI systems trained to identify violent activities.
“If thousands of videos from livestreamed video games are flagged by our systems, our reviewers could miss the important real-world videos, where we could alert first responders to get help on the ground,” Rosen said.
Another huge obstacle Facebook’s AI faces is that people are still smarter than machines in many regards and are pretty good at figuring out ways around automated filters. In the case of the New Zealand terrorist attack, a “core community of bad actors” colluded to re-upload different versions of the video, edited slightly to circumvent Facebook’s systems. Indeed, Facebook said that it identified more than 800 “visually distinct” variants of the video.
Not all of these were altered maliciously, of course. Some people contributed to the video’s distribution by recording their screen or filming footage from a screen using their phone and then sharing it with friends. Such videos were rerecorded, recut, reformatted, and so forth — leading to a gargantuan pool of videos that are just different enough to escape the grasp of Facebook’s AI systems.
Solutions
All of this should not detract from the genuinely useful role AI can play in helping companies such as Facebook moderate content. Just last week, Facebook revealed that it is now using machine learning to proactively detect revenge porn.
But it’s clearer than ever that humans will continue to play a major role in monitoring user-generated content. Moreover, Facebook will have to reevaluate its moderation methods on a platform that is simply too big for humans to monitor — even with the help of AI.
According to Rosen, Facebook is now looking at ways to spot different versions of the same video by improving its audio matching technology. Moreover, he said that Facebook is searching for ways to ensure that review of livestreamed terrorist videos can be more reliably “accelerated” and content put before a human moderator — remember, the company said it didn’t get a single report during the livestream. Facebook added that its acceleration priorities have largely focused on suicide prevention but that it will now extend these efforts to other kinds of video — which suggests the company may need to come up with a completely new category, such as “murders” or “mass killings.”
“We are re-examining our reporting logic and experiences for both live and recently live videos, in order to expand the categories that would get to accelerated review,” Rosen said.
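Rosen's mention of improved audio matching points to a complementary approach: when re-recorded or re-encoded copies look different frame by frame, their soundtracks often remain close to the original. Facebook hasn't said how its audio matching works; the snippet below is only a crude sketch of the idea using librosa, summarizing each clip as an MFCC-based signature and comparing signatures by cosine similarity. Production fingerprinting systems are far more robust (for example, spectrogram peak landmarks), and every parameter here is an assumption for illustration.

```python
# Sketch: compare two audio tracks by a coarse MFCC signature.
# Parameters (sample rate, n_mfcc) are illustrative, not any platform's real values.
import librosa
import numpy as np

def audio_signature(path, sr=22050):
    """Summarize a clip as the mean of its MFCC frames."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def audio_similarity(path_a, path_b):
    """Cosine similarity between two clips' signatures (1.0 means identical)."""
    a, b = audio_signature(path_a), audio_signature(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The appeal of audio matching in this setting is that many of the “visually distinct” variants Facebook encountered — screen recordings, recuts, reformats — still carried substantially the same audio, giving the system another signal when frame-level hashing fails.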
One potential solution would be to add a time delay to live videos, but according to Rosen, that wouldn’t fix the problem, given the sheer number of videos that are streamed through the site — in other words, people would still see the videos, just a few minutes later.
“More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed, and first responders being alerted to provide help on the ground,” Rosen said.
If there were any lingering expectations that AI is ready to solve the platform’s content moderation problems on its own, consider this: Facebook has hired around 15,000 human content reviewers globally. While the mental health impact of having to spend hours of the day looking at horrendous photos and footage is a serious concern, these human moderators also serve as a stark reminder that AI has a ways to go before it can be left to its own devices — if that day ever comes.
“AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect,” Rosen added. “People will continue to be part of the equation, whether it’s the people on our team who review content or people who use our services and report content to us.”