Facebook today published the latest edition of its Community Standards Enforcement Report, a series it launched last May. As in previous editions, the Menlo Park company tracked metrics across a number of policy areas (bullying and harassment, child nudity, global terrorist propaganda, violence and graphic content, and others) for the previous quarter, January through March, focusing on the prevalence of prohibited content that made its way onto Facebook and the volume of such content it successfully removed.
AI and machine learning substantially cut down on abusive posts, according to Facebook. Across six of the nine policy areas tracked in the report, the company says it proactively detected 96.8% of the content it took action on before a human spotted it, up from 96.2% in Q4 2018. For hate speech specifically, it says it now proactively identifies 65% of the more than four million hate speech posts removed from Facebook each quarter, up from 24% just over a year ago and 59% in Q4 2018.
Facebook is also using AI to suss out posts, personal ads, pictures, and videos that violate its regulated goods rules, which forbid illicit drug and firearm sales. In Q1 2019, the company says it took action on about 900,000 pieces of drug sale content, 83.3% of which its AI models detected proactively. In the same period, it reviewed about 670,000 pieces of firearm sale content, 69.9% of which its models detected before content moderators or users encountered it.
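To put those percentages in absolute terms, here is a back-of-the-envelope conversion into counts. This is a minimal sketch, not anything from Facebook's report: the helper function and its name are illustrative, and the inputs are the approximate figures quoted above.

```python
# Back-of-the-envelope conversion of the report's proactive-detection
# percentages into piece counts. Figures are the article's approximate
# Q1 2019 numbers; the helper is illustrative, not Facebook's code.

def ai_detected_count(total_actioned: int, proactive_rate: float) -> int:
    """Pieces of actioned content flagged by AI before any human report."""
    return round(total_actioned * proactive_rate)

print(ai_detected_count(900_000, 0.833))  # drug sales: ~749,700 pieces
print(ai_detected_count(670_000, 0.699))  # firearm sales: ~468,330 pieces
```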
Those and other algorithmic improvements contributed to a decrease in the overall amount of violating content viewed on Facebook, according to the company. It estimates that for every 10,000 times people viewed content on its network, only 11 to 14 views contained adult nudity and sexual activity, while 25 contained violence. With respect to terrorism, child nudity, and sexual exploitation, the numbers were far lower: Facebook says that in Q1 2019, for every 10,000 content views, fewer than three contained content that violated each of those policies.
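Note that prevalence is a views-based rate, not a count of posts: it measures how often violating content was seen per 10,000 total content views, which Facebook estimates by sampling views. A minimal sketch of the calculation, with illustrative inputs rather than Facebook's actual sample data:

```python
# Sketch of the prevalence metric: estimated views of violating content
# per 10,000 total content views. Inputs here are illustrative.

def prevalence_per_10k(violating_views: int, sampled_views: int) -> float:
    """Estimated violating views per 10,000 content views."""
    return violating_views / sampled_views * 10_000

# e.g., 13 violating views observed in a sample of 10,000 views falls
# inside the reported 11-to-14 range for adult nudity and sexual activity.
print(prevalence_per_10k(13, 10_000))  # 13.0
```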
“By catching more violating posts proactively, this technology lets our team focus on spotting the next trends in how bad actors try to skirt our detection,” Facebook vice president of integrity Guy Rosen wrote in a blog post. “[We] continue to invest in technology to expand our abilities to detect this content across different languages and regions.”
Another domain where Facebook’s AI is making a difference is fraudulent accounts. At the company’s annual F8 developer conference in San Jose, CTO Mike Schroepfer said that in the course of a single quarter, Facebook takes down over a billion spam accounts, over 700 million fake accounts, and tens of millions of pieces of content containing nudity and violence. AI is a top source of reporting across all of those categories, he said.
Concretely, Facebook says it disabled 1.2 billion fake accounts in Q4 2018 and 2.19 billion in Q1 2019.