Following sharp criticism from the leaders of European nations, as well as concerns from its own community, Facebook is training artificial intelligence to target terrorist messaging and propaganda on its platform. This AI will target Muslim extremists like ISIS but will also be aimed at any group with a violent mission or that has engaged in acts of violence, a source close to the company has informed VentureBeat. A broader definition of terrorism could plausibly include gang activity, drug lords, or white nationalists who endorse violence.
Facebook is currently testing the use of natural language processing trained to proactively identify posts for removal based on the kinds of words used by accounts that have already been suspended.
“We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts,” wrote Facebook’s director of global policy management Monika Bickert and counterterrorism policy manager Brian Fishman in a blog post.
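Facebook has not published any details of its model, but the approach Bickert and Fishman describe — learning text-based signals from posts already removed for supporting terrorist organizations, then scoring new posts for similarity — resembles a standard supervised text-classification setup. The sketch below is a hypothetical, heavily simplified illustration of that idea using only word-frequency signals; the sample data, function names, and scoring scheme are all assumptions, not anything Facebook has disclosed.

```python
from collections import Counter
import math

# Hypothetical sketch: learn per-word signals from posts that were
# already removed, then score new posts for similarity to them.
# This is NOT Facebook's implementation, which has not been published.

def word_counts(posts):
    """Aggregate lowercase word frequencies across a list of posts."""
    counts = Counter()
    for post in posts:
        counts.update(post.lower().split())
    return counts

def train(removed_posts, benign_posts):
    """Return per-word log-likelihood ratios (removed vs. benign),
    with add-one smoothing so unseen words don't zero out."""
    removed = word_counts(removed_posts)
    benign = word_counts(benign_posts)
    vocab = set(removed) | set(benign)
    total_r = sum(removed.values()) + len(vocab)
    total_b = sum(benign.values()) + len(vocab)
    return {
        w: math.log((removed[w] + 1) / total_r)
           - math.log((benign[w] + 1) / total_b)
        for w in vocab
    }

def score(weights, post):
    """Higher score = more similar to previously removed content."""
    return sum(weights.get(w, 0.0) for w in post.lower().split())
```

In a real deployment such a score would only flag posts for review rather than trigger automatic removal — consistent with the article's later point that human reviewers are still needed to weigh context.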
The social network highlighted a series of initiatives to keep extremist content in check, including the use of computer vision to recognize photos and videos associated with terrorists, keeping fake accounts off Facebook, and seeking counsel from a team of counterterrorism experts.
Earlier this month, following a terrorist attack in London that killed seven, British Prime Minister Theresa May called on nations to create international agreements to “regulate cyberspace, to prevent the spread of extremist and terrorism planning.” She urged social networks like Facebook and WhatsApp (owned by Facebook) to take action, blaming them in part for the crime because their platforms gave terrorists a “safe place” to operate.
In March, a German government minister said companies like Facebook and Google could face up to $53 million in fines for failing to do enough to curtail hate speech. British members of Parliament have endorsed similar fines.
Shortly after May’s comments, Facebook director of policy Simon Milner said the company wants to make its social network a “hostile environment” to terrorists.
In this post and in others in the past, Facebook has said it removes hate speech from its platform in a timely manner. Not so, says a critical report from the British House of Commons Home Affairs Committee on hate crimes and extremism. The report, released in April, states that companies like Facebook, Twitter, and Google should handle security on their platforms just as soccer teams are required to provide additional security at matches.
“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way,” the report stated. “We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.”
While AI may get a lot of attention in the press these days, the AI that’s used to target terrorism still depends on people as part of the process. Computer vision, for example, can recognize and remove explicit content — like a beheading video — on its own, but other posts require human assistance to understand and consider context when deciding whether to pull a Facebook post, a source close to the company told VentureBeat.
Last month, Mark Zuckerberg said the company intends to hire 3,000 additional people for the Facebook Community Operations team, up from 4,500 today. The Community Operations team reviews content deemed potentially inappropriate or in violation of Facebook policy.
The blog post about terrorism was the first in a series of posts called “Hard Questions,” in which Facebook promises to share details about things such as what happens to a person’s online identity after they die and whether social media is good for democracy. Facebook has asked people who want to share ideas on how it can stop the spread of terrorism online or who have responses to other hard questions being considered to send emails to: hardquestions@fb.com.