Facebook has responded to allegations that emerged over the weekend claiming the company turns a blind eye to controversial Pages if they have a lot of followers — because they generate a lot of revenue.
The latest scandal comes just a few months after Facebook published its internal guidelines for how it enforces content takedowns.
The social networking giant seems to spend more of its time these days fighting negative headlines than pushing new features out to its 2 billion-strong global throng of users. Facebook-owned WhatsApp is now in the firing line over its role in the spread of fake news in India, which led to a series of lynchings. Meanwhile, the U.S. is expanding its investigation into the Cambridge Analytica data-sharing debacle, the U.K. fined Facebook over the episode, Australia is mulling lawsuits, and now Facebook is facing a fresh scandal over its content moderation practices.
An investigation by U.K. broadcaster Channel 4 has shed some light on the decision-making processes that go on behind the scenes at Facebook. A documentary called Inside Facebook: Secrets of a Social Network is scheduled to be broadcast tonight in the U.K., but details of its content have already been divulged to the press.
To recap, a Channel 4 reporter worked undercover at CPL Resources, an Ireland-based contractor that Facebook uses to staff its content moderation team. The documentary reportedly found that Facebook applied vastly inconsistent policies when deciding what content and Pages to remove from the social network, with a process called “shielded review” protecting the Pages of at least one far-right activist with many followers. So even if a Page contains multiple pieces of content that contravene Facebook’s takedown policies, it may remain online if it passes a second moderation stage, where in-house Facebook staff make the final decision.
One moderator reportedly told the reporter that one far-right activist’s Page was allowed to remain, even though it repeatedly broke the takedown policies, because “they have a lot of followers, so they’re generating a lot of revenue for Facebook,” according to the Guardian.
The documentary will also apparently show that racist and abusive content is frequently allowed to remain on the social network. And children who are visibly under the age of 13, the minimum age to use Facebook, are allowed to remain on the platform unless they explicitly state their age in a post.
Blind eye
Ahead of the program’s broadcast later today (9 p.m. BST, 1 p.m. Pacific), Facebook’s VP of global policy management, Monika Bickert, sought to douse the flames before the public witnesses the goings-on firsthand.
“It’s clear that some of what is in the program does not reflect Facebook’s policies or values and falls short of the high standards we expect,” she said. “We take these mistakes incredibly seriously and are grateful to the journalists who brought them to our attention.”
The company said that it is requiring all its trainers in Dublin to “do a re-training session” and will soon require all its trainers globally to do the same. “We also reviewed the policy questions and enforcement actions that the reporter raised and fixed the mistakes we found,” Bickert added.
But when addressing the specific question of whether Facebook turns a blind eye to “bad content” because it is in the company’s commercial interests, Bickert stated adamantly that this is not the case. “Creating a safe environment where people from all over the world can share and connect is core to Facebook’s long-term success,” she said. “If our services aren’t safe, people won’t share and over time would stop using them. Nor do advertisers want their brands associated with disturbing or problematic content.”
There is truth to that, of course, but if Facebook were to censor content and Pages too aggressively, people would drift away from the platform.
It’s a similar predicament to the one Twitter finds itself in. On the one hand, Twitter is sometimes proactive in taking down controversial tweets or blocking users who repeatedly violate its policies. On the other, world leaders such as Donald Trump have unfettered access to the social network: they can say what they like. Why? Officially, it’s because:
Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate.
Unofficially, it’s because world leaders such as Donald Trump bring a lot of users to the platform and keep them coming back. If Twitter blocked Donald Trump, the U.S. president would more frequently turn to an alternative social network — such as Facebook.
And that points to the underlying reason Facebook’s takedown policies appear to be inconsistent. Part of it comes down to the interpretation of the individual moderator, but ultimately Facebook doesn’t want to deter people from posting content. In light of other scandals that have hit Facebook in recent times, such as the Cambridge Analytica saga, the company can’t afford to be overly forceful with its takedowns, especially if a far-right activist’s Page has a million followers.