News about Facebook in the week leading up to the European Union Parliamentary elections reinforced the image of a company unable to control a platform that has been hijacked by right-wing extremists.
In the U.S., pundits, politicians, and journalists were shocked — shocked! — that Facebook refused to remove a doctored video of House Speaker Nancy Pelosi.
While the video gained millions of views, a Facebook spokesperson offered a non-explanation for the decision, saying: “We think it’s important for people to make their own informed choice for what to believe.”
But in Europe the picture was even grimmer.
A study dubbed “Fakewatch,” released by Avaaz, a global citizens’ movement that monitors election freedom and disinformation, identified 500 suspicious pages and groups linked to right-wing and anti-EU organizations that were spreading disinformation on a massive scale.
“Far-right and anti-EU groups are weaponizing social media at scale to spread false and hateful content,” the study says. “Our findings were shared with Facebook and resulted in an unprecedented shutdown of Facebook pages just before voters head to the polls.”
Indeed, Facebook subsequently took down 77 of the pages and groups reported by Avaaz. Overall, those 500 pages and groups were followed by 32 million people and had more than 67 million “interactions” (comments, likes, shares) over the past three months.
This has become a standard cycle in Europe. Facebook says it is cracking down on such content, but it is often Avaaz reporting the malfeasance and abuse that prompts action. Last month, Avaaz reported widespread abuses of Facebook and WhatsApp ahead of Spanish elections, after which Facebook belatedly removed some pages and groups.
In a separate report on France, Avaaz found that fake news about the country’s Yellow Vest movement had generated more than 105 million views on Facebook.
Avaaz campaign director Christoph Schott said bad actors’ tactics have evolved, as they now play a long game when it comes to manipulating Facebook’s platform. For instance, he said many of these groups started several years ago under innocuous names and focused on mundane subjects. At some point, the groups would change names and become funnels for right-wing content targeting people who hadn’t necessarily signed up for such topics.
In other cases, the groups lie dormant for years and then are suddenly re-activated to help extremist content ricochet around Facebook’s platform, he said. As a result, it’s hard to know if Facebook is fighting a losing battle or is just not fighting. But Avaaz thinks the platform could and should do more.
“We are 30 people working on this stuff for three months,” he said. “It’s all stuff Facebook could have detected, like page name changes. This should not be allowed to happen. Right now, it seems that Facebook is ignoring its own rules.”
Rather than having Facebook simply remove fake content, Avaaz advocates requiring the platform to show corrected or verified information to everyone who saw the original malicious content.
Alas, Avaaz wasn’t the only research group to publish alarming findings. The Oxford University-based Computational Propaganda Project came out with a study looking at the spread of junk news during the EU election season. The report noted: “On Facebook, while many more users interact with mainstream content overall, individual junk news stories can still hugely outperform even the best, most important, professionally produced stories, drawing as much as 4 times the volume of shares, likes, and comments.”
But ironically (or perhaps not), the most damning statistic last week came from Facebook itself. In its latest Community Standards Enforcement Report, the company revealed that it had taken down 2.19 billion fake accounts during the first three months of 2019. That’s up from 1.2 billion accounts disabled during the last three months of 2018.
“The [number] of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time,” the company said in a blog post.
Those nearly 3.4 billion bad accounts over six months compare with the 2.37 billion monthly active users the company reports. Facebook says it is confident that the majority of these bad accounts were zapped almost as soon as they were created and had little impact.
But Schott said he found the trend disturbing, another sign that right-wing groups are investing even more time and resources into harnessing Facebook’s platform to spread their messages.
“It shows the number of actors trying to do damage,” Schott said. “And that’s frightening.”