Facebook’s decisions earlier this month to combat fake news by reducing the amount of news-outlet posts users see and relying on users to validate media sources have been hot topics of discussion among advertisers, the media, and anyone following the story of fake news’s damaging influence on politics. Unfortunately, Facebook’s approach may make fake news more prevalent in users’ feeds, as it has in test-market countries. Why? There are multiple reasons, but the main one is that fake news isn’t news; it’s a type of fraud. And stopping fraud requires a different strategy than the one Facebook is pursuing. With the right AI, Facebook could reduce fake news while still allowing users easy access to legitimate news.
Facebook’s idea is that by deprioritizing news outlets and other content creators in users’ news feeds, fake news will become less visible. This strategy is based on the assumption that fake news is just a passive menace. However, revelations in the wake of the 2016 election showed that a lot of fake news on social media is created and promoted by fraudsters and propagandists whose clear intent is to gather political capital, influence the course of public debate, and affect the outcome of elections. These actors are not inept or craven reporters and editors who turn out shoddy work. They are professional criminals and political operatives who will find ways to work around Facebook’s new system while legitimate media outlets have their reach constrained.
To compound the problem, Facebook’s new model prioritizes stories shared and “trusted” by individual users. But users who fall for fake news don’t check sources. Credulous users, especially those looking for supporting evidence in an online political debate, will share fraudulent stories on their personal pages, giving these fake stories more exposure and perceived credibility. And as legitimate news stories are seen by fewer users and shared less often, there may be a downward spiral of information quality.
Fraudulent information will gain ground
Legitimate media players will follow the rules Facebook sets for them and diminish in influence unless they have the money to buy advertising on the network. Fraudsters, on the other hand, will game the system at every turn to get their messages into Facebook users’ feeds. For example, we can expect fraudsters to create more false profiles to “friend” unwary Facebook users and share fake stories. They might also dupe influencers into sharing fake news or they might share misinformation in the comments on influencers’ posts. And they will definitely keep turning out fraudulent stories with click-bait headlines to hook gullible readers. Fraudsters will approach these tasks like it’s their job, because it is. And with fewer factual, professionally crafted news stories in the feed to balance this continuous stream of fraudulent, shareable narratives, even media-savvy Facebook users may eventually have trouble sorting fakery from legitimate news.
We have already seen this happen in several test countries where, in October, Facebook removed almost all legitimate news from users’ main news feeds, putting it instead in a new “Explore” tab. In one of those test markets, Slovakia, the new system was followed by a precipitous drop in Facebook traffic to a legitimate news site, while a fake news item about a planned terrorist attack gained so much currency that police issued a denial to calm the public, the New York Times reported. And while the fake story appeared in Slovak users’ Facebook feeds, the police message was excluded “because it came from an official account.” Facebook’s Head of News Feed, Adam Mosseri, wrote at the time of the test rollout, “We currently have no plans to roll this test out further.” What Facebook has now rolled out may not be as extreme as in those test markets, but it’s capable of causing the same degree of damage.
Political discourse at risk
By prioritizing personal posts over news, Facebook is attempting to get back to its origins as a place for people to post about their lives. (Its press release announcing the news cutback was titled “Bringing People Closer Together.”) However, Instagram has become the go-to platform for personal posts, while people now expect to discuss news and politics on Facebook. A report by the Pew Research Center found that about a third of users talk politics on social media, with younger users seeking out campaign news and older users commenting and debating.
As the 2018 midterm and 2020 presidential elections approach, political discussions will increase. That’s good for civic engagement, as long as Facebook users can find legitimate, reliable news to inform their opinions and underpin their discussions. That looks less likely now than before because of the way Facebook’s new approach stacks the deck in favor of fraudulent news.
Again, we can look at the test markets where Facebook has been removing news from the main news feed since October to get a sense of what could happen next. In Cambodia, the Times reported, publishers and service groups say Facebook’s deprioritization of legitimate news makes it hard for people living under the repressive regime to find news that’s not controlled by the government. Bolivian journalists report similar problems. It would not be surprising to see the same thing happen on a wider scale with Facebook’s wide rollout of the reduced-news news feed this month.
Artificial intelligence can fight fake news
Instead of throttling the flow of valid news to its users and relying on users to suddenly stop sharing fake news instead of the “boring” real news, Facebook should find a better fix. Specifically, it could deploy AI to screen out fake news now and keep up with fraudsters’ evolving tactics over time. To do this effectively, the company would need to have a team of humans compile large sets of real news from reputable, established sources and fraudulent news of the types we saw before the 2016 election. From these datasets, the company could develop classification algorithms to continuously analyze news sources, identify fraud markers, separate valid stories from false information, and flag and remove fake news from the platform going forward, without impeding the spread of real news.
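To make the classification idea concrete, here is a minimal sketch of the kind of text classifier such a pipeline could start from. It is not Facebook's system and not ClearSale's tooling; it is an illustrative naive Bayes-style scorer trained on a tiny hypothetical dataset of labeled headlines, where click-bait wording stands in for the "fraud markers" a production model would learn from much larger corpora.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase and strip basic punctuation from each word."""
    return [w.strip(".,!?\"'").lower() for w in text.split()]

def train(labeled_examples):
    """Count word frequencies per class ("real" or "fake")."""
    counts = {"real": Counter(), "fake": Counter()}
    totals = {"real": 0, "fake": 0}
    for text, label in labeled_examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    vocab = set(counts["real"]) | set(counts["fake"])
    return counts, totals, vocab

def fake_score(text, counts, totals, vocab):
    """Laplace-smoothed log-odds that text is fake; positive means fake-leaning."""
    v = len(vocab)
    score = 0.0
    for tok in tokenize(text):
        p_fake = (counts["fake"][tok] + 1) / (totals["fake"] + v)
        p_real = (counts["real"][tok] + 1) / (totals["real"] + v)
        score += math.log(p_fake / p_real)
    return score

# Toy training set (hypothetical headlines, for illustration only).
training_data = [
    ("Senate passes budget bill after lengthy debate", "real"),
    ("Mayor announces new transit funding plan", "real"),
    ("SHOCKING secret THEY don't want you to know", "fake"),
    ("You won't BELIEVE what this candidate did", "fake"),
]

counts, totals, vocab = train(training_data)
print(fake_score("SHOCKING truth they don't want you to know", counts, totals, vocab))
print(fake_score("Mayor announces budget plan", counts, totals, vocab))
```

A real deployment would replace the word counts with far richer signals — source reputation, sharing patterns, account behavior — and retrain continuously as fraudsters adapt, but the core step is the same: learn from human-labeled examples, then score new stories against what the model has seen.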
By treating fake news as fraud rather than defective journalism, and fighting fake news with fraud-detection tools that other industries use, Facebook could make its network a more effective platform for true civic discourse and political discussion — one where users can still find real news.
Bernardo Lustosa is Chief Operations Officer at fraud-detection firm ClearSale. He also works as a professor at São Paulo’s Fundação Instituto de Administração, where he instructs MBA candidates in data mining for fraud management.