Facebook is now assigning users a score between 0 and 1 that helps it determine whether they’re accurately reporting instances of fake news, the Washington Post reports.
Three years ago, Facebook first gave users the ability to report posts in their News Feed that contained false or misleading news. The hope was that these reports would help Facebook catch hoaxes more quickly, before they spread.
However, Facebook quickly discovered that people weren’t always reporting fake news because it was actually fake. Often, they were just flagging posts they disagreed with. The company also started employing third-party fact-checkers last year as the final arbiters of whether a post should be labeled fake news, but that still doesn’t stop people from incorrectly flagging articles in the first place.
“If someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true,” Facebook product manager Tessa Lyons, who confirmed the existence of the score, told the Washington Post in an email.
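Facebook hasn’t disclosed how the score is actually computed. As a minimal sketch of the idea Lyons describes, one could imagine something like a smoothed fraction of a user’s past reports that fact-checkers confirmed; everything below, from the class names to the smoothing formula, is invented for illustration and is not Facebook’s method.

```python
# Hypothetical sketch of a reporter-credibility score in [0, 1].
# Facebook has not published its approach; this uses a smoothed
# fraction of a user's past reports confirmed by fact-checkers.

from dataclasses import dataclass

@dataclass
class Reporter:
    confirmed_reports: int = 0   # reports fact-checkers rated false
    total_reports: int = 0       # all false-news reports filed

    def credibility(self) -> float:
        # Laplace smoothing keeps brand-new reporters near a neutral
        # 0.5 and bounds the score strictly between 0 and 1.
        return (self.confirmed_reports + 1) / (self.total_reports + 2)

    def record_outcome(self, confirmed: bool) -> None:
        self.total_reports += 1
        if confirmed:
            self.confirmed_reports += 1

# A careful reporter's feedback ends up weighted more heavily than
# that of someone who flags many articles indiscriminately.
careful, indiscriminate = Reporter(), Reporter()
for _ in range(9):
    careful.record_outcome(confirmed=True)
careful.record_outcome(confirmed=False)
for _ in range(10):
    indiscriminate.record_outcome(confirmed=False)

print(careful.credibility())          # ~0.83
print(indiscriminate.credibility())   # ~0.08
```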
The Post reports that the score is just one criterion Facebook uses to determine whether a story should be reviewed further by its fact-checkers. But the company isn’t giving away many more details about what goes into the algorithm, to keep people from gaming the system. Facebook also did not tell the Post when it started assigning credibility scores. A Facebook spokesperson, in a statement provided to VentureBeat, emphasized that it’s not a “centralized” score given to all people who use Facebook; it’s used only in the company’s efforts to flag and identify fake news.
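That description suggests the score is one signal among several feeding a ranking of which flagged stories fact-checkers should look at first. A hypothetical sketch of such a ranking, assuming credibility-weighted flag counts are combined with a reach signal; the signal names, weights, and normalization here are all invented for illustration:

```python
# Hypothetical sketch of using reporter credibility as one signal
# among others when prioritizing articles for fact-checker review.
# Nothing here reflects Facebook's actual signals or weights.

def review_priority(report_scores: list[float], shares: int) -> float:
    """Combine credibility-weighted reports with a reach signal.

    report_scores: credibility (0-1) of each user who flagged the article
    shares:        how widely the article has spread
    """
    weighted_flags = sum(report_scores)  # ten weak flags can count for
                                         # less than two strong ones
    reach = shares / 1000                # illustrative normalization
    return weighted_flags + reach

queue = sorted(
    [
        ("article_a", review_priority([0.9, 0.8], shares=5000)),
        ("article_b", review_priority([0.1] * 10, shares=200)),
    ],
    key=lambda item: item[1],
    reverse=True,
)
print(queue)  # article_a outranks article_b despite far fewer flags
```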
This so-called reputation score highlights another way Facebook is attempting to deal with bad actors who repeatedly flout its terms of service or target individuals they disagree with, in this case by falsely reporting their posts as hoaxes.
Earlier this year, Twitter also announced that it would take into account more behavioral signals when determining which search results and replies to show users, such as how often a person gets blocked by people they reply to. The idea is that by placing more weight on these signals, Twitter can limit the ability of trolls and bad actors to degrade other users’ experience on the platform.
It’s understandable why Facebook would want to take into account a user’s past behavior to determine how credible they are likely to be going forward, but the news raises questions about how transparent social platforms need to be when making changes to the algorithms that determine which users they are more likely to listen to, or whose posts will be shown more prominently. It’s unclear whether Facebook ever planned to reveal to users that they were being judged on how accurately they have been flagging posts.
Updated at 11 a.m. Pacific: Updated with further information from Facebook on how the score is used.