Facebook has announced plans to reduce the number of “low-quality web page experiences” surfacing in users’ News Feeds.
More specifically, the target of the company’s latest effort to prevent users from ditching the social network is “misleading, sensational and spammy” posts that encourage people to click, only to disappoint by offering little in the way of useful content. Such pages may also include “disruptive, shocking or malicious ads,” according to a statement issued by Facebook earlier today.
Though Facebook already had a policy in place to dissuade advertisers from serving low-quality ads, the company is now stepping up its effort to prevent such posts from appearing in users’ feeds at all. To do so, Facebook said that it reviewed “hundreds of thousands” of web pages linked from Facebook and flagged the ones that provided little useful content or terrible ads. Tapping the wonders of artificial intelligence (AI), the company will automate the process of identifying future posts with similar characteristics, so that those posts show up further down a person’s feed.
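Facebook has not published details of the model, but the general approach it describes — hand-label a set of linked pages, train a classifier on them, then use the classifier’s score as one of many ranking signals — can be sketched roughly as follows. Everything here, from the toy data to the demotion rule, is an illustrative assumption rather than Facebook’s actual implementation.

```python
# Hypothetical sketch: train a text classifier on hand-labeled page examples,
# then use its predicted probability of "low quality" as one ranking signal.
# All names, data, and the demotion factor are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set standing in for the "hundreds of thousands" of reviewed pages.
pages = [
    "You won't believe this one weird trick doctors hate click now",
    "Full-page popup ads and almost no article text below the headline",
    "Quarterly earnings report with detailed revenue and guidance figures",
    "In-depth interview with the research team behind the new battery design",
]
labels = [1, 1, 0, 0]  # 1 = low-quality landing page, 0 = acceptable

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(pages, labels)

def rank_adjustment(page_text: str, base_score: float) -> float:
    """Demote a post in proportion to how low-quality its linked page looks."""
    p_low_quality = classifier.predict_proba([page_text])[0][1]
    return base_score * (1.0 - 0.5 * p_low_quality)  # illustrative demotion factor

print(rank_adjustment("One weird trick, click now, ads everywhere", base_score=1.0))
```

In a real ranking system this score would be combined with many other signals, as Facebook notes below, rather than acting as a hard filter.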
The new algorithm will be rolling out over the next few months.
“Publishers that do not have the type of low-quality landing page experience referenced may see a small increase in traffic, while publishers who do should see a decline in traffic,” the company said. “This update is one of many signals we use to rank News Feed, so impact will vary by publisher, and Pages should continue posting stories their audiences will like.”
Facebook is facing a growing battle against low-quality and misleading content. Earlier this week, it took out a full-page newspaper ad to help U.K. citizens detect fake news. Artificial intelligence is also playing a bigger part in the company’s efforts to manage content posted by its nearly two billion users — earlier this year it revealed it was trialing new suicide prevention tools that use AI to recognize patterns in posts that have previously been associated with suicide.