Facebook has revealed its latest efforts to curb the spread of “revenge porn” across the social network.
Revenge porn — making intimate photos or videos public without the subject’s consent — has become far easier to perpetrate in our always-on age of camera phones and messaging apps. While many countries now have laws that specifically address revenge porn, legislation alone doesn’t stop the practice. And by the time an intimate image has been shared, legal recourse will likely come too late for the victim.
Like several other technology companies, Facebook has introduced a number of tools over the years to tackle revenge porn. Back in 2017, it launched a Report button to make it easier to flag intimate content shared on Facebook, Messenger, or Instagram. At the time, it also said it would use photo-matching technology to ensure that an image reported as revenge porn is not reshared in the future.
Later, Facebook allowed users to take a more proactive approach: by submitting an intimate image preemptively, they could have Facebook create a digital fingerprint (hash) of it and automatically block any future attempts to share it publicly.
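To make that mechanism concrete, here is a minimal sketch of how hash-based blocking can work, assuming a simple perceptual “average hash” and a Hamming-distance comparison. Facebook has not published its exact method, so the function names, the 8x8 hash, and the threshold below are illustrative assumptions, not the company’s actual system.

```python
# Illustrative sketch of hash-based image blocking (assumed design,
# not Facebook's published implementation). Requires Pillow.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Compute a simple perceptual "average hash" of an image.

    The image is shrunk to size x size grayscale pixels; each pixel
    brighter than the mean becomes a 1 bit, otherwise a 0 bit.
    """
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = "".join("1" if p > avg else "0" for p in pixels)
    return int(bits, 2)


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Fingerprints of images a victim has asked to block (hypothetical store).
blocklist: set[int] = set()


def register_blocked_image(path: str) -> None:
    """Store only the fingerprint, never the image itself."""
    blocklist.add(average_hash(path))


def is_blocked(path: str, threshold: int = 5) -> bool:
    """Flag an upload whose hash is near any blocked fingerprint.

    A small Hamming distance tolerates re-encoding and minor edits;
    the 5-bit threshold is an arbitrary illustrative choice.
    """
    h = average_hash(path)
    return any(hamming_distance(h, b) <= threshold for b in blocklist)
```

In a real deployment the matching would run server-side at upload time against a far more robust perceptual hash, but the flow is the same idea: fingerprint once, compare on every upload.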
Burden
With its latest endeavor, Facebook is striving to remove the burden from the victim altogether by automatically detecting “near nude” images or videos that are shared without permission across Facebook or Instagram.
“This means we can find this content before anyone reports it, which is important for two reasons: Often victims are afraid of retribution so they are reluctant to report the content themselves or are unaware the content has been shared,” said Antigone Davis, Facebook’s global head of safety, in a blog post.
Moving forward, Facebook’s machine learning systems will flag visual content that may constitute revenge porn, and a human reviewer will then assess it.
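That flag-then-review flow can be pictured as a simple moderation queue. Facebook has not published the internals of its pipeline, so the threshold, queue, and function names below are hypothetical placeholders; the sketch only captures the division of labor the company describes, where a model triages uploads and a person makes the final call.

```python
# Hypothetical sketch of an ML-flag + human-review pipeline.
# Names, threshold, and structure are illustrative assumptions,
# not Facebook's published design.
from dataclasses import dataclass
from queue import Queue


@dataclass
class Post:
    post_id: str
    media_path: str


@dataclass
class ReviewItem:
    post: Post
    score: float  # model's estimated probability of a policy violation


review_queue: "Queue[ReviewItem]" = Queue()

FLAG_THRESHOLD = 0.8  # illustrative value; real thresholds are tuned


def triage(post: Post, score: float) -> None:
    """Step 1: enqueue posts the model flags as likely violations.

    In production the score would come from an image classifier; it is
    passed in here because this sketch has no real model.
    """
    if score >= FLAG_THRESHOLD:
        review_queue.put(ReviewItem(post, score))


def human_review(item: ReviewItem, violates: bool) -> str:
    """Step 2: a human reviewer makes the final decision.

    Per the article: confirmed violations are removed, the sharing
    account is usually disabled, and an appeals process exists.
    """
    if violates:
        return f"remove {item.post.post_id}, disable account, allow appeal"
    return f"no violation found for {item.post.post_id}"


# Example: the model flags an upload, then a reviewer confirms it.
triage(Post("p1", "upload.jpg"), score=0.93)
if not review_queue.empty():
    print(human_review(review_queue.get(), violates=True))
```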
“If the image or video violates our Community Standards, we will remove it, and in most cases we will also disable an account for sharing intimate content without permission,” Davis added. “We offer an appeals process if someone believes we’ve made a mistake.”
In addition to the new auto-detection technology, Facebook is launching a support hub called Not Without My Consent, which offers tools and resources for revenge porn victims.
Privacy
It is impossible for Facebook to micromanage every single piece of content shared on its platform, which is why artificial intelligence (AI) has played an increasingly important part in its moderation efforts. Over the years, however, Facebook has courted controversy over the way its algorithms decide what content is pushed to the masses and what is blocked. Case in point: In 2016 it removed the Pulitzer Prize-winning “napalm girl” photo for featuring nudity and banned the writer who shared it. The same year, Facebook lost a bid to prevent a lawsuit from a 14-year-old girl whose nude photos appeared on the social network.
While these cases reveal the imperfections of Facebook’s automated moderation, they also show that the company is fighting an uphill battle to salvage its reputation after a year rife with privacy scandals. And despite CEO Mark Zuckerberg’s recent announcement that the company would pursue a “privacy first” approach in the future, many remain skeptical.