Today marks 15 years since the service then known as TheFacebook was launched. This could be the time to trot out a list of all of Facebook’s accomplishments. But as the company grows into its teen years, the most important thing is not what Facebook has accomplished, but how the environment it’s operating in has changed.
Facebook’s next 15 years are going to look very different from its first 15 because it’s no longer a scrappy startup, having become one of the largest communication platforms on the globe. People no longer look at companies like Facebook with amazement at their ability to connect friends and family around the world. Instead, many wonder with a sense of dread how these companies might turn their family and friends into conspiracy theorists or how long it will be before trolls bombard them with hate speech.
Five years ago, the last time Facebook had a significant birthday, the company had proven to Wall Street that it was successfully transitioning to mobile, having just celebrated its first quarter in which mobile advertising revenue topped $1 billion. The company had also recently launched its Internet.org program (now known as Free Basics) to bring internet access to more developing countries, accompanied by a paper entitled “Is Connectivity a Human Right?” The initiative was positioned as a humanitarian effort, and commentary noting that it was also a way for Facebook to make money by getting more people to use its service was relegated to a paragraph or two in press coverage. Facebook then extended its reach further with pricey acquisitions of WhatsApp and Oculus.
And for the next few years, Facebook continued to fly high. Sure, some users may have been a little freaked out upon learning in 2014 that Facebook had secretly adjusted some people’s feeds to show more negative or positive posts as part of a research project examining the platform’s emotional impact, or that apps built on Facebook could collect your data via your friends. But it was nothing that an apology and a pledge to do better couldn’t fix. Most of these news stories barely made it onto the average Facebook user’s radar, if at all.
That started to change in 2016. The prior year, Facebook had started to play a larger role in distributing news content through Instant Articles and Trending Topics, as a growing portion of Facebook users reported not just sharing photos and status updates on Facebook, but also getting their daily dose of news through the platform. The company’s foray into news came just in time for the U.S. presidential election, which proved to be one of the most divisive in recent memory.
Ask many U.S. Facebook users, and they can likely pinpoint a time in the run-up to the 2016 election when one of their Facebook friends shared an obviously fake news story — say, that then-candidate Donald Trump had been endorsed by Pope Francis, or that Hillary Clinton was on her deathbed — apparently believing it to be true. And pretty much everyone can remember an argument about which candidate to support breaking out in the comments section of at least one of their posts.
Beyond the anecdotal evidence, numerous articles found that pages and accounts dedicated to spreading hyperpartisan fake news were becoming more active on Facebook, Twitter, and YouTube. Sometimes Facebook itself spread fake news even further: its algorithms would insert fabricated stories into Trending Topics — a mistake that grew more frequent after Facebook fired the feature’s human editors over claims that they were suppressing conservative news stories.
These kinds of unpleasant or unsettling experiences gave Facebook users all the more reason to either take a break from the platform or reconsider their use of it entirely. Then Donald Trump was elected president, delivering the Republican Party a big win. Given that research has shown conservative voters in the U.S. are more likely to share fake news, his victory sparked questions about whether fake news had played a role in swaying voters. (While fake news was spread on Facebook, Twitter, and YouTube, Facebook became most associated with the issue because it had the most users.)
Zuckerberg tried to quickly pump the brakes on that line of questioning. Speaking at a tech conference held just days after the election, Zuckerberg called the notion that fake news shared on Facebook had influenced the election in any way a “pretty crazy idea,” maintaining that “voters make decisions based on their lived experience.” It was the real-life version of the “nothing to see here” GIF. But for users who watched their family and friends spend more and more time sharing fake news on the platform, what happened on Facebook was their lived experience. Zuckerberg quickly walked back his comments, but the damage was done — it was now crystal clear to Facebook users that the company either wasn’t tuned into what was actually happening on its platform, or was willfully ignoring it.
And it wasn’t just in the U.S. that fake news was making headlines. In the periods leading up to the U.K.’s Brexit referendum, the Philippines’ election of hardliner Rodrigo Duterte, and, most recently, Brazil’s election of Jair Bolsonaro, users turned to Facebook or its other apps — Instagram and WhatsApp — to share hyperpartisan or fake political news. In a horrifying turn of events, fake news spread on Facebook even served as a pretext for ethnic cleansing, as was the case in Myanmar.
Fears about fake news in the U.S. became more pronounced when it was revealed that one of the largest purveyors of fake news was the Russian troll farm Internet Research Agency (IRA), whose content was spread by fake profiles claiming to be run by people in the U.S.
But Facebook’s biggest U.S. scandal to date came when it was revealed in March 2018 that the company had failed to stop Cambridge Analytica — a data analytics firm employed by the Trump campaign — from improperly obtaining data on up to 87 million Facebook users and using that data to create psychological profiles of U.S. voters and target ads to them on Facebook.
This scandal crystallized for many Facebook users the idea that data they shared on Facebook could be used to create ads with the express purpose of manipulating them. And it brought home the fact that these kinds of tactics could have real-world consequences — such as perhaps swaying the election in favor of President Trump (though the effectiveness of Cambridge Analytica’s ad targeting is highly disputed).
It’s hard to overstate the impact the Cambridge Analytica saga has had on Facebook’s activities since March 2018. Fallout from the incident resulted in Mark Zuckerberg testifying in front of both chambers of Congress for the first time. It also prompted the company to conduct a widespread audit of developers on its platforms and to commit to creating a button that would allow users to clear data associated with their accounts. And it increased the sense of urgency with which Facebook moved to implement other measures to fight fake news, like rolling out identity verification for political advertisers and working with Twitter, Google, and other platforms to spot and remove foreign influence operations more quickly.
So have the last few years left Facebook more vulnerable to losing users and advertisers? Not if its last earnings call is any indication. Facebook celebrated a record $16.9 billion in quarterly revenue, and despite flat growth in most of North America and Europe, its user count has climbed to 2.32 billion monthly active users and 1.52 billion daily active users.
Some have expressed surprise that years of news about Facebook being used to spread propaganda and suck up user data hasn’t done more damage to its numbers. But it’s unrealistic to expect a service with more than a billion users to shed hundreds of millions of users in a quarter, or even a year.
It will be hard to predict what Facebook’s next 15 years will look like until we see how countries around the world start regulating the company and other social platforms — whether that’s through data privacy laws like GDPR or content laws like Germany’s NetzDG, which requires Facebook to invest more resources in deleting hate speech.
But what’s clear as Facebook turns 15 is that users have more reason than ever to harbor reservations about remaining on the platform. Of course, the longer anyone uses a particular service, the more likely they are to have a bad experience with it. But users are also more attuned than ever to the dark side of social media. They no longer look at it as simply a fun way to broadcast their thoughts to the world and share photos with family and friends; they increasingly understand it as a tool that can distort their sense of reality and influence their political landscape. They might not ditch the service tomorrow — but for many, the thought of doing so is firmly ingrained in the back of their minds.
This means Facebook can no longer acquire a new messaging app, unveil tech that lets you hear through your skin, or build an app that collects all of a user’s smartphone activity for the purpose of market research without lawmakers and users at best expressing skepticism and at worst being creeped out. I don’t think that’s a bad thing. As the last five years have shown, what happens on Facebook has real-world consequences. And it would behoove the world — from Wall Street to Silicon Valley — to think more seriously about those ramifications, rather than mindlessly celebrating every time Facebook hits a new milestone.