
Zuckerberg: Facebook’s security investments will ‘significantly impact’ profitability

Facebook CEO Mark Zuckerberg on stage at the company's F8 developer conference in San Francisco, Calif., in October 2015.

Facebook today revealed that, for the first time in the company’s history, it has surpassed $10 billion in quarterly revenue. But on a nearly hour-long earnings call between company executives and financial analysts, the focus was primarily on security investments aimed at preventing Russian meddling like that seen during the 2016 U.S. presidential election.

Neither 500 million daily active users on Instagram nor $10 billion in revenue matters, Zuckerberg said in a prepared statement at the start of the call, “if our services are used in a way that doesn’t bring people closer together or the foundation of our society is undermined by foreign interference.”

“In many places, we’re doubling or more our engineering efforts focused on security and we’re also building new AI to detect bad content and actors just like we’ve done with terrorist propaganda,” he said. “I am dead serious about this, and the reason I’m talking about this on our earnings calls is that I’ve directed our teams to invest so much in security on top of our other investments we’re making that it will significantly impact our profitability going forward, and I wanted our investors to hear that directly from me. I believe this will make our society stronger, and in doing so will be good for all of us over the long term, but I want to be clear about what our priorities are.”

The focus of investments in security, Zuckerberg said, “goes beyond elections” and includes efforts to combat hate speech, bullying, and fake news. The changes underway at Facebook are taking place as members of Congress propose new legislation to regulate political advertising online. Zuckerberg called the issue a national security threat and said “it’s part of our responsibility to society overall.”


Facebook has committed to doubling the headcount of its safety and security workforce, which does things like review ads and flag hate speech, from 10,000 to 20,000 in 2018. Due to investments in infrastructure for growth and spending to bolster security, Facebook CFO Dave Wehner said capital expenditures in 2018 are forecast to double from $7 billion to $14 billion. He added that some of those additional 10,000 workers may be employed by partner businesses rather than by Facebook itself.

Wehner also talked today about advances made in Q3 in detecting duplicate and inauthentic Facebook accounts. Duplicate accounts make up roughly 10 percent of Facebook’s 2 billion monthly active users (MAUs), up from a previously stated 6 percent, and inauthentic accounts make up 2 to 3 percent of global MAUs.

Last week, Facebook announced that in 2018 the company will roll out a tool for tracking who paid for ads, whether or not those ads are political in nature. Political ads will come with a label, and for federal elections such as congressional or presidential races, users will be able to see how much money was spent, browse an archive of previous ads, and view the demographics a Facebook page attempted to reach. Machine learning will be used to track down political advertisers and require them to verify their identity.

COO Sheryl Sandberg took time during the call to highlight efforts to get rid of malicious content beyond political advertisements.

“Because the interference on our platform went beyond ads, we’re also increasing transparency around organic content from pages. We’re looking at ways to provide more information about who’s behind a political or issue-based Facebook page. We believe this will make it harder for deceptive pages to gain large followings and make it easier for us to identify malicious activity,” Sandberg said.

Zuckerberg also fielded questions about whether security measures could impact engagement on Facebook and whether existing AI solutions made by Facebook can be used to mitigate increasing security costs.

“You’re definitely right that a lot of the AI research that we do is applicable to multiple areas,” Zuckerberg said in response to a question about the use of existing AI. “But we still need to build those tools, so it takes a lot of engineering investment, and we will be prioritizing that, in some cases by adding people to teams and in other cases by trading off and doing more security work instead of product work we might have done. But this is really important and this is our priority.”

The earnings call and comments by Zuckerberg, Sandberg, and Wehner coincided with the second day of testimony before a Senate Judiciary Subcommittee about Russian meddling in the 2016 U.S. election. Testimony was offered by Facebook general counsel Colin Stretch, alongside legal representatives from Twitter and Google.

A month ago, we learned that 10 million U.S. users saw Russia-linked ads on Facebook. Two days ago, it was revealed that Russia-linked posts likely reached 126 million people.

Concern over election meddling isn’t limited to the United States. The political ad transparency tool will roll out first in Canada, where national elections are scheduled to take place next year. And Facebook continues to field questions about ads intended to manipulate German voters, following elections there in September.