Perhaps the most dystopian thing I’ve heard this year — and AI reporters hear a lot of dystopian things — came from Facebook executive Regina Dugan, head of the social media giant’s Building 8 research lab. During the F8 developer conference in April, where she unveiled next-gen tech being developed to let you feel words through your skin and type with your brain, she paused to say that nobody has the right to read your mind.
“Now, to be clear, we are not talking about decoding your random thoughts. That might be more than any of us care to know, and it’s not something any of us should have a right to know,” Dugan said.
Facebook is a social media empire, one of the biggest communication providers in human history. It is a company so advanced in communication that it’s actively working on reading your brain activity to drive social interaction and give you the ability to type 100 words a minute with your mind.
So why, as the company asks its community of more than two billion users for help answering tough questions like how it should tackle terrorism or what happens to an account when a person dies, is Facebook asking people to send it an email?
One of two things appears to be happening here: Either Facebook doesn’t want to use its own platforms to talk about its shortcomings, or — and this seems just as likely — Facebook just doesn’t care about your feedback and its call for input Thursday was an empty gesture.
Facebook has pages, groups, Facebook Live video, and the relatively new augmented reality experience Spaces. And of course, Facebook isn’t just Facebook. It’s also WhatsApp, Facebook Messenger, and Instagram. Messenger and WhatsApp are two of the largest chat apps on the planet, with a combined 2.4 billion monthly active users between them.
On Thursday, Facebook published a blog post announcing Hard Questions, a series of posts that will venture into topics that have recently put Facebook in a negative light, like:
- Who gets to define what’s false news — and what’s simply controversial political speech?
- Is social media good for democracy?
- How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
In the first installment of Hard Questions, Facebook got into how it removes terrorism-related content from its platform. More humans will be hired to review questionable posts, and new AI is being deployed to detect the specific words users employ to flag content. But the post reads more like a business covering its ass before governments, regulators, and legal battles than an attempt to get into the weeds on how it will do things differently in the future.
German courts are currently considering the question of what should happen with a person’s social media account after they die, and a German government minister recently suggested that social media platforms like Facebook be fined up to €50 million if they fail to remove hate speech in a timely manner. British parliamentary officials have made similar recommendations following a critical report from the House of Commons that states that Facebook has repeatedly failed to remove content related to terrorism or hate speech in a timely manner.
More to the point of its call to action, the terrorism post contains no input from the Facebook community. I suppose that makes sense, since it was published just an hour or so after the launch of the Hard Questions series Thursday, but it’s not a sufficient start.
Facebook can tell you about every post by close friends, about events, about new pictures, and when a friend is livestreaming. To address the terrorism question and others, Facebook should use Facebook.
The company could put a prompt in News Feeds everywhere inviting friends to have a conversation on the topic on Messenger or in Spaces, or even gather responses with a Messenger bot.
If Facebook is serious about having these conversations, it can use its own platforms and bring its wealth and full arsenal of communication tools to bear. It can turn its massive machine around and listen to the world in chat, social media, video, and even augmented reality — not just ask people to send it an email.