Social networks today are undergoing a massive paradigm shift as companies realize that content moderation and positive reinforcement can actually increase user retention and encourage growth. Join veteran analyst and author Brian Solis, and Two Hat CEO Chris Priebe, to learn how moderation can make your social platforms safer, more engaging, and ultimately more profitable.
Most developers know that if you want to boost the success of your app, website, or game, you add a social element: you introduce a community to encourage interaction and content sharing, says Chris Priebe, CEO and founder of Two Hat Security.
“Our studies have shown that users who engage in chat or in social features are three times more likely to come back on day two, and three times more likely to come back again on day seven,” Priebe says. “That’s huge. People stay longer and they pay more. But too many products have died early because they didn’t really think through the social dynamic of what they were creating.”
Likewise, it’s a better experience for the user: the product becomes a destination and, in the case of gaming, a reason to return beyond even the pleasure of playing the game itself. Even if users lose interest in the initial draw of your product, they keep coming back because that’s where their friends are.
“If we can help them find those hooks, they make friends and keep friends online, and they stay for years,” he explains.
The other big problem, and one that’s becoming increasingly serious, is that users might love the product but be driven away in droves by a toxic community. Studies have shown that a user who experiences negative behavior is three times more likely to quit than one who doesn’t. And once you lose their eyeballs, and then their subscriptions, you’re doomed.
“If there’s anything that’s going to cause you to lose three times your users, more than any other feature you could possibly create, that must be your most important element that you have to be fixing — that’s a huge cost,” he says.
He does the math: buying a loyal user through advertising costs about $7.52 on the Apple App Store and about $2.05 on Android. If a product needs a million users to succeed, that’s more than $7.5 million in acquisition spend.
“If you have this drain in the background sucking down three times as many users, you’re just taking $7.52 times a million and throwing it down the drain,” Priebe says.
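For a rough sense of the scale Priebe is describing, here is a back-of-the-envelope sketch. The per-user acquisition costs are the figures he cites; the share of users driven off by a toxic community is an invented placeholder, since the article only says such users are three times more likely to quit.

```python
# Back-of-the-envelope math using the acquisition costs Priebe cites.
# The toxic-churn share is a made-up placeholder for illustration only.

COST_PER_USER_IOS = 7.52      # advertising cost to acquire a loyal iOS user
COST_PER_USER_ANDROID = 2.05  # advertising cost to acquire an Android user
USERS_NEEDED = 1_000_000      # users required for the product to succeed

acquisition_spend_ios = COST_PER_USER_IOS * USERS_NEEDED
print(f"iOS acquisition spend: ${acquisition_spend_ios:,.0f}")  # $7,520,000

# Hypothetical: if a toxic community drives away 20% of those users,
# that fraction of the ad spend is simply thrown down the drain.
toxic_churn_share = 0.20
wasted_spend = acquisition_spend_ios * toxic_churn_share
print(f"Ad spend lost to toxicity-driven churn: ${wasted_spend:,.0f}")
```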
Tackling the problem requires both humans and technology, he says. AI is incredibly advanced and can recognize patterns and images, but the final judgment call often needs to be made by a human, who can understand the context of a photo or other content, as well as the nuance that separates, say, a person cooking a meal from a person cooking a bomb, or Michelangelo’s David from hard-core porn.
But AI is providing an essential assist, enabling companies to track patterns and identify problematic trends. Priebe describes the technology as a kind of antivirus for a social virus: instead of hunting down harmful computer programs, it looks for hateful and abusive content.
For instance, something like the possibly apocryphal “blue whale challenge,” in which users on social media sites reportedly encouraged teenagers to complete a series of tasks leading up to self-harm and suicide, could be spotted swiftly as a new, harmful trend, then templated and added to an AI model. Matching posts can be flagged for review, and sites can also be protected proactively from those kinds of posts.
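As a loose illustration of the idea (not Two Hat’s actual system, whose internals are not described here), a newly identified harmful trend can be turned into a template whose patterns flag matching posts for review. The template name, patterns, and functions below are invented for this sketch.

```python
import re
from dataclasses import dataclass

@dataclass
class TrendTemplate:
    """A pattern set describing a newly identified harmful trend."""
    name: str
    patterns: list  # regular expressions associated with the trend

# Hypothetical template for a trend like the one described above.
harmful_trend = TrendTemplate(
    name="blue whale challenge",
    patterns=[r"blue\s+whale", r"whale\s+challenge"],
)

def flag_for_review(post: str, templates: list) -> list:
    """Return the names of any trend templates a post matches."""
    matches = []
    for template in templates:
        if any(re.search(p, post, re.IGNORECASE) for p in template.patterns):
            matches.append(template.name)
    return matches

print(flag_for_review("join the blue whale challenge tonight", [harmful_trend]))
# ['blue whale challenge']
```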
The key is determining what kind of social environment you’re creating, Priebe says, and then writing your content rules with intent. A nightclub and a children’s park, for example, set very different expectations for your users, and those expectations need to be made explicit.
“That will start defining what the boundaries are,” he says. “Whenever a user first arrives, there needs to be social cues, like terms of use, and they should be much more blatant than they are now. People should have much more explicit instructions when they enter a new social network, and then you need to set the threshold at which behavior that does not meet your expectations is penalized.”
That means two lines of defense: the boundaries you set up, and then the point at which offensive behavior crosses the line far enough to get reported by other users. When reports come in, you can let AI handle the blatantly obvious cases, while humans take care of the ones that require exceptions, empathy, forgiveness, understanding, and context.
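A minimal sketch of that two-tier routing, assuming a hypothetical classifier that returns an abuse-confidence score, might look like this. The thresholds, function name, and labels are assumptions made for illustration, not a description of any vendor’s product.

```python
# Hypothetical routing logic: AI handles the blatant cases automatically,
# while anything borderline or user-reported goes to a human moderator.

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: classifier is nearly certain it's abusive
AUTO_ALLOW_THRESHOLD = 0.10    # assumed: classifier is nearly certain it's benign

def route(post: str, abuse_score: float, reported_by_users: bool) -> str:
    """Decide what happens to a post given a model score and user reports."""
    if abuse_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"        # blatantly over the line
    if reported_by_users or abuse_score > AUTO_ALLOW_THRESHOLD:
        return "human-review"       # needs context, empathy, and judgment
    return "allow"                  # clearly within the community's boundaries

print(route("example post", abuse_score=0.97, reported_by_users=False))  # auto-remove
print(route("example post", abuse_score=0.40, reported_by_users=True))   # human-review
```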
To learn more about establishing guidelines for your online community, how to set up your defenses while still encouraging social interaction, and the benefits of smart moderation, don’t miss this VB Live event!
Don’t miss out!
You’ll learn:
- How to start a dialogue in your organization around protecting your audience without imposing on free speech
- The business benefits of joining the growing movement to “raise the bar”
- Practical tips and content moderation strategies from industry veterans
- Why a blend of AI+HI (artificial intelligence + human interaction) is the first step towards solving today’s content moderation challenges
Speakers:
- Brian Solis, Principal Digital Analyst at Altimeter, author of “Lifescale”
- Chris Priebe, CEO & founder of Two Hat Security
Sponsored by Two Hat Security