As long as there are power imbalances, insecurities, and competition among humans, there will be bullies. Name a decade, a century, or a place, and an era-appropriate equivalent of the toilet swirlie doubtless occurred there.
The digital age amplified this dynamic to the extreme, empowering those who seek to intimidate their targets with a bigger “toilet” to flush faces in: the internet. Bullying was once confined to schoolyards, but today’s kids (and some adults) take it everywhere they go. They take it on their phones, on their computers, on their tablets, and anywhere else connected to the web, where our online identities are all but hopelessly entwined with our physical ones.
Cyberbullying is a serious problem that has claimed the lives and sanity of too many — and one that has not been helped by the growing ubiquity of the internet. The influence of the web in our day-to-day lives is not going away anytime soon. So it only makes sense that technology, which has abetted bullying, will be part of the solution too.
Who and what can drive this force for good? Here’s a bold proposition: The First Lady of the United States, using artificial intelligence.
Battling online bullies with bots
FLOTUS Melania Trump proposed tackling cyberbullying as a signature issue early in her husband’s administration, a crusade she finally returned to this September in a U.N. speech. Ironic though it may seem given the President’s tweeting habits, let’s take Melania at her word and hope that she uses every advantage at her disposal to make a difference.
Might Melania consider tackling tech with tech? And specifically, by supporting AI initiatives?
Using AI as a way to identify online abuse as it occurs is a striking idea, certainly, but not nearly as far-fetched as it sounds. If anything, it would be a shame to overlook an opportunity to address the issue of cyberbullying while also promoting AI as something that can help humanity instead of hurt it.
How would this work? Currently, the burden of identifying and penalizing online abuse falls on human moderators, of which there are far too few to catch every instance. A machine, in theory, and increasingly in practice, could be far more efficient. As Wired explained, “If a computer was smart enough to spot cyberbullying as it happened, maybe it could be halted faster, without the emotional and financial costs that come with humans doing the job.”
The state of AI is already fairly advanced, though we are only at the beginning of revealing its potential. Today’s machines can drive cars, create recipes, diagnose diseases, and much, much more. Moderation is a comparatively simple job, so it’s no surprise that the technology is well on its way. In fact, numerous ventures are tackling this very prospect.
At SRI International, the birthplace of the virtual assistant Siri, researchers are developing smart software that can detect and flag online abuse quickly and accurately. One unnamed social media company has already approached them for help on this issue. The company collected and compiled a wealth of data to show the scope of the problem; SRI’s software, it hoped, would be able to curb cyberbullying of all types. This comes with complications: For example, scanning for curse words is easy, but understanding context is not. Fed enough examples of bullying that humans have already identified, a machine can learn to discern between a joke and a taunt and to recognize patterns of abuse.
A prototype of this technology could be ready in six to twelve months, or sooner.
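To make that idea concrete, here is a minimal, hypothetical sketch of the kind of supervised classifier described above: a model trained on messages that human moderators have already labeled, so it can weigh context rather than simply scan for curse words. The example data, labels, and library choices are illustrative assumptions, not SRI’s actual system, which has not been published.

```python
# Illustrative sketch only: a toy classifier trained on human-labeled messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: messages already labeled by human moderators.
messages = [
    "you're such an idiot, nobody wants you here",       # bullying
    "haha you idiot, that movie was hilarious",          # friendly banter
    "go away and never come back, everyone hates you",   # bullying
    "good luck on the test tomorrow!",                    # benign
]
labels = [1, 0, 1, 0]  # 1 = bullying, 0 = not bullying

# TF-IDF features plus a linear classifier: crude, but enough to show how
# labeled examples let a model learn patterns beyond individual curse words.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message; a real moderation pipeline would route high scores
# to a human reviewer rather than acting on them automatically.
print(model.predict_proba(["nobody wants you here, go away"])[0][1])
```

Real systems train on far larger labeled datasets and richer models, but the principle is the same: the labeled examples, not a keyword list, are what teach the machine the difference between banter and abuse.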
SRI isn’t the only company with anti-bullying AI on its radar. Sydney-based software development company KevTech Apps already has a product, SafeKidsPro, on the market. It uses an algorithm that emulates brain function to identify the intent and severity of messages sent over time, and how they might make the recipient feel. It can notify parents if their child is the victim or perpetrator of bullying and identify predatory behavior like grooming or stalking.
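The “over time” element can be sketched in a similarly hypothetical way: keep a rolling window of per-message severity scores and alert a parent only when a sustained pattern emerges, rather than on a single heated message. The window size, threshold, and alert rule below are illustrative assumptions, not SafeKidsPro’s proprietary logic.

```python
# Illustrative sketch only: flag sustained patterns rather than one-off messages.
from collections import deque

class BullyingMonitor:
    """Track severity scores of a child's recent messages and flag sustained abuse."""

    def __init__(self, window=20, threshold=0.6, min_flagged=3):
        self.recent = deque(maxlen=window)  # severity scores of the last `window` messages
        self.threshold = threshold          # score above which a message counts as severe
        self.min_flagged = min_flagged      # severe messages needed before alerting a parent

    def add_message(self, severity_score):
        # severity_score is a 0.0-1.0 value, e.g. from a classifier like the sketch above.
        self.recent.append(severity_score)
        flagged = sum(1 for s in self.recent if s >= self.threshold)
        return flagged >= self.min_flagged  # True means: notify a parent

monitor = BullyingMonitor()
for score in [0.1, 0.7, 0.2, 0.8, 0.9]:
    if monitor.add_message(score):
        print("Alert: a sustained pattern of high-severity messages was detected.")
```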
Do we have hope?
Of course, the implementation of this kind of technology could simply lead to smarter and more cryptic bullies. It also raises some serious questions about censorship, and what constitutes bullying versus an unpopular opinion.
Regardless, Melania Trump could support initiatives that develop and test similar technology. As a tool, AI-fueled software would help families keep their kids safe and help companies curb hate speech. It could also help identify terrorist recruitment online, something even the President might find useful.
If you’re skeptical that it’s possible, consider President Obama, who set a precedent for the White House’s involvement in high-tech affairs. His administration made clear in a report that policymakers should follow the development of AI, promoting its use as a collaborative tool and reining in its potential downsides. It remains to be seen whether President Trump will share this priority, but if he wants to curb the consequences of pressing issues like job automation, he would certainly be wise to.
As for Melania, it may be a pipe dream to expect her to get into the nitty-gritty of cyberbullying, but politics show us time and time again that anything is possible. As she said in her speech, “as adults we are not merely responsible — we are accountable” for issues like cyberbullying affecting the world’s children. That means exploring all opportunities to make things better and using the best innovations available to promote kindness on the web and off — no matter what your husband is tweeting that day.
Debrah Lee Charatan is the cofounder, principal, and president of BCB Property Management.