Twitter is partnering with academics to curb hate speech, polarized discourse

Twitter's logo on display in San Francisco, California.
Image Credit: Ken Yeung / VentureBeat

Twitter is providing support to two groups of academics looking to study the prevalence of some of the social platform’s most problematic content — as well as how Twitter can make people more open to different viewpoints.

In March, Twitter put out a request for proposals, asking academics what kind of metrics the company should use to determine how healthy the discourse on Twitter is. Twitter looked for researchers who were willing to produce “peer-reviewed, publicly available, open-access research articles and open source software whenever possible.” The company said it received 230 proposals, and organized a review committee consisting of individuals from a variety of departments within Twitter to judge the proposals.

The company announced today that it has settled on two proposals to support. Both groups of researchers will receive access to public data, as well as funding from Twitter, though a company spokesperson declined to say how much.

One, led by a researcher at Leiden University in the Netherlands, will develop metrics to gauge the prevalence of echo chambers and uncivil discourse on Twitter. On the uncivil discourse front, the researchers aim to create algorithms that can distinguish between “incivility” and “intolerance” in Twitter conversations. They define intolerance as “hate speech, racism, and xenophobia,” while incivility is dialogue that “breaks the norms of politeness.”


The second study, led by a pair of researchers at the University of Oxford and the University of Amsterdam, will examine whether interacting with people from a wide variety of backgrounds and perspectives on Twitter can reduce prejudice and discrimination among users.

Previous research has shown that different political groups have very little interaction with one another on Twitter. Of course, some users may have become polarized in the first place because social platforms like Twitter and Facebook use algorithms that steer users toward the content and accounts they are most likely to engage with, often ones that share their political and social views.

Ultimately, the success of these studies will be determined by whether Twitter actually listens to the researchers’ feedback and uses it to make meaningful changes to the platform. The company has spent the past few months highlighting changes it believes will lead to a “healthier conversation” on Twitter, such as acquiring Smyte, an anti-spam startup that has developed review tools to catch spam and hateful content, and limiting the visibility of tweets from trolls and bullies.