Sometimes it’s best to feed the trolls

Responding to online rants and insults can change behavior, data show

Miss America 2014 Nina Davuluri. After Davuluri was crowned in 2014, the Twittersphere was abuzz with disparaging (and sometimes erroneous) remarks about her ethnicity and religion; community attempts to correct such tweets drew some retractions and apologies. Lev Radin/Shutterstock

If you’ve been to the Internet, you’ve probably encountered a troll. That’s the nickname given to the people behind nasty or inflammatory posts in online outlets. Trolls seem to revel in sowing discord, provoking and tormenting other readers. “Don’t feed the trolls” is often considered the best response for dealing with such commenters, and data suggest that it’s effective: A recent Pew Research Center survey found that of people who did nothing in response to an incident of online harassment, 83 percent felt the ignoring tactic worked.

But the same survey found that of people who responded to the harassment, 75 percent felt that the tactic of engaging with the harasser worked. These intriguing data fit with several emerging lines of evidence suggesting that responding to online ranting can influence the behavior of the ranter. While censorship can further rile trolls, the right kind of reaching out can spur repentance. Sometimes, it’s good to feed the trolls. 

In a talk at Harvard’s Berkman Center for Internet & Society last spring, American University’s Susan Benesch highlighted some of this emerging research exploring how countering inflammatory speech on social media with more speech can change the discourse. (For a summary of her talk, see this post by Ethan Zuckerman, director of the Center for Civic Media at MIT.) Benesch pointed to several instances where people’s responses to inflammatory speech ballooned into a larger civic shaming that resulted in apologies and a desire by the speaker to take back their words.

After Nina Davuluri was crowned Miss America in 2014, for example, the Twittersphere was abuzz with disparaging remarks, including comments that managed to insult several groups at once by calling her an Arab and Muslim as if that were a bad thing (her parents are from India). Tweeters responded to the hate with posts like “One day I hope you realize how shameful this tweet is,” and “Don’t just hate her for her skin color, she is American like anybody else.” The engagement seemed to have an effect. Benesch noted that a few hours after his initial inflammatory post, one tweeter recanted: “@MissAmerica sorry for being rude and “racist” and calling you an Arab please tweet back so everyone will know its real” (sic).

Benesch also shared results from her own efforts in Kenya, where she worked with data scientists to build Umati (from the Swahili word for crowd), a project to monitor inflammatory speech in the weeks surrounding the March 2013 Kenyan presidential election. The researchers noticed that there was far more inflammatory speech on Facebook than on Twitter. Diving into the data revealed that tweets deemed unacceptable were openly shunned, which may have kept the inflammatory speech at bay. When Kenyans on Twitter (#KOT) called out the hateful remarks on the social media platform — a phenomenon the researchers dubbed “KOT cuffing” — trolls often backed down. One inflammatory tweeter responded with, “Sorry guys, what I said wasn’t right and I take it back, lesson learned.”

Some of the most interesting data on trolling come from efforts by Jeffrey Lin, a neuroscientist by training and lead game designer of social systems at Riot Games. Lin and his colleagues wanted to encourage players of the online game League of Legends to play nice. So they did some experiments. In one instance, Lin and his team set things up so that banned players were sent reform cards showing the evidence that led to their penalties, and players could share these cards with the community. When some banned players posted their ban notices, the community responded by pointing out the banned players’ toxic behavior. Once they were allowed to play again, more than 70 percent of players who got a reform card never offended again. Some inflammatory players also e-mailed their regrets to the design team, including one who wrote: “This is the first time someone told me that you should not say the ‘N’ word online. I am sorry and I will never say it again.”

The gaming data suggested that roughly half of the toxic messages weren’t from trolls, but from people better described as having a bad day and lashing out.

These results all raise an important point: trolls are people. Like many groups of people, they aren’t homogeneous, but have a variety of world views, intentions and goals. (This taxonomic diversity is why people who study trolls often define them by their behavior; there isn’t a single category of person that is Troll). Online harassment that is truly scary and dangerous exists. But there has always been scary and dangerous speech; the Internet just makes us privy to it in a new, and very public, way. (Benesch notes that many of us might have never heard someone tell a rape joke before the Internet, but that doesn’t mean those jokes weren’t being told). These early data suggest that when dealing with hateful speech, Supreme Court Justice Louis Brandeis’s words from long before the Internet still hold: “the remedy to be applied is more speech, not enforced silence.”

While some of the emerging evidence is anecdotal, the data do suggest that, rather than responding in a way that escalates anger by censoring troll speech, a public reprimand might work better in some cases. Which responses work best, and on which platforms and in which communities, needs further study. Some research shows, for example, that negative feedback on nasty news-site posts in the form of down-votes actually encourages trolls to post more. Those researchers also found that censoring can exacerbate antisocial behavior: unfair banning of commenters by moderators on news sites seemed to foment toxic posting.

Benesch also notes that efforts to change the minds of extreme haters online are probably futile. But shifting community norms often isn’t about silencing extremists; it’s about influencing a critical mass — “the malleable middle.” And that’s where a lot of trolls seem to live.

So the next time you read a hateful remark, try reaching out. Feed that troll. Some of them are just people who are hangry.

Editor’s note: This story was updated on April 27, 2015 to correct a typo in the fifth paragraph. Unacceptable tweets, not acceptable ones, were openly shunned.
