
How Twitter bots get people to spread fake news

One tactic of automated accounts is to target people with many followers

11:00am, November 20, 2018

FRAUDULENT BOTS  On Twitter, automated accounts called bots duped many human users into spreading misinformation during the 2016 U.S. presidential election.


To spread misinformation like wildfire, bots strike a match on social media and then urge people to fan the flames.

Automated Twitter accounts, called bots, helped spread bogus articles during and after the 2016 U.S. presidential election by making the content appear popular enough that human users would trust it and share it more widely, researchers report online November 20 in Nature Communications. Although people have often suggested that bots help drive the spread of misinformation online, this study is one of the first to provide solid evidence for the role that bots play.

The finding suggests that cracking down on devious bots may help fight the fake news epidemic (SN: 3/31/18, p. 14).

Filippo Menczer, an informatics and computer scientist at Indiana University Bloomington, and colleagues analyzed 13.6 million Twitter posts from May 2016 to March 2017. All of these messages linked to articles on sites known to regularly publish false or misleading information. Menczer’s team then used Botometer, a computer program that learned to recognize bots by studying tens of thousands of Twitter accounts, to determine the likelihood that each account in the dataset was a bot.
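To get a sense of what that classification step looks like, here is a minimal sketch in Python using scikit-learn and invented account features. It is not Botometer's actual model or feature set, only an illustration of the general approach: train a classifier on accounts already labeled human or bot, then score an unseen account.

# A minimal sketch (not the actual Botometer model or feature set) of how a
# classifier can score Twitter accounts as bot-like after training on
# accounts that have already been labeled human or bot.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features: tweets per day, follower-to-friend
# ratio, account age in days, and fraction of tweets that are retweets.
labeled_features = np.array([
    [310.0, 0.02,   40, 0.97],   # labeled bot
    [  4.5, 1.30, 2100, 0.20],   # labeled human
    [250.0, 0.05,   15, 0.97],   # labeled bot
    [  8.0, 0.90, 1500, 0.35],   # labeled human
])
labels = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(labeled_features, labels)

# Score an unseen account: the second column of predict_proba is the
# estimated probability that the account is a bot.
unseen_account = np.array([[120.0, 0.10, 90, 0.88]])
bot_score = model.predict_proba(unseen_account)[0, 1]
print(f"estimated bot likelihood: {bot_score:.2f}")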

Unmasking the bots exposed how the automated accounts encourage people to disseminate misinformation. One strategy is to heavily promote a low-credibility article immediately after it’s published, which creates the illusion of popular support and encourages human users to trust and share the post. The researchers found that in the first few seconds after a viral story appeared on Twitter, at least half the accounts sharing that article were likely bots; once a story had been around for at least 10 seconds, most accounts spreading it were maintained by real people.
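The comparison behind that finding can be sketched in a few lines of Python; the data and column names below are invented for illustration and are not drawn from the study.

# A small sketch, using made-up data, of the kind of comparison described
# above: what share of the accounts tweeting a link in its first few seconds
# look like bots, versus the accounts that tweet it later.
import pandas as pd

# Hypothetical tweet-level data: when each account shared the article
# (seconds after it first appeared) and that account's bot score.
tweets = pd.DataFrame({
    "seconds_after_first_share": [1, 2, 3, 4, 12, 30, 45, 60, 90, 120],
    "bot_score": [0.9, 0.8, 0.7, 0.9, 0.3, 0.2, 0.6, 0.1, 0.2, 0.4],
})

likely_bot = tweets["bot_score"] >= 0.5           # threshold is illustrative
early = tweets["seconds_after_first_share"] <= 10

early_bot_share = likely_bot[early].mean()
later_bot_share = likely_bot[~early].mean()
print(f"likely bots among early sharers: {early_bot_share:.0%}")
print(f"likely bots among later sharers: {later_bot_share:.0%}")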

“What these bots are doing is enabling low-credibility stories to gain enough momentum that they can later go viral. They’re giving that first big push,” says V.S. Subrahmanian, a computer scientist at Dartmouth College not involved in the work.

The bots’ second strategy involves targeting people with many followers, either by mentioning those people specifically or replying to their tweets with posts that include links to low-credibility content. If a single popular account retweets a bot’s story, “it becomes kind of mainstream, and it can get a lot of visibility,” Menczer says.

These findings suggest that shutting down bot accounts could help curb the circulation of low-credibility content. Indeed, in a simulated version of Twitter, Menczer’s team found that weeding out the 10,000 accounts judged most likely to be bots could cut the number of retweets linking to shoddy information by about 70 percent.
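As a rough illustration of the bookkeeping behind that estimate (not the authors' actual simulation code), the sketch below drops the accounts with the highest bot scores from a hypothetical retweet log and measures how many retweets of low-credibility links disappear with them.

# An illustrative sketch of the simulated clean-up: remove the accounts most
# likely to be bots and count how many retweets of low-credibility links
# vanish along with them. All data here are made up.
import pandas as pd

# Hypothetical retweet log: each row is one retweet of a low-credibility link.
retweets = pd.DataFrame({
    "account_id": [1, 1, 2, 3, 3, 3, 4, 5, 5, 6],
})
# Hypothetical bot scores per account.
bot_scores = pd.Series({1: 0.95, 2: 0.10, 3: 0.90, 4: 0.20, 5: 0.85, 6: 0.15})

N_REMOVED = 3  # stand-in for the 10,000 accounts removed in the study
removed = bot_scores.nlargest(N_REMOVED).index

remaining = retweets[~retweets["account_id"].isin(removed)]
reduction = 1 - len(remaining) / len(retweets)
print(f"retweets of low-credibility links cut by {reduction:.0%}")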

Bot and human accounts are sometimes difficult to tell apart, so if social media platforms simply shut down suspicious accounts, “they’re going to get it wrong sometimes,” Subrahmanian says. Instead, Twitter could require accounts to complete a captcha test to prove they are not a robot before posting a message (SN: 3/17/07, p. 170).

Suppressing duplicitous bot accounts may help, but people also play a critical role in making misinformation go viral, says Sinan Aral, an expert on information diffusion in social networks at MIT not involved in the work. “We’re part of this problem, and being more discerning, being able to not retweet false information, that’s our responsibility,” he says.

Bots have used similar methods in an attempt to manipulate online political discussions beyond the 2016 U.S. election, as seen in another analysis of nearly 4 million Twitter messages posted in the weeks surrounding Catalonia’s bid for independence from Spain in October 2017. In that case, bots bombarded influential human users — both for and against independence — with inflammatory content meant to exacerbate the political divide, researchers report online November 20 in the Proceedings of the National Academy of Sciences.

These surveys help highlight the role of bots in spreading certain messages, says computer scientist Emilio Ferrara of the University of Southern California in Los Angeles and a coauthor of the PNAS study. But “more work is needed to understand whether such exposures may have affected individuals’ beliefs and political views, ultimately changing their voting preferences.”

Citations

C. Shao et al. The spread of low-credibility content by social bots. Nature Communications. Published online November 20, 2018. doi: 10.1038/s41467-018-06930-7.

M. Stella, E. Ferrara and M. De Domenico. Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences. Published online November 20, 2018. doi: 10.1073/pnas.1803470115.

Further Reading

M. Temming. People are bad at spotting fake news. Can computer programs do better? Science News. Vol. 194, August 4, 2018, p. 22.

M. Temming. On Twitter, the lure of fake news is stronger than the truth. Science News. Vol. 193, March 31, 2018, p. 14.

E. Engelhaupt. You’ve probably been tricked by fake news and don’t know it. Science News Online, December 4, 2016.

I. Peterson. Games theory. Science News. Vol. 171, March 17, 2007, p. 170.

