There’s been a lot of talk about fake news running rampant online, but now there’s data to back up the discussion.
An analysis of more than 4.5 million tweets and retweets posted from 2006 to 2017 indicates that inaccurate news stories spread farther and faster on the social media platform than true stories. The research also suggests that people play a bigger role in sharing falsehoods than bots.
These findings, reported in the March 9 Science, could guide strategies for curbing misinformation on social media. Until now, most investigations into the spread of fake news have been anecdotal, says Filippo Menczer, an informatics and computer scientist at Indiana University Bloomington not involved in the work. “We didn’t have a really large-scale, systematic study evaluating the spread of misinformation,” he says.
To study rumormongering trends on Twitter, researchers examined about 126,000 tweet cascades — families of tweets composed of one original tweet and all the retweets born of that original post. All of those cascades centered on one of about 2,400 news stories that had been verified or debunked by at least one fact-checking organization.
Deb Roy, a media scientist at MIT, and colleagues investigated how far and fast each cascade spread. Discussions of false stories tended to start from fewer original tweets, but some of those retweet chains then reached tens of thousands of users, while true news stories never spread to more than about 1,600 people. True news stories also took about six times as long as false ones to reach 1,500 people. Overall, fake news was about 70 percent more likely to be retweeted than real news.
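The cascade measurements described above boil down to two questions per cascade: how many users did it reach, and how long did it take to reach a given audience size? A minimal sketch of those metrics, using a hypothetical `Cascade` class and made-up timestamps (not the study's actual data or code):

```python
from dataclasses import dataclass, field

@dataclass
class Cascade:
    """One original tweet plus all retweets descended from it."""
    origin_time: float                       # time of the original post, in seconds
    retweet_times: list = field(default_factory=list)  # seconds after the original

    def size(self) -> int:
        # Total users reached: the original poster plus every retweeter.
        return 1 + len(self.retweet_times)

    def time_to_reach(self, n: int):
        # Seconds until the cascade has reached n users, or None if it never does.
        # The original poster counts as user 1, so the (n-1)th retweet closes the gap.
        if self.size() < n:
            return None
        return sorted(self.retweet_times)[n - 2]

# A toy cascade: one original tweet retweeted three times.
c = Cascade(origin_time=0.0, retweet_times=[10.0, 5.0, 20.0])
```

Comparing `time_to_reach(1500)` for true versus false cascades is, in spirit, how a "six times as long to reach 1,500 people" figure can be computed.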
Roy and colleagues initially removed the activity of automated Twitter accounts called bots from the analysis. But when bot traffic was added back into the mix, the researchers found that these computer programs spread false and true news about equally. This finding indicates that humans, rather than bots, are primarily to blame for spreading fake news on the platform.
People may be more inclined to spread tall tales because these stories are perceived to be more novel, says study coauthor Soroush Vosoughi, a data scientist at MIT. Compared with true news stories, false ones tended to deviate more from the themes of tweets a user had been exposed to in the two months before retweeting the story. Tweet replies to false news stories also contained more words indicating surprise.
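A novelty measure of this kind compares a story's topic mix against the topic mixes of tweets the user recently saw. The study itself used topic models with information-theoretic distances; the sketch below substitutes a simpler cosine distance as an illustrative proxy, with hypothetical function names and toy topic-proportion vectors:

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity between two topic-proportion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def novelty(story_topics, recent_tweet_topics):
    """Average distance between a story's topic mix and the topic mixes of
    tweets the user saw in the preceding window: higher means more novel."""
    total = sum(cosine_distance(story_topics, t) for t in recent_tweet_topics)
    return total / len(recent_tweet_topics)

# A story about an unfamiliar topic scores as more novel than a familiar one.
familiar = novelty([1.0, 0.0], [[1.0, 0.0], [0.9, 0.1]])
unfamiliar = novelty([0.0, 1.0], [[1.0, 0.0], [0.9, 0.1]])
```

Under this proxy, `unfamiliar > familiar`, matching the intuition that false stories stood out against users' recent feeds.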
It’s not entirely clear what kinds of conversations these stories sparked among users, as the researchers didn’t inspect the full content of all the posts in the dataset. Some people who retweeted fake news posts may have added comments to debunk those stories. But Menczer says the analysis still provides a “very good first step” in understanding what kinds of posts grab the most attention.
The study could help guide strategies for fighting the spread of fake news, says Paul Resnick, a computational social scientist at the University of Michigan in Ann Arbor who was not involved in the work. For instance, the finding that humans are more liable than bots to retweet falsehoods may mean that social media platforms should focus on discouraging people from spreading rumors, rather than simply booting off misbehaving bots.
To help users identify true stories online, social media sites could label news pieces or media outlets with veracity scores — similar to how grocery stores and food producers offer nutrition facts, says study coauthor Sinan Aral, an expert on information diffusion in social networks at MIT. Platforms also could restrict accounts reputed to spread lies. It’s still unclear how successful such interventions might be, Aral says. “We’re barely starting to scratch the surface on the scientific evidence about false news, its consequences and its potential solutions.”