People are bad at spotting fake news. Can computer programs do better?
There’s just too much misinformation online for human fact-checkers to catch it all
Scrolling through a news feed often feels like playing Two Truths and a Lie.
Some falsehoods are easy to spot. Like reports that First Lady Melania Trump wanted an exorcist to cleanse the White House of Obama-era demons, or that an Ohio school principal was arrested for defecating in front of a student assembly. In other cases, fiction blends a little too well with fact. Was CNN really raided by the Federal Communications Commission? Did cops actually uncover a meth lab inside an Alabama Walmart? No and no. But anyone scrolling through a slew of stories could easily be fooled.
We live in a golden age of misinformation. On Twitter, falsehoods spread farther and faster than the truth (SN: 3/31/18, p. 14). In the run-up to the 2016 U.S. presidential election, the most popular bogus articles got more Facebook shares, reactions and comments than the top real news stories, according to a BuzzFeed News analysis.
Before the internet, “you could not have a person sitting in an attic and generating conspiracy theories at a mass scale,” says Luca de Alfaro, a computer scientist at the University of California, Santa Cruz. But with today’s social media, peddling lies is all too easy — whether those lies come from outfits like Disinfomedia, a company that has owned several false news websites, or a scrum of teenagers in Macedonia who raked in the cash by writing popular fake news during the 2016 election.