In science, popularity breeds unreliability

Newsworthiness does not mean worthy science, especially in hot research fields

Coverage in the news media doesn’t necessarily indicate the quality of scientific research. And the best scientific research doesn’t always get reported in the media.

Popularity isn’t everything it’s puffed up to be.

“Avoid popularity,” William Penn advised. “It has many snares, and no real benefit.”

It’s a sentiment that scientists should appreciate.

In recent decades, pressure to publish and seek publicity has distorted the scientific enterprise. Scientists need grants. Getting published raises the likelihood of getting those grants. Journals want to publish papers that will be popular and get a lot of media attention. So scientists pursue research that will be popular enough to get published.

Thirty years ago, you could count on one hand the number of medical and scientific journals seeking media coverage by sending journalists advance notice of articles to be published. Nowadays you couldn’t count them unless you had nothing else to do for a week or two. Science and medical journalists feed from an overflowing trough. Not to mention bloggers. So publicity and popularity for science are easier to achieve than ever before.

Now for the snare. As with novels and movies, the popular stuff isn’t necessarily the best stuff. News media coverage of science and medicine does not generally correlate with scientific quality.

In a study published this year in PLOS ONE, Senthil Selvaraj of Brigham and Women’s Hospital and collaborators selected 75 medical journal articles covered in five high-circulation newspapers and compared them with 75 papers selected from the five journals with the highest citation rates. The researchers found that newspapers preferentially covered observational studies rather than the supposedly more rigorous randomized controlled clinical trials. Rating study methodology on a 1 to 5 scale, the researchers found that 40 percent of the journal papers they examined scored a 1 (best); among the studies reported in newspapers, only 17 percent got the top rating.

OK, it’s a pretty flimsy study. All it really shows, if anything, is that newspapers have different publication criteria than medical journals. As they should. Newsworthiness and scienceworthiness are not the same thing. But there is a point somewhere in here that this study almost illustrates — that the popular science and medicine reports, those that get media coverage, may not be the most reliable sources of scientific evidence.

In fact, as I’ve written elsewhere, many of the criteria that confer newsworthiness on a scientific report tend to skew coverage toward results that are unlikely to stand up to future scrutiny. Journalists like to write stories about findings that are “contrary to previous belief,” for instance. But such findings are often bogus, at least in cases where the “previous belief” was based on actual sound scientific evidence. There is usually no reason why a more recent report should be taken as more likely to be true than previous reports.

Journalists also prefer to write about the “first report” of a finding. First reports are notoriously unreliable. Effects in first reports are commonly exaggerated or even wrong. In science, the second and subsequent reports confirming a finding are the keys for advancing knowledge. But journalists typically ignore second reports, as they are not, by definition, news.

A possible exception may be granted, though, for subsequent studies in hot research fields, where research is of high public interest and many labs are racing to make dramatic discoveries. Here is where popularity seriously ensnares science in a statistical trap. With many labs working on a problem, more statistical flukes will turn up. Those flukes, supposedly showing a positive result, are rushed to publication. Searches that don’t find anything typically either don’t get submitted for publication or are rejected. And so journalists writing about hot fields are given a statistically skewed sample of mostly wrong results to begin with.
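
To see how that statistical trap works, here is a minimal back-of-the-envelope simulation in Python. It is my own illustration, not drawn from the article or the studies it cites, and every number in it (the count of competing labs, how often a tested hypothesis is actually true, the false-positive rate, the statistical power) is an assumption chosen only to make the mechanism visible.

```python
import random

# Illustrative sketch of publication bias in a "hot" field.
# All parameter values below are assumptions for demonstration only.
random.seed(1)

N_LABS = 1000          # labs racing on the same hot question
P_TRUE_EFFECT = 0.10   # fraction of tested hypotheses that are actually real
ALPHA = 0.05           # chance of a false positive when there is no real effect
POWER = 0.80           # chance of detecting a real effect when one exists

published_true = 0
published_false = 0

for _ in range(N_LABS):
    effect_is_real = random.random() < P_TRUE_EFFECT
    found_positive = random.random() < (POWER if effect_is_real else ALPHA)
    if found_positive:  # negative results stay in the file drawer, unpublished
        if effect_is_real:
            published_true += 1
        else:
            published_false += 1

total_published = published_true + published_false
print(f"published positive results: {total_published}")
print(f"of which statistical flukes: {published_false} "
      f"({published_false / total_published:.0%})")
```

Under those assumed numbers, something like a third of the published “positive” findings turn out to be flukes, even though every individual lab used a conventional 5 percent significance threshold — which is the skewed sample that lands in journalists’ inboxes.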

This popularity problem for science exists internally, even when journalists don’t get involved. A study published in PLOS ONE in 2009 found, for instance, that studies identifying protein-protein interactions in yeast suffer from a peculiar popularity problem. The more popular the supposed interaction — measured by how often it was mentioned in the scientific literature — the more likely it turned out to be a mistake.

“We find that individual results on yeast protein interactions as published in the literature become less reliable with increasing popularity of the interacting proteins,” wrote Thomas Pfeiffer of Harvard University and Robert Hoffmann of MIT. Their analysis showed that the probability that a reported interaction would be confirmed by follow-up studies dropped by 50 percent for the more popular proteins.

In other words, popular research is less reliable. And that’s not a good sign in an era when scientists are driven to seek popularity in order to survive.

Follow me on Twitter: @tom_siegfried

Tom Siegfried is a contributing correspondent. He was editor in chief of Science News from 2007 to 2012 and managing editor from 2014 to 2017.
