CHICAGO — Science’s power to benefit society depends on its reliability. You’re supposed to be able to take scientific results to the bank. Or to the clinic.
But more and more these days you hear reports about how efforts to reproduce experimental findings frequently fail. Published scientific reports often aren’t the reliable guide to knowledge that they have customarily been taken to be. Throughout the academic scientific world, this “reproducibility problem” is approaching the status of scandal, eliciting widespread concern and commentary. But it is far from merely of academic interest.
In fact, while leading an academic lab for two decades, Rita Balice-Gordon never thought much about the reproducibility problem. But now that she works for Pfizer Neuroscience, the reliability of basic science published in the research literature has become one of her chief concerns.
“Lack of reproducibility has a knock-on effect on drug discovery efforts,” she said October 17 at the annual meeting of the Society for Neuroscience. “I’ve been struck by how little of the literature can be reproduced in a robust and rapid way to enable that discovery process. And I think it’s a concern that is real and more than a perception.”
In her talk at the neuroscience meeting, Balice-Gordon surveyed the many reasons that have been offered to explain reproducibility failures. There’s poor experimental design, errors in methodology, mistakes in execution and errors in the analysis of the data. Many studies are too small to reliably detect the effect they seek, a problem known as “low statistical power.” Junior researchers often don’t get adequate mentoring, and oversight of experiments is often insufficient. Biases (conscious or subconscious) can lead to selective reporting of results. And experimental failures are filed away, either never submitted for publication or rejected by journals that just want successes.
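The “low statistical power” problem is easy to see in a quick simulation (this sketch is not from Balice-Gordon’s talk; the effect size, sample sizes and the ≈2.0 significance threshold are illustrative assumptions). It repeatedly runs a two-group experiment on a real effect and counts how often a small study actually detects it:

```python
import random
import statistics

random.seed(42)

def detects_effect(n, true_diff=0.5, sd=1.0, crit=2.0):
    """Simulate one two-group experiment with n subjects per group
    and report whether a simple t-like test flags the (genuine)
    effect. crit ~ 2.0 roughly approximates a p < 0.05 cutoff."""
    control = [random.gauss(0.0, sd) for _ in range(n)]
    treated = [random.gauss(true_diff, sd) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n
          + statistics.variance(treated) / n) ** 0.5
    return abs(diff / se) > crit

def power(n, trials=2000):
    """Fraction of repeated experiments that detect the true effect."""
    return sum(detects_effect(n) for _ in range(trials)) / trials

# A genuine effect of half a standard deviation:
print(f"n=10 per group:  power ~ {power(10):.2f}")   # well below 50-50
print(f"n=100 per group: power ~ {power(100):.2f}")  # detected nearly every time
```

With only 10 subjects per group, most runs miss a perfectly real effect, which is exactly why underpowered studies produce results that then fail to replicate.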
“Studies that work are published,” Balice-Gordon said. “And studies that don’t, one never hears about again.”
Consequently, the scientific literature is skewed toward positive but frequently wrong results, while the experiments that would reveal such wrongness remain unknown. All these factors conspire to produce a library of published studies that can’t be relied on, which in turn can lead other researchers on expensive wild-goose chases.
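The skew itself can be simulated (again, a hypothetical sketch, not anything presented at the meeting; the sample size and threshold are assumptions). Here thousands of labs test a drug with no real effect, and journals “publish” only the runs that happen to cross a significance bar:

```python
import random
import statistics

random.seed(7)

def run_study(n=20, true_effect=0.0):
    """One lab tests a drug with zero real effect; returns the
    measured effect and whether it crossed a p < 0.05-like bar."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n
          + statistics.variance(treated) / n) ** 0.5
    return diff, abs(diff / se) > 2.0

studies = [run_study() for _ in range(5000)]
published = [d for d, sig in studies if sig]  # journals keep only "wins"

print(f"{len(published)} of {len(studies)} null studies got 'published'")
print(f"mean effect across all studies: "
      f"{statistics.mean(d for d, _ in studies):+.3f}")
print(f"mean |effect| in the published literature: "
      f"{statistics.mean(abs(d) for d in published):.3f}")
```

Across all 5,000 studies the average effect is essentially zero, as it should be. But the published subset, by construction, shows a sizable average effect that does not exist, and the file-drawered failures that would expose it are invisible.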
“There’s a growing appreciation in all sectors of science … of the tremendous cost of irreproducible research,” Balice-Gordon noted.
It’s not so simple, though, to figure out what to do about this situation. Balice-Gordon identified several subtleties that should be kept in mind as potential solutions are devised.
For one thing, failure to reproduce a study’s results doesn’t necessarily mean those results were wrong. It might have been the replication attempt that failed.
“The replication efforts themselves are prone to all of these issues,” Balice-Gordon pointed out. “Mistakes or inadequate expertise in replicating an experiment can result in someone throwing up their hands and saying ‘I can’t reproduce this finding.’” But the failure to replicate might “simply mean that the effect that one is trying to replicate depends on experimental details that are not readily recognized.”
Another key point to remember is that precise replication of a specific experiment’s result is not the most important goal. What matters is not mere replication of one finding, but identifying robust core conclusions.
“It’s not just about reproducing a particular result and getting the same … significance that the original lab did,” she said. It’s also about getting results that hold up under a variety of circumstances so as to be useful for further research.
“Effects that aren’t robust aren’t likely to be readily replicated, and thus are unlikely to be foundational for the next set of experiments … and in a drug discovery context are unlikely to be translated across different models,” Balice-Gordon said. And that means research based on such results is “unlikely to be successful in efforts to translate basic biology into the clinic.”
Efforts to address the reproducibility problem are clearly essential to maintaining science’s value to society. Scientific journals, funding agencies and others have already begun to examine the incentives that encourage rapid and prolific publication more than accurate and reproducible results. Many commenters have argued that scientists should be more willing to share their data with researchers attempting replications. And some experts advise that proposed experiments should routinely be described in advance so researchers can’t retroactively alter their methods (in order to fish for a publishable finding).
All those and other steps seem like good ideas. But Balice-Gordon warns that there could also be dangers in overaggressive efforts to combat irreproducibility.
“The flip side of the problem has begun to be appreciated,” she said. “Efforts to enhance reproducibility, while very well intentioned, themselves carry risks — including bogging down discovery, risks to scientific reputations and tinkering with a culture that is, for the most part, viewed as working.”
But is it really working? Sure, science still has many successes. If you can land a robot on a comet or take close-up snapshots of Pluto, you’ve got something going for you. But in the realm of medicine, where basic science ought to be providing a platform for seeking clinical advances, science has had less than stellar success lately. Modern understanding of molecular biology and genetics is immense. But progress in turning that knowledge into cures for modern diseases has been slow.
Science’s methods have, historically, produced fountains of useful knowledge fueling technological innovation and a general elevation of the state of civilization, including many benefits to human health and welfare. But the corruption and misuse of those methods in recent years is impairing science’s ability to serve society further.
“These topics have been widely acknowledged but not really dealt with in any systematic way,” Balice-Gordon said. It’s time now for a systematic strategy for dealing with these issues, and that would mean more than just talking about them.
Follow me on Twitter: @tom_siegfried