Experimental results that don’t hold up to replication have caused consternation among scientists for years, especially in the life and social sciences (SN: 1/24/15, p. 20). In 2015 several research groups examining the issue reported on the magnitude of the irreproducibility problem. The news was not good.
Results from only 35 of 97 psychology experiments published in three major journals in 2008 could be replicated, researchers reported in August (SN: 10/3/15, p. 8). The tumor-shrinking ability of the cancer drug sunitinib was overestimated by 45 percent on average, an analysis published in October showed (SN: 11/14/15, p. 17). And a report in June found that, in the United States alone, an estimated $28 billion is spent annually on life sciences research that can’t be reproduced (SN: 7/11/15, p. 5).
There are many possible reasons for the problem, including pressure to publish, data omission and contamination of cell cultures (SN Online: 7/2/15; SN: 2/7/15, p. 22). Faulty statistics are another major source of irreproducibility, and several prominent scientific journals have now set guidelines for how statistical analyses should be conducted. Very large datasets, which have become common in genetics and other fields, present their own challenges: Different analytic methods can produce widely divergent results, and the sheer size of big data studies makes replication difficult.
Perfect reproductions might never be possible in biology and psychology, where variability within and between people, lab animals and cells, along with unknown variables, influences the results. But several groups, including Science Exchange and the Center for Open Science, are leading efforts to replicate psychology and cancer studies to pinpoint major sources of irreproducibility.
Although there is no consensus on how to solve the problem, suggestions include improving training for young scientists, describing methods more completely in published papers and making all data and reagents available for repeat experiments.