Scientific misconduct — including fraud, suspected fraud and plagiarism — is the reason behind most retractions of papers published in scientific journals, a new study shows.
Only 21.3 percent of retracted biomedical and life sciences papers were withdrawn because honest errors invalidated the findings, researchers report online October 1 in the Proceedings of the National Academy of Sciences.
Retraction notices often don’t explain why a study is being withdrawn, or they cover up the real reason for pulling a paper, says study coauthor Arturo Casadevall, a microbiologist at Albert Einstein College of Medicine in New York City and editor of the journal mBio.
To understand the scope of the problem, Casadevall and coauthors Ferric Fang and R. Grant Steen studied 2,047 retracted journal articles in the PubMed database, which references more than 25 million studies dating back to the 1940s. Of the retractions, 67.4 percent were due to scientific misconduct, the new study shows.
The study “throws into high relief some trends we suspected were true,” says Ivan Oransky, a journalist and cofounder of the blog Retraction Watch, which digs into the reasons behind retractions and calls out particularly opaque and unhelpful notices. The researchers drew on Retraction Watch reports, investigations from the U.S. Office of Research Integrity and other sources to ferret out the real reasons papers are retracted.
One of those trends is that retractions are on the rise, in part because publishers have started using software to detect plagiarism and duplicate publication, resulting in papers being pulled for those reasons starting in 2005. Plagiarism accounted for 9.8 percent and duplicate publication for 14.2 percent of retractions. Journals also use computer programs to spot alterations to images submitted with scientists’ manuscripts, but technology isn’t likely to prevent misconduct, says Casadevall. “The better the filter, the better the fraud.”
If there is good news, it is that fraud doesn’t appear to be widespread, Casadevall says. Just 38 research groups were responsible for 43.9 percent of retractions for fraud or suspected fraud. But the repeat offenders often published multiple studies, building upon fraudulent data or simply republishing data multiple times.
“This kind of bursts the bubble that science is self-correcting, because fraudulent data won’t be repeated,” says Fang, who is chief editor of Infection and Immunity.
The culture of science may be to blame for a recent increase in fraud cases: Journal publications are widely used to gauge a scientist’s potential and success. “Misconduct is a phenomenon similar to doping in sport: It is essentially about gaining an unfair advantage over competitors,” says Daniele Fanelli of the University of Edinburgh. But a rise in retractions doesn’t mean that fraud is also increasing, he says. “The fact that we went from zero retractions to 0.01 percent in a few decades is just an encouraging symptom of growing awareness of the problem.”
Pressure to publish in high-profile journals and bring in increasingly hard-to-get grant money breeds a climate ripe for wrongdoing. Such cut-throat competition is rife in countries fingered for fraud in the new study, including the United States, Germany, China and Japan, says Kalevi Korpela, a psychologist at the University of Tampere in Finland.
Fang and Casadevall say they worry that their study could be misused to erode public trust in science, but they argue that sweeping misconduct under the rug would be even more harmful.
Recent high-profile cases of fraud show that “when people make up stuff, it’s usually important,” Casadevall says. He cites the case of Andrew Wakefield, who published a study in 1998 in the Lancet linking the measles, mumps and rubella vaccine to autism and intestinal disorders. The study was repeatedly discredited and ultimately found to be fraudulent, yet it sparked an ongoing backlash against vaccination.