Blame bad incentives for bad science

How a ‘publish or perish’ attitude may be derailing the scientific enterprise

These days, a scientist has to publish a steady stream of research articles to be “successful.” But two new studies argue that that kind of pressure promotes sloppy science at the expense of careful work.

Most of us spend our careers trying to meet — and hopefully exceed — expectations. Scientists do too. But the requirements for success in a job in academic science don’t always line up with the best scientific methods. The net result? Bad science doesn’t just happen — it gets selected for.

What does it mean to be successful in science? A scientist gets a job and funding by publishing a lot of high-impact papers with novel findings. Those papers and findings beget awards and funding to do more science — and publish more papers. “The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right,” says Brian Nosek, a psychologist at the University of Virginia in Charlottesville.

This idea of success has become so ingrained that when scientists give talks, they are even introduced by the number of papers they have published or the amount of grant funding they have, says Marc Edwards, a civil engineer at Virginia Polytechnic Institute and State University in Blacksburg.

But rewarding researchers for the number of papers they publish results in a “natural selection” of sloppy science, new research shows. Equating scientific “success” with publication count promotes not just lazy science but also unethical science, another paper argues. Both articles conclude that it’s time for a culture shift. But with many scientific labs to fund and little money to do it, what would a new, better scientific enterprise look like?

As young scientists apply for tenure-track academic jobs, they may bring an application listing dozens of papers. Hiring committees often can’t read and evaluate them all. So they turn to numbers as shorthand: how many papers a candidate has published, how often those papers have been cited and whether they appeared in high-impact journals. “Real evaluation of scientific quality is as hard as doing the science in the first place,” Nosek says. “So, just like everyone else, scientists use heuristics to evaluate each other’s work when they don’t have time to dig into it for a complete evaluation.”

Too much reliance on the numbers means that scientists can — intentionally or not — game the system. They can publish novel results from low-effort experiments with low statistical power. Those novel results inflate publication counts, bring in grant funding and land the scientist a job. Ideally, other scientists would catch this careless behavior in peer review, before the studies are published, weeding out poorly done studies in favor of strong ones. But Paul Smaldino, a cognitive scientist at the University of California, Merced, suspected that when “meeting expectations” in a scientific job is measured by publication rate, bad science will always win out.

So Smaldino and his colleague Richard McElreath at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, decided to create a computer simulation of the scientific “ecosystem,” based on a model for natural selection in a biological ecosystem. Each “lab” in the simulation was characterized by how much effort it put into its research. Labs that best met the parameters for success survived and reproduced, spawning other labs that behaved in the same way. Labs that didn’t meet expectations “died out.”

The model allowed Smaldino and McElreath to manipulate the definitions of “success.” And when that success was defined as publishing a lot of novel findings, labs succeeded when they did science that was “low effort” — sloppy and probably irreproducible. Research groups doing high-effort, careful science didn’t publish enough. And they went the way of the dinosaurs.

Even putting an emphasis on replication — in which labs got half credit for double-checking the findings of other groups — couldn’t save the system. “That was a surprise for us,” Smaldino says. He assumed that if the low-effort labs got caught by failures to replicate, their success would go down. But scientists can’t replicate every single study, and in the simulation, the lazy labs still thrived. “The most successful are still going to low effort,” he explains, “because not everyone gets caught.” Smaldino and McElreath published their findings September 21 in Royal Society Open Science.
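
The flavor of that selection dynamic is easy to see in a toy simulation. The sketch below is a minimal illustration of the idea, not Smaldino and McElreath’s actual model: the payoffs, the mutation step and every name in it are assumptions invented for this example.

    import random

    # Toy illustration of publication-driven selection -- NOT the model
    # published in Royal Society Open Science; all numbers are invented.
    # Each lab is just an effort level between 0 (sloppy) and 1 (careful).

    GENERATIONS = 500
    N_LABS = 50

    def papers_per_generation(effort):
        # Assumption: careful work is slower, so high effort means
        # fewer submissions per generation.
        return max(1, round(10 * (1 - 0.8 * effort)))

    def score(effort):
        # Success is measured purely by publication count; a shaky
        # result counts just as much as a solid one.
        return papers_per_generation(effort)

    random.seed(1)
    labs = [random.random() for _ in range(N_LABS)]
    for _ in range(GENERATIONS):
        scores = [score(e) for e in labs]
        best = labs[scores.index(max(scores))]
        worst = scores.index(min(scores))
        # The least productive lab dies out; the most productive one
        # spawns a slightly mutated copy of itself.
        mutant = best + random.gauss(0, 0.05)
        labs[worst] = min(1.0, max(0.0, mutant))

    print(f"mean effort after selection: {sum(labs) / len(labs):.2f}")
    # Prints a value near 0: effort is selected against.

The half-credit replication scheme described above would amount to adding a second term to score — and as the paragraph notes, even that wasn’t enough in the published model, because low-effort labs don’t always get caught.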

“I think the results they get are probably reasonable,” says John Ioannidis, a methods researcher at Stanford University in California. “Once you have bad practices they can propagate and ruin the scientific process and become dominant. And I think there’s some truth to it, unfortunately.”

The publish-or-perish culture may be having negative consequences already, Edwards says. “I’ve … seen ethical researchers leave academia, not enter in the first place or become unethical,” he says. Scientists might slice their research findings thinner, trying to publish more findings with less data, breaking experiments down to the least publishable unit. That in itself is not unethical, but Edwards worries the high stakes place scientists on the edge of a slippery slope, from least publishable units to sliced-and-diced datasets. “With the wrong incentives you can make anyone behave unethically, and academia is no different.”

With a theoretical model of their own, Edwards and his colleague Siddhartha Roy show that, at some point, the current academic system could push a critical mass of scientists across the line into unethical behavior, corrupting the scientific enterprise and eroding the public’s trust. “If we ever reach this tipping point where our institutions become inherently corrupt, it will have devastating consequences for humanity,” Edwards says. “The fate of the world depends as never before on good trustworthy science to solve our problems. Will we be there?” Edwards and Roy report their model September 22 in Environmental Engineering Science.
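
What a tipping point like that could look like is easy to sketch. The toy dynamic below is an illustration in the spirit of their argument, not the model published in Environmental Engineering Science; the growth and enforcement terms, and all the numbers, are assumptions made up for this example.

    # Toy tipping-point dynamic -- an invented illustration, not
    # Edwards and Roy's published model.

    def step(u, dt=0.1):
        """u is the fraction of researchers behaving unethically (0..1).

        Assumptions: temptation compounds as cheating becomes normalized
        (it grows with u squared) and saturates as honest researchers
        run out; norms and enforcement apply a weak restoring pull.
        """
        temptation = 0.5 * u * u * (1 - u)
        enforcement = 0.04 * u
        return min(1.0, max(0.0, u + dt * (temptation - enforcement)))

    for u0 in (0.05, 0.10, 0.20):
        u = u0
        for _ in range(2000):
            u = step(u)
        print(f"start at {u0:.2f} -> settles at {u:.2f}")
    # Starting below the critical mass (about 0.09 here), cheating dies
    # out; starting above it, unethical behavior becomes the norm.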

To stay away from the slippery slope, scientists will need to change what scientific success looks like. Here’s the rub, though. Scientists are the primary people watching scientists work. When papers go through peer review at scientific journals, when grant proposals are examined by review committees and when a scientist is considered for an academic job, it’s other scientists who are guarding those gates to scientific success. A single scientist might be publishing papers, peer-reviewing other people’s papers, submitting grants, serving on review committees for other people’s grants, editing a journal, applying for a job and serving on a hiring committee — all at the same time. And so the standards for scientific integrity, for rigorous methods, do not reside with the institutions or the funders or the journals. Those standards are within the scientists themselves. The inmates really do run the scientific asylum.

This is not an inherently bad thing. Science needs people with appropriate expertise to read the highly specialized stuff. But it does mean that a movement for culture change needs to come from within the scientific enterprise itself. “This is more likely to happen if you have a grassroots movement where lots of scientists are convinced and are used to performing research in a given way, leading to more reliable results,” Ioannidis says.

What produces more reliable research, though, still requires … research. “I think these are questions that could be addressed with scientific studies,” Ioannidis says. “This is where I’m interested in taking the research, to get studies that are telling us to [do science] this way, [or] this type of leadership is better…. You can test policies.” Science needs more studies of science.  

The first step is admitting that problems exist in the current structure. “We’re bought into it — we invested our whole career into the game as it exists,” Edwards says. “We are taught to be cowards when it comes to addressing these issues, because the personal and professional costs of revealing these problems are so high.” It can be painful to see sloppy science exposed, especially when that science is performed by colleagues and friends. But Edwards says fixing the system will be worth the pain. “I don’t want to wake up someday and realize I’m in a culture akin to professional cycling, where you have to cheat to compete.”

The solution, Nosek says, is to add incentives for an excellent research process, regardless of outcome. Scientists need to be rewarded, funded and promoted for careful, thorough research — even if it doesn’t produce huge differences or groundbreaking results. Nosek points to ideas like registered reports, in which scientists submit their experimental plans and methods to a journal for review before the work is done, and the journal commits to publishing the resulting paper — whether or not the research produces noteworthy results.

Despite his results, Smaldino is optimistic that incentives can change, allowing the best science to rise to the top. “I think science is great,” he says. “I think in general scientists aren’t bad scheming people.” The dire predictions of the models don’t have to come to pass. “This is not a condemnation of science,” Smaldino says. “I love science — there’s no other way to learn a lot of things that are important to learn about the world. But the science we do can always be better.” 

Bethany was previously the staff writer at Science News for Students. She has a Ph.D. in physiology and pharmacology from Wake Forest University School of Medicine.
