Behavioral research may overstate results

Analysis implicates ambiguous methods and publish-or-perish culture in the United States

Here’s a hard pill to swallow for practitioners of “soft” sciences: Behavioral studies statistically exaggerate findings more often than investigations of biological processes do, especially if U.S. scientists are involved, a new report finds.

The inflated results stem from there being little consensus about experimental methods and measures in behavioral research, combined with intense publish-or-perish pressure in the United States, say evolutionary biologist Daniele Fanelli of the University of Edinburgh and epidemiologist John Ioannidis of Stanford University. Without clear theories and standardized procedures, behavioral scientists have a lot of leeway to produce results that they expect to find, even if they’re not aware of doing so, the researchers conclude Aug. 26 in the Proceedings of the National Academy of Sciences.

“U.S. studies in our sample overestimated effects not because of a simple reluctance of researchers to publish nonsignificant findings, but because of how studies were conceived and carried out,” Fanelli says.

The new study appears as psychologists consider ways to clean up research practices (SN: 6/1/13, p. 26).

“Sadly, the general finding about U.S. science sounds rather plausible,” remarks psychologist Hal Pashler of the University of California, San Diego.

Fanelli and Ioannidis examined the primary findings of 1,174 studies that appeared in 82 recently published meta-analyses. A meta-analysis weighs and combines results from related studies to estimate the true effect in a set of reported findings (SN: 3/27/10, p. 26).
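To make the pooling idea concrete, a meta-analysis typically weights each study by its precision before combining results. The short Python sketch below shows a common inverse-variance (fixed-effect) approach; the numbers and the particular pooling method are illustrative assumptions, not details taken from Fanelli and Ioannidis' paper.

```python
# Minimal sketch of a fixed-effect (inverse-variance weighted) meta-analysis.
# The effect sizes and standard errors are made up for illustration only.
import math

# (effect estimate, standard error) from several hypothetical studies
studies = [(0.42, 0.15), (0.18, 0.10), (0.30, 0.20), (0.05, 0.08)]

weights = [1.0 / se**2 for _, se in studies]   # weight = 1 / variance
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f}, 95% CI ± {1.96 * pooled_se:.3f}")
```

Individual studies whose reported effects sit well above such a pooled estimate are the kind of overestimates the new analysis flagged.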

The researchers chose psychological and other behavioral studies that examined impulsive acts or other deeds that can be measured with different scales or instruments. Studies in genetics and several other nonbehavioral fields investigated unambiguous outcomes, such as death. Biobehavioral studies from neurology and a few other areas probed a combination of biological and behavioral effects.

The studies’ authors primarily came from the United States, Europe and Asia.

Of the three study types, individual behavioral studies were most likely to report effects greater than those calculated in associated meta-analyses. Behavioral studies with a lead author in the United States showed an especially strong tendency to find what researchers had predicted before performing the research.

Biobehavioral studies displayed a smaller “U.S. effect.” No such tendency characterized nonbehavioral investigations, in which findings differed from those of meta-analyses mostly due to the use of samples that were unrepresentative of populations being studied, the researchers say.

It’s doubtful that the U.S. effect reflects a superior ability of U.S. scientists to formulate correct hypotheses, the researchers add. Fanelli and Ioannidis accounted for differences in the hypotheses tested across fields, as well as for sample size, since larger samples make true effects easier to detect.

Bruce Bower has written about the behavioral sciences for Science News since 1984. He writes about psychology, anthropology, archaeology and mental health issues.
