Study finds bias in peer review

Researchers have found evidence of bias when scientists review research submissions and the authors' names and affiliations are visible to the reviewers.

The study focused on some 67,000 research abstracts submitted to the American Heart Association (AHA) between 2000 and 2004. Experts in the field review the abstracts each year and deem about 30 percent of them acceptable for presentation at the organization's annual meeting.

Beginning in 2002, AHA changed its review process so that authors’ names and affiliations were stripped from abstracts before they were sent out for peer review. Joseph S. Ross of the Yale University School of Medicine and his colleagues now report that the change triggered major shifts in which categories of authors were most likely to have their abstracts accepted.

For instance, during 2000 and 2001, abstracts from U.S. authors were 80 percent more likely to be accepted than were those from non-U.S. authors. After blinding, the U.S.-based papers were only 41 percent more likely to be accepted, Ross' team reports in the April 12 Journal of the American Medical Association. Similarly, after blinding, the share of accepted abstracts from faculty at highly regarded U.S. research universities dropped by about 20 percent. For authors in government agencies, the acceptance rate fell by 30 percent.

Although the study focused on abstract acceptance at one organization's scientific meeting, there is no reason to assume that similar biases are absent from other meetings or other disciplines, the authors say.

Janet Raloff is the Editor, Digital of Science News Explores, a daily online magazine for middle school students. She started at Science News in 1977 as the environment and policy writer, specializing in toxicology. To her never-ending surprise, her daughter became a toxicologist.
