Citation inflation

The gold standard for assessing journal quality – the impact factor – is proving vulnerable to subtle biases

Many journals – and the authors who publish their novel data and analyses in them – rely on “impact factors” as a gauge of the importance and prestige of their work. These ratios quantify how often, on average, the papers a journal published over the preceding two years were cited by peer-reviewed studies in the current year. And there’s been a presumption that the more a journal’s papers have been cited, the more important they must have been. However, a new analysis turns up subtle ways that journals can game the system to artificially inflate that impact factor.

Because the authors of the new study can’t read the intent of a journal’s editors, they can’t identify whether any particular publication tinkered with its policies to deliberately foster that inflation – or whether changes made for other purposes merely had that ancillary effect. What they can say is that many journals’ impact factors no longer accurately reflect what this yardstick was designed to measure.

An impact factor of 2 would indicate that, on average, each paper published in the preceding two years had been cited twice in major peer-reviewed journals. To investigate claims of impact inflation, Bryan Neff of the University of Western Ontario, in London, and Julian Olden of the University of Washington, in Seattle, looked at all 70 peer-reviewed ecology journals that had consistently maintained an impact factor of at least 1 over the preceding five years. They used the Web of Science, one of the three primary journal-indexing sources, to download citation records for all papers published in those 70 journals between 1998 and 2007.
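
To make that arithmetic concrete, here is a minimal sketch of the calculation. The journal and citation counts are invented for illustration; they are not data from the study.

```python
# Minimal sketch of the impact-factor calculation. All numbers are
# invented for illustration, not drawn from Neff and Olden's data.

def impact_factor(citations_this_year: int, papers_prior_two_years: int) -> float:
    """Citations received this year to papers the journal published in
    the preceding two years, divided by the number of those papers."""
    return citations_this_year / papers_prior_two_years

# A journal that published 150 papers over two years and whose papers
# drew 300 citations the following year earns an impact factor of 2.0.
print(impact_factor(citations_this_year=300, papers_prior_two_years=150))
```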

After excluding 14 outlier articles for their excessive citations (i.e., those “falling more than seven standard deviations from the regression line”), Neff and Olden focused their study on the remaining 72,298 papers. Over the 10-year period, published works cited from 8 to 330 other papers, with an average of 52. But that number has not been static. On average, they found, “papers have cited 0.67 additional papers each year.” Moreover, about 14 percent of citations, based on a random sampling of 100 publications, were to papers published in the preceding two years.
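
A rough sketch of how such an outlier screen might work appears below. Only the seven-standard-deviation threshold comes from the study; the simulated data and the rest of the approach are assumptions.

```python
# Assumed outlier screen: regress each paper's reference count on its
# publication year, then drop papers whose residual falls more than
# seven standard deviations from the regression line. Only the threshold
# is from the study; the data here are simulated.
import numpy as np

rng = np.random.default_rng(seed=42)
years = rng.integers(1998, 2008, size=1000)        # publication years
refs = 50 + 0.67 * (years - 1998) + rng.normal(0, 15, size=1000)

slope, intercept = np.polyfit(years, refs, deg=1)  # linear regression
residuals = refs - (slope * years + intercept)
keep = np.abs(residuals) <= 7 * residuals.std()    # 7-sigma screen

print(f"kept {keep.sum()} of {keep.size} papers")
```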

So newer papers tend to cite more publications – and proportionately more recent publications – than did older journal articles, Neff and Olden report in the June BioScience. Together, these factors contributed most to impact inflation over the surveyed decade, they say.

The duo also looked at whether journals published review papers, which tend to be more heavily cited down the road than do single-study papers. “We found that there were strong positive relationships between the proportion of review papers published and the impact factor of the journal,” they say, “as well as the change in impact factor over the past five years.”
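
For a sense of what such a relationship looks like in practice, here is a hypothetical correlation check. Every number below is invented; the study reports only that the relationship was strongly positive.

```python
# Hypothetical illustration of the review-paper effect: the Pearson
# correlation between a journal's share of review papers and its impact
# factor. All numbers here are invented for illustration.
from scipy.stats import pearsonr

review_share = [0.00, 0.05, 0.10, 0.20, 0.35, 0.50]  # fraction of reviews
impact = [1.2, 1.5, 2.1, 2.8, 3.9, 5.2]              # impact factors

r, p = pearsonr(review_share, impact)
print(f"r = {r:.2f}, p = {p:.3f}")  # strongly positive for these data
```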

Such a relationship suggests that impact factors “should control for the proportion of reviews published,” Neff and Olden argue, or that the factors should be computed separately for journals that publish reviews and those that do not. In fact, they contend that too many authors take the lazy route, citing review papers rather than looking up and citing the original studies a review describes. Citing this primary literature instead would “also help to alleviate the disproportionate effect reviews appear to have on impact factors,” they maintain.

Some editors may also encourage authors to cite recent papers that appeared in their publication, which would help boost impact factors, observes Alan Wilson of Auburn University in Alabama.

But one potentially important inflationary factor rests outside those editors’ hands: the growing proliferation of new journals. Impact calculations implicitly assume that the pool of journals from which citations emerge remains constant over time. In fact, that pool is increasing. And as the pool of potential citers expands, so does the likelihood that any given journal’s papers will be mentioned.

Sixty-one of the 70 journals that Neff and Olden studied reported an increase in impact factor over the 10 years studied. However, the researchers found, only one-third of the total beat their calculated inflation rate of 0.23 points per year. Four percent matched the inflation rate, and 62 percent fell below it. Journals whose impact factors climbed faster than the rate of inflation often were those with the highest starting impact factors. It’s a case of the rich getting richer fastest.
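
Put another way, a journal’s nominal impact-factor growth signals a real gain only after the field-wide inflation is subtracted out. A toy comparison follows; the 0.23 rate is from the study, but the per-journal trends are invented.

```python
# Toy comparison of journals' impact-factor trends against the
# field-wide inflation rate of 0.23 points per year reported by Neff
# and Olden. The per-journal slopes are invented for illustration.
INFLATION = 0.23  # impact-factor points per year

journal_trends = {"A": 0.41, "B": 0.23, "C": 0.10, "D": -0.02}

for name, slope in journal_trends.items():
    real_growth = slope - INFLATION  # growth after discounting inflation
    verdict = ("beats" if real_growth > 0 else
               "matches" if real_growth == 0 else "trails")
    print(f"Journal {name}: {slope:+.2f}/yr nominal, {verdict} inflation")
```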

Three years ago, Wilson investigated the inflation issue and found a link between journal financing and impact factor. Nonprofit journals tended to be the oldest and to have the lowest impact factors. For-profit journals, or those published by nonprofit organizations but via a for-profit publisher, had higher impact factors.

Subscriptions to for-profit journals tend to cost substantially more than those to journals published by nonprofits. And because authors try to publish in the highest-impact journal that will take their papers, pressure can develop to submit manuscripts to the more expensive journals, he said. Moreover, libraries across the nation have had to cut back on their journal budgets. Here too, the impact factor may be developing into an increasingly distorted metric for deciding how to get the biggest bang for their subscription bucks.

Neff and Olden cite other recent papers showing similar indications of impact-factor inflation in other fields, although their new rate for ecology journals seems to lead the pack. Clearly, the two argue, there must be a more equitable way to measure publishing value than this simple – and easy to manipulate – metric.

Janet Raloff is the Editor, Digital of Science News Explores, a daily online magazine for middle school students. She started at Science News in 1977 as the environment and policy writer, specializing in toxicology. To her never-ending surprise, her daughter became a toxicologist.
