Web edition: September 13, 2009
VANCOUVER, B.C. — On September 12, researchers reported uncovering a type of latent epidemic amnesia among certain biomedical scientists. The at-risk population: researchers testing new therapies on large groups of people. The chief symptom: Affected researchers write up their findings but neglect to put them into context by mentioning earlier human trials on the same topic.
Implications: Studies exhibiting this citation amnesia (a term apparently coined decades ago by Robert Merton of Columbia University) risk diminishing the apparent “weight of evidence” that’s already accumulated on how good, bad or limited a therapy is.
If amnesic authors were truly unaware of earlier work by others, they may unnecessarily have spent funds — and exposed human subjects to potentially risky therapies — in the pursuit of “establishing” what ostensibly had already been known. If these authors were merely feigning amnesia, they could be fostering the impression that they’re trailblazers when in fact they are anything but.
“Clinical trials [meaning those involving people] should not be started or interpreted without consideration of prior trials that addressed the same or similar questions,” maintain Karen Robinson and Steven Goodman of Johns Hopkins University. At the International Congress on Peer Review and Biomedical Publication, here in Vancouver, they described their initial foray into systematically assessing the extent to which published reports of clinical trials have been citing relevant prior trials.
The pair searched an online citation database known as the Web of Science for meta-analyses available in 2004 that had investigated groups of clinical trials on a common topic. It might have been a dermatology treatment, a heart-surgery procedure, or even a psychiatric therapy. In all, they turned up 227 such meta-analyses, which together covered 1,523 separate clinical trials in 19 different fields.
After tracking down the original papers describing the trials mentioned in the meta-analyses, Robinson quantified the extent to which each trial had cited earlier ones and coined a term for the resulting figure: the Prior Research Citation Index, or PRCI.
To be included in the Hopkins survey, each new-ish trial had to have followed at least three earlier ones. A few trials had been preceded by as many as 58.
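The index as described can be sketched in a few lines of code. This is an illustrative reconstruction only, under my own assumptions: the `Trial` structure, its field names, and the function below are not Robinson and Goodman's actual analysis code.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- the data structure and names here are
# assumptions for this article, not the authors' actual analysis code.

@dataclass
class Trial:
    name: str
    year: int
    cites: set = field(default_factory=set)  # names of trials this report cites

def prci(trial, trials_on_same_question):
    """Prior Research Citation Index: the share of earlier trials on the
    same question that a given trial's report actually cites."""
    priors = {t.name for t in trials_on_same_question if t.year < trial.year}
    if len(priors) < 3:
        return None  # below the survey's inclusion threshold of three priors
    return len(trial.cites & priors) / len(priors)
```

Under this reading, a report that cites one of five available priors scores 0.2, which is roughly the 21 percent average Robinson describes below.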
Perhaps the most disturbing stat to emerge from this analysis: Of 1,101 papers for which there had been at least five priors available to cite, 46 percent acknowledged the existence of no more than one previous trial. Many papers in this group actually cited none of the available priors. “On average,” Robinson says, “the trials were reporting about 21 percent of the prior trials looking at the same question.”
Some participants at the meeting wanted to know whether a prior cite was more likely to be for some systematic review of research in its area. The answer: Not really.
Perhaps the studies preferentially pointed to the biggest prior trials, someone said — ones that might be statistically strongest. Robinson had analyzed this and yes, she acknowledged, the bigger trials were slightly more likely to be cited. To probe the significance of this, she quantified the share of earlier trial participants those citations accounted for. Analyzed that way, “we found that about 33 percent of the study participants in prior trials were represented,” she said. Overall, that looks slightly better. “But it means that 67 percent of earlier participants were still not represented.”
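That participant-weighted check can be sketched the same way. Again, this is a hypothetical reconstruction for illustration; the function and data layout are my assumptions, not the authors' code.

```python
# Illustrative sketch: weighting cited priors by enrollment rather than
# counting trials. The names and layout are assumptions, not the authors' code.

def participant_coverage(cited_names, prior_trials):
    """Share of participants enrolled in prior trials that belong to
    trials the new report cites. prior_trials is a list of
    (name, enrollment) pairs."""
    total = sum(n for _, n in prior_trials)
    covered = sum(n for name, n in prior_trials if name in cited_names)
    return covered / total if total else None

# One large cited trial can cover most participants even when
# most prior trials go uncited:
priors = [("big", 900), ("a", 25), ("b", 25), ("c", 25), ("d", 25)]
participant_coverage({"big"}, priors)  # 0.9, though 4 of 5 trials are uncited
```

This illustrates why the participant-weighted figure (33 percent) can come out higher than the trial-weighted one (21 percent): citing a few large trials covers many participants.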
Some journal editors participating at today’s meeting noted that they looked for novelty in a submitted manuscript. And some of the trial reports that the Hopkins researchers investigated claimed novelty — that they were the first to investigate something. Among the 15 papers with the highest PRCI, one claimed to be first; among the 15 with the lowest PRCI, four made that claim. But of course, Robinson notes, “none of these were the first.”
She also turned up what I would call a bandwagon effect: once a trial had been cited in somebody’s bibliography, the odds of its being cited again doubled.
At least one scientist in attendance at the meeting posited a benign explanation for the seemingly rampant citation amnesia that had been quantified. He’d found that journal editors have been imposing very tight page constraints on the length of a published paper. To make some of his manuscripts fit, he said he’d had to choose between explaining new data or including a full list of citations to related work. He chose to ditch some of those cites.
Garfield, E. 1991. Bibliographic Negligence: A Serious Transgression. The Scientist 5(November 25):14.
Robinson, K.A. and S.N. Goodman. 2009. Citation of Prior Research in Reports of Clinical Trials. Paper presented at International Congress on Peer Review and Biomedical Publication; Vancouver, British Columbia (Sept. 12).
Robinson, K.A. and S.N. Goodman. 2011. A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials. Annals of Internal Medicine 154(January 4):51.