Yesterday, I took up one of the primary issues raised in a new paper that was published in the Journal of the American Medical Association today: what name to use for a prescription drug mentioned in a news story written for the general public. Today I address the second, more complicated issue — whether it’s always imperative to cite who paid for a study if those funds came from the drug’s maker.

Michael Hochman of Cambridge Hospital in the Boston suburbs and his colleagues imply that disclosing such industry funding is essential. As they put it: “An increasingly recognized source of commercial bias in medical research is the funding of studies by companies with a financial interest in the results.” So, they argue, news stories may help the public and physicians alike interpret the reported findings of a new drug trial by identifying whether the product’s maker paid for the trial.

In fact, many human trials are funded by pharmaceutical companies. And that makes sense because 1) they’re the parties most interested in the results, 2) they’ll want to know of any newly emerging benefits that may allow them to market their products more widely or risks that may need to be cited on warning labels, 3) they have more money available than most academic institutions do to finance long and costly human trials and 4) they can provide their drugs at virtually no cost to the investigators — and in customized doses, where needed.

We all recognize that a drug company would like its product to be the next aspirin or penicillin — a medicine that proves widely useful for decades if not a century. So we appreciate intuitively why there could be a conflict of interest in their interpretation of data from a human trial. And the only trials they might have much influence over are those they fund themselves.

But pharmaceutical companies appreciate the impact that even the appearance of a potential conflict can have on the believability of a trial’s findings. That’s why many companies recruit experienced outside researchers to design a careful trial that will yield statistically significant findings. Then they may give those scientists or clinicians a pile of money and step back to let them uncover whatever they can, good or bad.

When a company’s influence doesn’t extend much beyond funding a study, the value of that study should be judged largely on its design and execution.

In other words, it becomes not so much who paid for a study as whether:

- it was designed to find real benefits or risks, should they occur with long-term use,

- it yielded statistically significant findings,

- trial data are publicly available for others to review, and

- those who conducted the trial have been free to interpret and publish findings — good or bad — as they saw fit, and in a prominent journal.

Don’t get me wrong: I think the funding source should be reported by every journal that publishes a study. I also think a reporter should look at who funded the study and perhaps talk with the authors about any role the company had in shaping how the new study was designed and how its data were presented. It may even behoove the reporter to get comments from both the researchers and the drug company about the firm’s influence — and include them in the story.

But that’s not a given.

In some cases, drawing attention to a company’s link to a study may render its findings unduly suspect when in fact the trial’s design, execution and interpretation were flawless. As our editor Tom Siegfried puts it, we reporters “have to use judgment about what’s relevant — and the quality of the research.” To his mind, the strength of the statistics employed to interpret the data tends to be more pivotal to a study’s true value than who financed the research.

Donald Kennedy, president emeritus of Stanford University and a past editor of Science magazine, spoke to these issues in a recent Bulletin of the American Academy piece, “Science, Policy, and the Media.”

He notes an incident in which a science writer covering stem cells for Washington Monthly contrasted the views of two researchers on the advantages of working with adult stem cells versus those derived from embryos. The writer then noted the financial ties each had to biotech companies and implied that their opinions might trace to the money they could earn from choosing one potentially commercial approach over the other. And that’s certainly a possibility.

However, Kennedy argues, this author “ignores evidence of a stark difference in competence” between the two scientists. One had “published numerous articles in top-tier peer-reviewed journals, and is widely regarded as an innovative leader in cell biology. He is a member of the National Academy of Science . . . and a recipient of a number of prizes and awards.” The other scientist, by contrast, “has no peer-reviewed publications; his website refers to a letter in Science, which was unreviewed and soon followed by a letter from three distinguished scientists contesting nearly every claim [the letter’s author] had made.”

Kennedy suggests the reporter would have been wise not only to follow the money but also to weigh the respective scientists’ credentials.

The point I’m trying to make is that our goal as reporters is to inform responsibly. And that usually requires some finesse and nuance. Simply counting the number of times a publication does or doesn’t identify a study’s funding source may be a red herring. Did the story communicate the science — its strengths and flaws — accurately and clearly? Did it offer some comment from others in the field as a form of perspective?

These questions are far harder to tally. They may take some judgment calls. And that may especially be true for quick news written under tight deadline pressures, which may reflect the majority of stories surveyed for the new JAMA study.

If someone did mention who funded the study, did that make the story better than one that didn’t? That’s what I would have liked the new study to probe. I’m not sure what the outcome would have been. And I don’t think the authors should be sure either — until they’ve done the tough analysis.

Bottom line: Identify not what’s easiest to quantify but what matters most to the quality of medical writing. Then judge us on that.

Janet Raloff is the Editor, Digital, of Science News Explores, a daily online magazine for middle school students. She started at Science News in 1977 as the environment and policy writer, specializing in toxicology. To her never-ending surprise, her daughter became a toxicologist.
