Burying potential conflicts of interest
An influential class of biomedical studies on drug efficacy often omits any clues to drug-company involvement
Some of the most cited — and thereby influential — journal articles in pharmaceutical science belong to a category known as meta-analyses: studies that compare findings from two or more related human trials of a drug or of treatments for a common disease. A new study now reports evidence that few such meta-analyses identify who funded the drug trials, even though such information could be useful in identifying potential conflicts of interest.
And the kicker: In many cases, authors of the meta-analyses had access to those funding data; they appeared in the papers that initially reported the trials’ results.
This is one of several troubling observations turned up by Michelle Roseman of McGill University in Montreal and her international team of colleagues. The group details its findings in the March 9 Journal of the American Medical Association.
The new analysis focused on randomized controlled trials — ones where patients had been randomly assigned to drug therapy or to control groups (where no treatment occurred). Several studies have shown that when drug companies fund such RCTs, the published findings usually report good news, even when an objective analysis of the data might turn up problems or potential health risks. Sometimes data are dropped that might raise red flags. Or the authors might organize their data in ways that minimize the statistical weight of potentially adverse findings.
And that’s why research journals increasingly have been asking — if not requiring — that authors identify all sources of funding for drug trials, who contributed to interpretation of the trial results, and who paid for each author’s labors.
But there’s a loophole when it comes to meta-analyses, Roseman’s group observes. It notes that a common policy by journals is to recommend that authors of meta-analyses report their funding source — “but does not address the reporting of conflicts of interest from included RCTs.” Clearly, Roseman’s group argues, that should change.
For their new analysis, Roseman and her coworkers scoured the scientific literature for meta-analyses of drug trials published between January and October 2009 in five top-tier general-medicine journals, in the top-cited journals of five specialty fields (oncology, cardiology, respiratory medicine, endocrinology and gastroenterology) or in the Cochrane Database of Systematic Reviews. The team then pored over the three most recently published qualifying papers from each source — 29 in all, which together included information from 509 RCTs.
Roughly two-thirds of the studies that reported the RCTs’ initial findings included data on their funding sources. Of these, 68.9 percent identified at least some drug-company funding. Only one-quarter of the RCT studies disclosed author funding sources, but among those that did, 68.9 percent reported at least one author had financial ties to the pharmaceutical industry.
Just two meta-analyses, however, addressed sources of financing for any of the RCTs. In one of these exceptions, the financial information was included — but only in a footnote to a table. In the other meta-analysis, the information was included at the back of the paper in a “characteristics of studies” table that followed the primary text and references.
These do not offer suitable defenses against charges that the meta-analyses buried potential conflicts of interest, the JAMA paper’s authors contend, because neither footnotes nor some back-of-the-paper table on study features “are typically reviewed by the average reader.”
Moreover, none of the meta-analyses — including these two — relayed any information on ties between industry and authors of the original RCTs.
Failure to report industry links to pharmaceutical trials “may lead readers to trust the conclusions of a meta-analysis when they potentially should not,” authors of the new JAMA paper argue. At a minimum, they explain, such omissions leave out clues to influence by one or more parties with a vested financial interest in an RCT’s outcome.
There are, of course, plenty of other problems associated with meta-analyses, not the least being that few compare well-matched populations. Some may compare huge studies in men with trials conducted solely in a few women or in a group of children. Others may include not only studies that focused on very sick people but also ones that enrolled newly diagnosed people with mild disease. Still others may have compared a trial in people having one disease against trials in people suffering from another disorder. So these analyses may suffer not so much from comparing the proverbial apples and oranges as from comparing apples and tomatoes — or even apples and chocolate-covered peanuts.
A further problem with meta-analyses: Many compare results from several small trials — few if any of which had enough participants to rule out the possibility that reported findings were due to chance. These analyses then imply that by simply pooling the populations they can achieve a virtual study group large enough to render any identified associations statistically sound.
There will always be problems with meta-analyses. So why not remove a potentially important unknown — data on who stands to gain from positive or negative findings?
By the way, authors of the new JAMA paper report their own sources of funding and potential conflicts of interest (one author, for instance, reports having been a consultant in lawsuits, including one that challenged a Canadian ban on direct-to-consumer advertising of prescription drugs). But the authors report “no sponsor involvement in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.”