Evidence-based medicine actually isn’t

In medical practice, the concept of evidence shares a lot with Saint Augustine’s understanding of time.

He understood time perfectly well, of course, until somebody asked him to explain it. Medical evidence is similar. Everybody thinks they know what evidence means, but defining what counts as evidence is about as easy as negotiating peace in the Middle East. As a result, demands for “evidence-based medicine” pose some serious practical problems. In fact, the label “evidence based” applied to medicine has been confused, abused and misused so much lately that some experts suggest that the evidence-based medicine movement is in a state of crisis.

As evidence for that sweeping generalization, I cite a recent paper by Trisha Greenhalgh (a physician and health care professor) and collaborators, titled “Evidence-based medicine: a movement in crisis?” The question mark notwithstanding, Greenhalgh and colleagues document several aspects of evidence-based medicine that clearly illustrate critical problems with its implementation.

For one thing, the “evidence based” brand has been co-opted by special interests (such as companies that make medicines) to further their commercial aims. Companies often define both the disease and its evidence-based treatment: female sexual arousal disorder (treat with sildenafil); male baldness (treat with finasteride); low bone density (treat with alendronate). It’s almost as if what counts as a disease (or a disease “risk factor”) depends on whether there is evidence for a drug to “treat” it.

“Evidence-based medicine has drifted in recent years from investigating and managing established disease to detecting and intervening in non-diseases,” Greenhalgh and colleagues wrote in BMJ in June.

Furthermore, the “evidence” favoring various treatments typically comes from trials in which companies decide which drugs to test, at what doses, on how many people. Often the statistical “evidence” in such studies establishes a benefit of little practical value; the supposed benefit, while perhaps real in a mathematical sense, is so slight as to be meaningless for real patients. In other cases, perfectly sound evidence about a particular disease is rendered irrelevant in patients afflicted with more than one disorder (patients who are commonly seen in medical practice, but typically excluded from the trials that produced the evidence).
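To see how a drug’s benefit can be statistically “real” yet practically negligible, consider a minimal sketch with entirely hypothetical numbers (they come from no trial that Greenhalgh and colleagues discuss): a two-arm trial large enough that lowering the rate of a bad outcome from 10 percent to 9.5 percent clears the conventional significance threshold.

```python
import math

# Hypothetical trial (illustration only, not from the BMJ paper):
# 50,000 patients per arm; 10.0% of controls have a bad outcome vs 9.5% on the drug.
n_control, n_drug = 50_000, 50_000
events_control, events_drug = 5_000, 4_750

p_control = events_control / n_control            # 0.100
p_drug = events_drug / n_drug                     # 0.095
p_pooled = (events_control + events_drug) / (n_control + n_drug)

# Standard two-proportion z-test
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_control + 1 / n_drug))
z = (p_control - p_drug) / se
p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value

arr = p_control - p_drug                          # absolute risk reduction
nnt = 1 / arr                                     # number needed to treat

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")            # p ~ 0.008, well under 0.05
print(f"absolute risk reduction = {arr:.3f}, NNT = {nnt:.0f}")  # NNT = 200
```

The difference passes the conventional p < 0.05 test, yet about 200 patients would have to take the drug for one of them to benefit.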

On top of all that, Greenhalgh and colleagues point out, the sheer volume of medical evidence makes it essentially impossible to assess all of it intelligently. Even the guidelines that summarize the evidence are too voluminous to be useful to doctors.

“The number of [evidence based] clinical guidelines is now both unmanageable and unfathomable,” Greenhalgh and coauthors note. In one 24-hour period in 2005, for instance, a hospital in the United Kingdom admitted 18 patients with 44 diagnoses. The relevant U.K. national guidelines for those patients totaled 3,679 pages. Estimated reading time: 122 hours.

Of course, Greenhalgh and colleagues aren’t arguing against the need for evidence. Rather, they are saying the evidence needs to be better, better explained and more useful to practicing physicians. In particular, evidence-based medicine should focus more on individual patients, taking their personal differences and needs into account. Evidence-based guidelines should allow for expert judgment to be applied to specific cases, not just blind adherence to algorithmic rules based on statistical averages.

Progress in this regard will require higher publishing standards in medical journals. “Journal editors … should raise the bar for authors to improve the usability of evidence, and especially to require that research findings are presented in a way that informs individualized conversations,” Greenhalgh and collaborators insist.

One further (and particularly pernicious) publication problem involves the evidence that doesn’t get published at all. It might seem like evidence to say that two published studies show that a drug works. But that evidence wouldn’t be so compelling if you knew that three unpublished studies found the drug to be worthless.
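A toy calculation makes the point concrete. The effect sizes below are invented for illustration (the two-published, three-unpublished scenario above, with made-up numbers); pooling them crudely shows how the file drawer inflates the apparent benefit.

```python
# A minimal sketch of the file-drawer problem, with made-up effect sizes
# (not drawn from any study the column cites). Each number is a trial's
# estimated treatment effect; 0 means the drug did nothing.
published = [0.40, 0.35]                 # the two positive trials that got published
unpublished = [0.02, -0.05, 0.00]        # three null trials left in the file drawer

def naive_pooled_effect(trials):
    """Crude unweighted average of trial effects (for illustration only)."""
    return sum(trials) / len(trials)

print(f"Effect seen in the literature: {naive_pooled_effect(published):.2f}")                # 0.38
print(f"Effect using all five trials:  {naive_pooled_effect(published + unpublished):.2f}")  # 0.14
```

Reading only the published record, the drug looks more than twice as effective as it would if every trial counted.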

Sadly, there is no doubt that much medical research does in fact go unpublished — or perhaps is inappropriately altered or selectively presented in some way to make it publishable. Greenhalgh and colleagues cite one report showing that of 38 antidepressant studies with a positive outcome, 37 were published. Of 36 studies with negative results, only 14 were published.
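Those counts make the bias easy to quantify. The arithmetic below simply restates the figures from the report Greenhalgh and colleagues cite; the odds-ratio framing is my own gloss, not the report’s.

```python
# Publication counts as reported in the column (antidepressant trials):
positive_published, positive_total = 37, 38
negative_published, negative_total = 14, 36

rate_pos = positive_published / positive_total   # ~0.97
rate_neg = negative_published / negative_total   # ~0.39

# Odds of publication for each group, and the resulting odds ratio
odds_pos = positive_published / (positive_total - positive_published)   # 37 / 1
odds_neg = negative_published / (negative_total - negative_published)   # 14 / 22
odds_ratio = odds_pos / odds_neg

print(f"Positive trials published: {rate_pos:.0%}")    # 97%
print(f"Negative trials published: {rate_neg:.0%}")    # 39%
print(f"Odds ratio of publication: {odds_ratio:.0f}")  # ~58
```

Positive trials reached print about 97 percent of the time and negative trials about 39 percent of the time; framed as odds, the odds of publication were roughly 58 times higher for positive results.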

Many other studies have reported evidence that papers with positive (or statistically significant) findings are more likely to get published. One 2008 study, for instance, analyzed 16 papers investigating publication bias in randomized clinical trials and found clear indications of selective publication. Not only were some negative trial results never published; even within published papers, some statistically nonsignificant findings were omitted. As a result, the “evidence” in the published literature is often skewed toward positive results.

“There is strong evidence of an association between significant results and publication,” Kerry Dwan and collaborators wrote in PLOS ONE. “Studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported.”

In any event, these biases are mere symptoms of the many lamentable maladies afflicting medical evidence in the published literature. Much of the evidence that evidence-based medicine rests on is baloney, based on malfeasance, misunderstandings and faulty methodology. Apparently what we need is evidence-based evidence.

Follow me on Twitter: @tom_siegfried
