Context | Science News





Science past and present

Tom Siegfried



Informed wisdom trumps rigid rules when it comes to medical evidence

Systematic reviews emphasize process at the expense of thoughtful interpretation


SOUND SCIENCE  Different ways of reviewing medical evidence serve different purposes, with no form necessarily superior, researchers argue in a recent paper.


Everybody agrees that medical treatments should be based on sound evidence. Hardly anybody agrees on what sort of evidence counts as sound.

Sure, some people say the “gold standard” of medical evidence is the randomized controlled clinical trial. But such trials have their flaws, and translating their findings into sound real-world advice isn’t so straightforward. Besides, the best evidence rarely resides within any single study. Sound decisions come from considering the evidentiary database as a whole.

That’s why meta-analyses are also a popular candidate for best evidence. And in principle, meta-analyses make sense. By aggregating many studies and subjecting them to sophisticated statistical analysis, a meta-analysis can identify beneficial effects (or potential dangers) that escape detection in small studies. But those statistical techniques are justified only if all the studies done on the subject can be obtained and if they all use essentially similar methods on sufficiently similar populations. Those criteria are seldom met. So it is usually not wise to accept a meta-analysis as the final word.
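The aggregation step at the heart of many meta-analyses is simple arithmetic: each study's effect estimate is weighted by its precision (the inverse of its variance), so larger, tighter studies count for more. A minimal sketch of that fixed-effect pooling is below; the effect sizes and standard errors are made-up illustrative numbers, not data from any real trial.

```python
def pool_fixed_effect(effects, std_errors):
    """Combine per-study effect estimates into one precision-weighted average."""
    weights = [1 / se**2 for se in std_errors]            # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5                 # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical small trials of the same treatment (e.g. log odds ratios):
effects = [0.30, 0.10, 0.25]
std_errors = [0.15, 0.20, 0.10]

pooled, se = pool_fixed_effect(effects, std_errors)
print(round(pooled, 3), round(se, 3))  # → 0.241 0.077
```

Note that the pooled standard error (0.077) is smaller than any single study's, which is exactly how a meta-analysis can detect effects the individual studies miss; but the formula's validity rests on the assumption, rarely met in practice, that every study estimates the same underlying effect in comparable populations.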

Still, meta-analysis is often a part of what some people consider to be the best way of evaluating medical evidence: the systematic review.

A systematic review entails using “a predetermined structured method to search, screen, select, appraise and summarize study findings to answer a narrowly focused research question,” physician and health care researcher Trisha Greenhalgh of the University of Oxford and colleagues write in a new paper. “Using an exhaustive search methodology, the reviewer extracts all possibly relevant primary studies, and then limits the dataset using explicit inclusion and exclusion criteria.”

Systematic reviews are highly focused; while hundreds or thousands of studies may be identified initially, most are culled out so only a few are reviewed thoroughly with respect to the evidence they provide on a specific medical issue. The resulting published paper reaches a supposedly objective conclusion, often drawn from a quantitative analysis of the data.

Sounds good, right? And in fact, systematic reviews have gained a reputation as a superior form of medical evidence. In many quarters of medical practice and publishing, systematic reviews are considered the soundest evidence you can get.

But “systematic” is not synonymous with “high quality,” as Greenhalgh, Sally Thorne (University of British Columbia, Vancouver) and Kirsti Malterud (Uni Research Health, Bergen, Norway) point out in their paper, accepted for publication in the European Journal of Clinical Investigation. Sometimes systematic reviews are valuable, they acknowledge. “But sometimes, the term ‘systematic review’ allows a data aggregation to claim a more privileged position within the knowledge hierarchy than it actually deserves.”

Greenhalgh and colleagues question, for instance, why systematic reviews should be regarded as superior to “narrative” reviews. In a narrative review, an expert in the field surveys relevant publications and then interprets and critiques them. Such a review’s goal is to produce “an authoritative argument, based on informed wisdom,” Greenhalgh and colleagues write. Rather than just producing a paper that announces a specific conclusion, a narrative review reflects the choices and judgments by an expert about what research is worth considering and how to best interpret the body of evidence and apply it to a variety of medical issues and questions. Systematic reviews are like products recommended to you by Amazon’s computers; narrative reviews are birthday presents from friends who’ve known you long and well.

For some reason, though, an expert reviewer’s “informed wisdom” is considered an inferior source of reliable advice for medical practitioners, Greenhalgh and colleagues write. “Reviews crafted through the experience and judgment of experts are often viewed as untrustworthy (‘eminence-based’ is a pejorative term).”

Yet if you really want the best evidence, it might be a good idea to seek the counsel of people who know good evidence when they see it.

A systematic review might be fine for answering “a very specific question about how to treat a particular disease in a particular target group,” Greenhalgh and colleagues write. “But the doctor in the clinic, the nurse on the ward or the social worker in the community will encounter patients with a wide diversity of health states, cultural backgrounds, illnesses, sufferings and resources.” Real-life patients often have little in common with participants in research studies. A meaningful synthesis of evidence relevant to real life requires a reviewer to use “creativity and judgment” in assessing “a broad range of knowledge sources and strategies.”

Narrative reviews come in many versions. Some are systematic in their own way. But a key difference is that the standard systematic review focuses on process (search strategies, exclusion criteria, mathematical method) while narrative reviews emphasize thinking and interpretation. Ranking systematic reviews superior to narrative reviews “elevates the mechanistic processes of exhaustive search, wide exclusion and mathematical averaging over the thoughtful, in-depth, critically reflective processes of engagement with ideas,” Greenhalgh and collaborators assert.

Tabulating data and calculating confidence intervals are important skills, they agree. But the rigidity of the systematic review approach has its downsides. It omits the outliers, the diversity and variations in people and their diseases, diminishing the depth and nuance of medical knowledge. In some cases, a systematic review may be the right approach to a specific question. But “the absence of thoughtful, interpretive critical reflection can render such products hollow, misleading and potentially harmful,” Greenhalgh and colleagues contend.

And even when systematic reviews are useful for answering a particular question, they don’t serve many other important purposes — such as identifying new questions also in need of answers. A narrative review can provide not only guidance for current treatment but also advice on what research is needed to improve treatment in the future. Without the perspective provided by more wide-ranging narrative reviews, research funding may flow “into questions that are of limited importance, and which have often already been answered.”

Their point extends beyond the realm of medical evidence. There is value in knowledge, wisdom and especially judgment that is lost when process trumps substance. In many realms of science (and life in general), wisdom is often subordinated to following rules. Some rules, of course, are worthwhile guides to life (see Gibbs’ list, for example). But as the writing expert Robert Gunning once articulated nicely, rules are substitutes for thought.

In situations where thought is unnecessary, or needlessly time-consuming, obeying the rules is a useful strategy. But many other circumstances call for actual informed thinking and sound judgment. All too often in such cases the non-thinkers of the world rely instead on algorithms, usually designed to implement business models, with no respect for the judgments of informed and wise human experts.

In other words, bots are dolts. They are like a disease. Finding the right treatment will require gathering sound evidence. You probably won’t get it from a systematic review.

Follow me on Twitter: @tom_siegfried
