Informed wisdom trumps rigid rules when it comes to medical evidence

Systematic reviews emphasize process at the expense of thoughtful interpretation


SOUND SCIENCE  Different ways of reviewing medical evidence serve different purposes, with no form necessarily superior, researchers argue in a recent paper.


Everybody agrees that medical treatments should be based on sound evidence. Hardly anybody agrees on what sort of evidence counts as sound.

Sure, some people say the “gold standard” of medical evidence is the randomized controlled clinical trial. But such trials have their flaws, and translating their findings into sound real-world advice isn’t so straightforward. Besides, the best evidence rarely resides within any single study. Sound decisions come from considering the evidentiary database as a whole.

That’s why meta-analyses are also a popular candidate for best evidence. And in principle, meta-analyses make sense. By aggregating many studies and subjecting them to sophisticated statistical analysis, a meta-analysis can identify beneficial effects (or potential dangers) that escape detection in small studies. But those statistical techniques are justified only if all the studies done on the subject can be obtained and if they all use essentially similar methods on sufficiently similar populations. Those criteria are seldom met. So it is usually not wise to accept a meta-analysis as the final word.
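To see the basic mechanics, here is a minimal sketch (not from Greenhalgh’s paper) of the simplest pooling approach, fixed-effect inverse-variance weighting. The effect sizes and standard errors are invented purely for illustration.

```python
import math

# Hypothetical effect sizes (e.g., log risk ratios) and standard errors
# from four small trials -- invented numbers, not real data.
studies = [
    (-0.10, 0.20),
    (-0.25, 0.30),
    (-0.05, 0.15),
    (-0.30, 0.25),
]

# Each study is weighted by the inverse of its variance, so precise
# (low-variance) studies count for more than imprecise ones.
weights = [1.0 / (se ** 2) for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```

The pooled estimate can reveal an effect that no single small trial detects on its own, but only if the trials are genuinely comparable in methods and populations, which is exactly the assumption that is seldom met in practice.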

Still, meta-analysis is often a part of what some people consider to be the best way of evaluating medical evidence: the systematic review.

A systematic review entails using “a predetermined structured method to search, screen, select, appraise and summarize study findings to answer a narrowly focused research question,” physician and health care researcher Trisha Greenhalgh of the University of Oxford and colleagues write in a new paper. “Using an exhaustive search methodology, the reviewer extracts all possibly relevant primary studies, and then limits the dataset using explicit inclusion and exclusion criteria.”

Systematic reviews are highly focused: although hundreds or thousands of studies may be identified at the outset, most are culled so that only a few are reviewed thoroughly for the evidence they provide on a specific medical issue. The resulting published paper reaches a supposedly objective conclusion, often based on a quantitative analysis of the data.
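That culling step is deliberately mechanical. As a rough illustration (the fields and thresholds below are hypothetical, not taken from any actual review protocol), the predetermined inclusion and exclusion criteria act like a filter applied uniformly to every candidate study.

```python
# Illustrative sketch of the "culling" step of a systematic review:
# explicit, predetermined criteria applied mechanically to candidate studies.
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    design: str          # e.g., "RCT", "cohort", "case report"
    population: str      # e.g., "adults with type 2 diabetes"
    sample_size: int
    peer_reviewed: bool

def include(study: Study) -> bool:
    """Hypothetical criteria: only peer-reviewed RCTs in the target
    population with at least 100 participants survive screening."""
    return (
        study.peer_reviewed
        and study.design == "RCT"
        and "type 2 diabetes" in study.population
        and study.sample_size >= 100
    )

candidates = [
    Study("Trial A", "RCT", "adults with type 2 diabetes", 250, True),
    Study("Trial B", "cohort", "adults with type 2 diabetes", 1200, True),
    Study("Trial C", "RCT", "adolescents with type 1 diabetes", 80, True),
]

selected = [s for s in candidates if include(s)]
print([s.title for s in selected])  # only "Trial A" survives
```

Whatever does not match the predefined criteria is set aside, regardless of what an experienced reader might judge it to contribute.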

Sounds good, right? And in fact, systematic reviews have gained a reputation as a superior form of medical evidence. In many quarters of medical practice and publishing, systematic reviews are considered the soundest evidence you can get.

But “systematic” is not synonymous with “high quality,” as Greenhalgh, Sally Thorne (University of British Columbia, Vancouver) and Kirsti Malterud (Uni Research Health, Bergen, Norway) point out in their paper, accepted for publication in the European Journal of Clinical Investigation. Sometimes systematic reviews are valuable, they acknowledge. “But sometimes, the term ‘systematic review’ allows a data aggregation to claim a more privileged position within the knowledge hierarchy than it actually deserves.”

Greenhalgh and colleagues question, for instance, why systematic reviews should be regarded as superior to “narrative” reviews. In a narrative review, an expert in the field surveys relevant publications and then interprets and critiques them. Such a review’s goal is to produce “an authoritative argument, based on informed wisdom,” Greenhalgh and colleagues write. Rather than just producing a paper that announces a specific conclusion, a narrative review reflects an expert’s choices and judgments about which research is worth considering, how best to interpret the body of evidence, and how to apply it to a variety of medical issues and questions. Systematic reviews are like products recommended to you by Amazon’s computers; narrative reviews are birthday presents from friends who’ve known you long and well.

For some reason, though, an expert reviewer’s “informed wisdom” is considered an inferior source of reliable advice for medical practitioners, Greenhalgh and colleagues write. “Reviews crafted through the experience and judgment of experts are often viewed as untrustworthy (‘eminence-based’ is a pejorative term).”

Yet if you really want the best evidence, it might be a good idea to seek the counsel of people who know good evidence when they see it.

A systematic review might be fine for answering “a very specific question about how to treat a particular disease in a particular target group,” Greenhalgh and colleagues write. “But the doctor in the clinic, the nurse on the ward or the social worker in the community will encounter patients with a wide diversity of health states, cultural backgrounds, illnesses, sufferings and resources.” Real-life patients often have little in common with participants in research studies. A meaningful synthesis of evidence relevant to real life requires a reviewer to use “creativity and judgment” in assessing “a broad range of knowledge sources and strategies.”

Narrative reviews come in many versions. Some are systematic in their own way. But a key difference is that the standard systematic review focuses on process (search strategies, exclusion criteria, mathematical method) while narrative reviews emphasize thinking and interpretation. Ranking systematic reviews superior to narrative reviews “elevates the mechanistic processes of exhaustive search, wide exclusion and mathematical averaging over the thoughtful, in-depth, critically reflective processes of engagement with ideas,” Greenhalgh and collaborators assert.

Tabulating data and calculating confidence intervals are important skills, they agree. But the rigidity of the systematic review approach has its downsides. It omits the outliers, the diversity and variations in people and their diseases, diminishing the depth and nuance of medical knowledge. In some cases, a systematic review may be the right approach to a specific question. But “the absence of thoughtful, interpretive critical reflection can render such products hollow, misleading and potentially harmful,” Greenhalgh and colleagues contend.

And even when systematic reviews are useful for answering a particular question, they don’t serve many other important purposes — such as identifying new questions also in need of answers. A narrative review can provide not only guidance for current treatment but also advice on what research is needed to improve treatment in the future. Without the perspective provided by more wide-ranging narrative reviews, research funding may flow “into questions that are of limited importance, and which have often already been answered.”

Their point extends beyond the realm of medical evidence. There is value in knowledge, wisdom and especially judgment that is lost when process trumps substance. In many realms of science (and life in general), wisdom is often subordinated to following rules. Some rules, of course, are worthwhile guides to life (see Gibbs’ list, for example). But as the writing expert Robert Gunning once articulated nicely, rules are substitutes for thought.

In situations where thought is unnecessary, or needlessly time-consuming, obeying the rules is a useful strategy. But many other circumstances call for actual informed thinking and sound judgment. All too often in such cases the non-thinkers of the world rely instead on algorithms, usually designed to implement business models, with no respect for the judgments of informed and wise human experts.

In other words, bots are dolts. They are like a disease. Finding the right treatment will require gathering sound evidence. You probably won’t get it from a systematic review.


Tom Siegfried is a contributing correspondent. He was editor in chief of Science News from 2007 to 2012 and managing editor from 2014 to 2017.
