Traditional Eastern medicine is popular with more than just the Chinese. I have plenty of friends and neighbors — people with last names like Smith and Schwartz — who also subscribe to nostrums and practices that purportedly have a long history of use in China. But having been used for eons is no guarantee such treatments are efficacious, much less safe. How would we know? It turns out this question is hard to answer — even for the Chinese.
Indeed, after surveying reviews of traditional Chinese medicine, researchers at Lanzhou University in China now raise serious questions about the reliability of reported claims.
Their new paper focuses exclusively on reports published since 1999 in Chinese academic journals, roughly half of which were specialty publications. Clinicians authored half of the papers. Almost 85 percent of the reports focused on herbal remedies — anything from bulk herbs or pills to “decoctions.” Most of the remaining reviews assessed the value of acupuncture, although about one percent of the reports dealt with Tuina massage.
All 369 of the papers reviewed for this analysis were published in Chinese, so none of the deficiencies the Lanzhou team identified can be attributed to details that were lost in translation.
The papers were reviews, or what are typically referred to in Western journals as meta-analyses. These tend to offer side-by-side comparisons of treatments that initially had been reported individually. As such, meta-analyses can suffer from the limitation that they’re attempting to compare apples versus oranges — if not apples versus lard.
That said, various organizations have developed standards for evaluating meta-analyses, such as PRISMA (for Preferred Reporting Items for Systematic Reviews and Meta-Analyses), QUOROM (for Quality of Reporting of Meta-analyses) and AMSTAR (for Assessment of Multiple Systematic Reviews). The Lanzhou researchers assessed how each of the systematic reviews they had collected held up against these benchmarks.
Bottom line: Most papers didn’t hold up very well.
Many of the papers were incomplete, roughly one-third contained statistical errors, and others provided data or comparisons that the authors termed misleading. Fewer than half of the surveyed papers described how the data they were presenting had been collected, how those data had been analyzed or how a decision had been made about which studies to compare. The majority of papers also failed to assess the risk of bias across studies or to offer any information on potential conflicts of interest (such as who funded or otherwise supported the research being reviewed).
Overall, the Lanzhou team concludes, compliance with PRISMA, QUOROM and AMSTAR reporting guidelines is poor. Of particular concern, they observe: “None of the studies provided a structured summary, protocol or registration information, or provided a summary of results in the discussion.” As anyone who frequently reads journal articles can attest, these features are all part of the information we use in making a first scan of a paper — and a determination of whether the report deserves deeper scrutiny.
Overall, “the quality of these reviews is troubling,” the Lanzhou researchers conclude in the May 25 PLoS One. This is especially true, they point out, when considering that such peer-reviewed comparisons of therapies should be a key source of evidence-based medicine. Then again, they add, surveys of doctors and nurses in China have indicated that most “had not heard of or did not understand the meaning of evidence-based medicine.”
What shouldn’t get lost in discussions of this analysis is that the new report makes no judgment about the quality of the traditional Chinese medicine being described. Indeed, what the analysts argue is that it’s really not possible to make strong objective assessments of the therapies’ value when so many deficiencies riddle the reporting and analyses of data.
We should all applaud the Lanzhou scientists for attempting to hold their countrymen and -women to the same academic-reporting standards that have become the norm in the West. Therapeutic pearls (or failures) risk getting buried unless and until rigor is applied to analyzing data-gathering methods, data analysis and the potential for bias.