Research can’t go wrong with ‘Statistics Done Wrong’

Guide for scientists illuminates the many misuses of statistical methods

Statistics Done Wrong
Alex Reinhart
No Starch Press, $24.95

Fraud in science gets a lot of attention and condemnation — as it should. But fraud is relatively infrequent. And it isn’t terribly interesting, says Alex Reinhart in Statistics Done Wrong, “at least, not compared to all the errors that scientists commit unintentionally.”

Most of those inadvertent errors, it seems, result from the abuse or misuse of statistics, the mathematical methods used to test hypotheses and draw inferences from data. Reinhart, who began his scientific career as a physicist but now teaches statistics, describes in pithy and conversational language the many pitfalls of statistical tools, from p values (SN Online: 3/17/15) to regression analysis. He writes mainly for the well-meaning scientists who would like to analyze their data appropriately but have been misinstructed in statistical technique (or not instructed at all) and therefore risk reporting erroneous results.

Of all the books that tackle these issues, Reinhart’s is the most succinct, accessible and accurate assessment of the statistical flaws that render many scientific studies suspect. Testing multiple hypotheses at once, using samples that are too small, applying invalid tests and failing to specify ahead of time how the data will be analyzed are all a) very common practices and b) guaranteed to produce many wrong results. And as Reinhart astutely notes, virtually all the incentives in the scientific enterprise (such as getting published and getting tenure) encourage such bad practices and offer no rewards for people who want to do statistics right.
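The multiple-testing problem is easy to see with a short simulation (a sketch of the general statistical point, not an example from Reinhart’s book). Under a true null hypothesis, p values are uniformly distributed, so a study that runs 20 uncorrected tests at the usual 0.05 threshold has roughly a 1 − 0.95²⁰ ≈ 64 percent chance of at least one false positive:

```python
import random

random.seed(0)
ALPHA = 0.05          # conventional significance threshold
TESTS_PER_STUDY = 20  # uncorrected hypothesis tests per study
STUDIES = 10_000      # simulated studies, all with no real effect

# Under the null hypothesis, each p value is uniform on [0, 1],
# so any single test has a 5% chance of a false positive.
studies_with_false_positive = 0
for _ in range(STUDIES):
    p_values = [random.random() for _ in range(TESTS_PER_STUDY)]
    if any(p < ALPHA for p in p_values):
        studies_with_false_positive += 1

rate = studies_with_false_positive / STUDIES
print(f"Share of studies with at least one false positive: {rate:.2f}")
```

The simulated rate lands near the theoretical 0.64, which is why unplanned, uncorrected testing of many hypotheses reliably manufactures spurious findings.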

This is a small but important book. It should be required reading for all scientists, especially editors of journals and officials of funding agencies (not to mention science journalists — well, all journalists). It tells a clear and convincing story about a dysfunctional system. It exposes the many errors that scientists commit in their research methods. Reinhart also provides plenty of helpful guidance on how to avoid, or at least limit, many of the pitfalls of poor statistical methodology.

But he also acknowledges that even when statistical methods are applied properly — just as textbooks dictate — they often do not achieve their intended purpose: “Even properly done statistics can’t be trusted,” Reinhart declares. Trust him.


Tom Siegfried

Tom Siegfried is a contributing correspondent. He was editor in chief of Science News from 2007 to 2012 and managing editor from 2014 to 2017.
