It turns out that the old adage about statistics and damned lies wasn’t a joke. Sticks and stones may be bonebreakers, and words inflict no (physical) pain, but numbers can kill.
In 2004, for instance, a statistical analysis suggested that antidepressant drugs raised the risk of suicide in children and adolescents, leading the U.S. Food and Drug Administration to require a “black box” warning label. And guess what happened? Suicide rates among kids went up. Later studies suggested that the dramatic warning likely discouraged some kids from taking the drugs they needed. Not only that, but a subsequent statistical analysis showed that the original evidence was not as conclusive as the FDA had portrayed it.
You might wonder, of course, why the statistics were sound in the subsequent study but villainous in the first one. What turns damned lies into valuable truths? In this case, confidence in the later analysis stems from its use of a different statistical philosophy, specifically the approach named for the 18th-century clergyman Thomas Bayes.
Bayes proposed a method for calculating probabilities (published in 1764, after his death) that became widely used by mathematicians for well over a century. But Bayesian statistical methods were declared numerica non grata in the early decades of the 20th century, when the now standard methods of statistical analysis were devised and then imposed on the scientific enterprise via brainwashing in graduate school. In recent years, the Bayesian approach has made a comeback, thanks largely to the availability of powerful computers capable of carrying out the often complex Bayesian calculations. But the Bayes rebirth also owes a lot to a handful of statisticians who have long trumpeted its superiority, despite scorn from the standard-statistics community, whose members are known as “frequentists.”
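At its core, Bayes' method is a rule for updating a prior probability in light of new evidence. A minimal sketch, using hypothetical numbers (a screening-test scenario not drawn from the article), shows how a seemingly reliable test can still yield a modest posterior probability when the condition is rare:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of a hypothesis H given positive evidence E,
    via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    # P(E) by the law of total probability: the evidence can occur
    # either with the hypothesis true or with it false.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 1% base rate, a test that detects 90% of
# true cases, and a 5% false-positive rate.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # → 0.154
```

Even with a positive result, the posterior is only about 15 percent, because the low prior dominates. This dependence on a prior is exactly what frequentists long objected to, and what the computing power mentioned above made tractable for realistic problems.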