Feedback

Letters for May 8, 2010

A statistical education
Odds are it’s wrong, but the chances that statistics is to blame are slim and fat. Tom Siegfried (“Odds are, it’s wrong,” SN: 3/27/10, p. 26) accurately portrays the importance of statistics in the conduct of science. However, his failure to clearly distinguish between the misuse of statistics and the limitations of statistical methodology leads to misleading conclusions about the role of statistics in the proliferation of erroneous scientific results.
Statisticians have long recognized the challenges presented by multiple testing, the interpretation of observational data and, more recently, the analysis of high-dimensional data. Siegfried rightly acknowledges the many statisticians and biostatisticians who have written repeatedly and eloquently on these issues. He also notes that appropriate methods, such as those for false discovery control, are available to ameliorate the problems. Yet he curiously persists with the theme that statistics itself is defective, when the misuse of statistical methods is the main culprit in the situations he describes.
Siegfried has fired a shot across the bow of science that, although not perfectly on target, serves as a call for further discussion among statistical scientists and researchers. There is a need to educate statistical practitioners at all levels, as gross misuse of statistical methods borders on scientific misconduct. However, it is also important to realize that while statistics usually plays the fall guy in these matters, other, more fundamental factors are involved.
Sastry G. Pantula, President, American Statistical Association
Jef Teugels, President, International Statistical Institute
Len Stefanski, Editor, Theory and Methods, Journal of the American Statistical Association
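
[Editor’s note: For readers curious about the “false discovery control” methods the letter above mentions, the Benjamini-Hochberg procedure is one widely used example: sort the p-values and reject the k smallest, where k is the largest rank whose p-value falls at or below (k/m) times the chosen rate q. The Python sketch below is illustrative only; the p-values are invented for the example.]

def benjamini_hochberg(pvalues, q=0.05):
    """Return indices of hypotheses rejected at false discovery rate q."""
    m = len(pvalues)
    # Sort indices by their p-values.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k (1-indexed) with p_(k) <= (k/m) * q.
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k = rank
    # Reject the k hypotheses with the smallest p-values.
    return sorted(order[:k])

# Invented p-values: naive per-test thresholding at 0.05 would reject
# five of the eight hypotheses; Benjamini-Hochberg rejects only the
# first two, capping the expected share of false discoveries at 5 percent.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pvals))  # -> [0, 1]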

“Odds are, it’s wrong”: Long, confusing, hard to read. Also possibly the most important article you’ve ever published.
Seth Hill, Topanga, Calif.

Tom Siegfried is to be commended for his essay. As someone who taught graduate classes in statistics for the behavioral sciences for almost 40 years, I was gratified to see that someone was still trying to correct the many statistical myths and misconceptions, referred to as M and Ms in my first statistics text, Everything You Always Wanted to Know About Statistics but Didn’t Know How to Ask.
After having studied statistics with Wilcoxon, Savage, Bradley, Olkin, Solomon, Parzen and Atkinson, I now understand their frustration with getting us to “say it correctly.” However, even if we say it correctly, statistical inference does not allow us to say very much of value to researchers today. Maybe an overhaul of the entire logical system is in order, and I hope your essay is another “beginning.”
James K. Brewer, Professor Emeritus of Behavioral Statistics, Florida State University

Your piece on statistics was very welcome. I think SN should do a lot more of this sort of analysis of the methodology, politics and philosophy of science.
One piece I’d like to see is on “innumeracy.” It fascinates and startles me how little understanding most people have of numbers and their relationships.
James Monaco, Sag Harbor, N.Y.

I just read your editorial and article on flawed statistical analysis of scientific experiments. Perhaps a partial solution would be for a group of skilled statisticians and analysts to produce a pamphlet cataloging common analytical flaws, with examples of both flawed and correct analyses.
The pamphlet could be paired with a checklist or analysis sheet for researchers to use during the analysis phase to catch any major errors. Both could be made universally available on the websites of major scientific organizations, such as the National Academy of Sciences, and of major publications. As a further step, peer-reviewed publications could require the completed checklist with every submission. This would prevent many poorly analyzed articles from being submitted in the first place and would raise the bar for article submission and publication. There is nothing like an expert looking over one’s shoulder to make one do better work.
Bruce MacKay, Portland, Ore.

I laud Mr. Siegfried for bringing to the fore the problem with statistical conclusions. However, I was surprised that the concept of causality was not mentioned. For example: “There is a 100 percent correlation between dying of stomach cancer and having drunk milk as a baby.” All kinds of measures can be put to that correlation, but without a test of causality, it’s also wrong.
Fred Marton, Export, Pa.

Kudos to Tom Siegfried for his excellent article. I think we’ve all seen too many of these errors. In trying to find a pithy, Twitterable summary, I hit on the phrase: Statistical significance isn’t. But that’s too absolute, too certain given the probabilistic nature of the topic. So, better yet: Statistical significance isn’t — usually.
Ken Green, Chino Hills, Calif.

Correction
In the article “Happy 20th, Hubble” (SN: 4/10/10, p. 16), the caption titled “Crash of ’94” on Page 21 contains an error. The picture is a composite of images showing fragments of Comet Shoemaker-Levy 9 heading toward Jupiter. The image does not show the result of the comet’s collision with the gas giant. Instead, the black dot visible in the upper left portion of the planet is the shadow of Jupiter’s moon Io. The mark left by the comet crash isn’t visible in the image but would have been in the planet’s southern hemisphere.

Send communications to: Editor, Science News, 1719 N Street, NW, Washington, D.C. 20036 or editors@sciencenews.org. Letters subject to editing.