Atrazine paper’s challenge: Who’s responsible for accuracy?

Study claims to have turned up many dozens of errors and misleading statements in a review of published data.

Buried within a new paper discussing conflict-of-interest issues is an intriguing little case study. It looks at risks to wildlife from atrazine — a widely used herbicide — as assessed by a massive peer-reviewed analysis of published data. It charges that this analysis “misrepresented over 50 studies and had 122 inaccurate and 22 misleading statements.”

Strong charges, and worth investigating. But after talking to the lead authors of the review paper and its critique, I come away with a suspicion that the real take-home message here is about something quite different: publishing’s ability to vet massive quantities of scientific information.

The issue emerges in an examination of a 2008 paper in Critical Reviews in Toxicology. “Based on a weight of evidence analysis of all of the data,” it concluded, “the central theory that environmentally relevant concentrations of atrazine affect reproduction and/or reproductive development in fish, amphibians, and reptiles is not supported by the vast majority of observations.” For many other potential toxic endpoints, it said, “there is such a paucity of good data that definitive conclusions cannot be made.”

As part of a paper released early online, ahead of print, in Conservation Letters, Jason Rohr and Krista McCoy of the University of South Florida, in Tampa, critiqued the accuracy of that 2008 review. In doing so, they say they uncovered “an important contemporary example of a conflict of interest resulting in a potential illusion of environmental safety that could influence [regulation].”

Alleged bias
That conflict of interest, they charge, stems from the fact that the review “and the research of many of its authors were indirectly or directly funded by the company that produces the herbicide, Syngenta Crop Protection.”

Ecologist Keith Solomon of the University of Guelph in Ontario, Canada, agrees that any review of such a high-volume chemical needs to be credible and scientific. He's also the lead author of the disputed review, and he maintains that receiving industry funds doesn't automatically render his paper's conclusions biased or bogus. The value of research, he argues, should be measured simply by how it was conducted and interpreted.

Which is true, except that plenty of studies have shown that research sponsored by organizations with a vested financial interest in a particular outcome disproportionately reports findings advantageous to those sponsors.

Rohr and McCoy claim to have identified just such a trend in the Solomon et al review of atrazine data. Of 122 alleged mistakes that the pair tallied, all but five would have benefited Syngenta and the safety assessment of atrazine, they say. And among the five outliers, only one mistake would work against Syngenta’s interests; the rest are neutral. All 22 misleading statements, they say, come out in Syngenta’s favor.

Dueling charges
Rohr, an ecologist, studies amphibians and the threats posed to them by environmental agents and other factors. So he was familiar with what his colleagues had been publishing about atrazine over the years when the Solomon et al paper came out. And as he read the review, certain statements raised an eyebrow: They didn't seem to reflect what he remembered the cited papers concluding.

Other things in the review just angered him. Like when it discounted three of his team's papers for not having quantified the actual concentrations of atrazine to which his frogs were exposed. "In fact, we did report those actual concentrations," Rohr says.

Such issues prompted Rohr and McCoy to conduct their own meta-analysis of data on atrazine's impacts on wildlife. Their study, published earlier this year in Environmental Health Perspectives, reported a host of apparently deleterious impacts attributable to the herbicide.

While working on this analysis, they probed the Solomon et al review in detail. It took "a very long time," Rohr says. And turned up some surprises. Like the review's claim that one cited paper had found no potentially deleterious changes from atrazine at exposures below 100 parts per billion. "We went back to the study and found effects started at just 10 ppb," Rohr says.

In another instance, the review claimed that a cited study had reported the survivorship of frogs in a tank with no atrazine as 15 percent. “In fact, it was 85 percent,” Rohr found.

The USF pair documents its criticisms in a 45-page online supplement.

“A lot of these were the kinds of comments you might get from reviewers,” Solomon says. “And there are some that may actually be reasonable.”

Indeed, he adds, it’s expected that “in scientific discourse, everybody critically evaluates what everybody else does and says.” A paper offers hypotheses or conclusions and then looks to see if someone can shoot them down. “That’s what science is all about,” Solomon says.

Talk to him long enough and you’ll see he has his own laundry list of critiques about the Conservation Letters analysis. Some challenged portions of his paper’s text were taken out of context, he claims — and wouldn’t look fishy if read in their entirety. (He pointed, for instance, to a complaint that one passage in his group’s review didn’t cover all of the toxicological data in one area. “In fact,” he says, “the preceding statement says this material was reviewed in a book published in 2005. We summarized its information, but never said it was supposed to be all-inclusive.”)

Solomon also charges that Rohr and McCoy “have a fundamentally different approach to how they look at the toxicology data. We applied guidelines for causality” used in much of biology, Solomon argues. They involve things like: “Is there consistency across several studies? Is there a plausible mechanism of action? Is there a concentration response?”

By contrast, he says, what Rohr’s team does “is look for any difference between some exposure and the control. And is it statistically significant?” Sometimes these purported atrazine effects showed no dose response or ran counter to what’s known about the biology of the animal, Solomon says.

Rohr, as might be expected, challenges Solomon’s assessment.

Bottom line: These teams should duke it out in the literature and let their colleagues weigh in on the strengths of their respective arguments.

Reviews: a special case?
The Solomon et al review ultimately ran 50 pages in its published form and by my rough count cited at least 194 separate papers, five meeting abstracts, four industry documents, a master’s thesis, four government review documents, a conference synopsis and portions of about a dozen books. Can we honestly expect that errors won’t slip through when authors attempt to manage boatloads of data? And if they do, who’s to catch them?

Peer reviewers?

Keep in mind that peer review is a volunteer enterprise. No one gets paid. So there is little incentive for a reviewer to spend weeks or more anonymously ferreting out potential errors from a gargantuan manuscript. His or her task should be finding egregious errors and leaving the small stuff to a journal’s staff.

The problem: Many journals have little more than copyeditors on staff, people to make sure that style is consistent in the presentation of data and that the text’s grammar is correct.

At the International Congress on Peer Review and Biomedical Publication last September, several journals noted that they not only had staff to perform detailed fact checking but also in-house statisticians to ensure the appropriate tests of significance were used and reported. Not surprisingly, these tended to be the bigger, wealthier journals; in environmental science, a wealthy journal is all but an oxymoron.

Owing to the importance of good literature reviews, perhaps humongous enterprises like the one by Solomon’s group shouldn’t be published in journals where nitpicky fact checking is not the rule. After all, whether 10 versus 100 ppb concentrations of a toxic chemical provoke harm could prove pivotal to regulators looking to evaluate the safety of a product or procedure.

Sure, fact checking takes time and costs money. But look at all of the resources wasted when good science is misrepresented in the court of public policy. We should view these costs as societal investments to make sure that the science on which we depend is reported — and ultimately employed — truthfully.

Janet Raloff is the Editor, Digital, of Science News Explores, a daily online magazine for middle school students. She started at Science News in 1977 as the environment and policy writer, specializing in toxicology. To her never-ending surprise, her daughter became a toxicologist.
