Analysis gives a glimpse of the extraordinary language of lying

A linguistic analysis may betray research papers that contain faked data, but other questionable practices may be harder to detect.


Dutch social psychologist Diederik Stapel was known for his meteoric rise, until he was known for his fall. His research on social interactions, which spanned topics from infidelity to selfishness to discrimination, frequently appeared in top-tier journals. But then in 2011, three junior researchers raised concerns that Stapel was fabricating data. Stapel’s institution, Tilburg University, suspended him and launched a formal investigation. A commission ultimately determined that of his more than 125 research papers, at least 55 were based on fraudulent data. Stapel now has 57 retractions to his name.

The case provided an unusual opportunity for exploring the language of deception: one set of Stapel’s papers discussed faked data, while another set was based on legitimate results. Linguists David Markowitz and Jeffrey Hancock ran an analysis of articles in each set that listed Stapel as the first author. The researchers discovered particular tells in the language that allowed them to peg the fraudulent work with roughly 70 percent accuracy. While Stapel was careful to concoct data that appeared reasonable, he oversold his false goods, using, for example, more science-related terms and more amplifying terms, like extreme and exceptionally, in the now-retracted papers.
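For readers curious what that kind of word-rate comparison looks like in practice, here is a minimal sketch in Python. The amplifier and science-term word lists are illustrative stand-ins chosen for the example, not the dictionaries Markowitz and Hancock actually used, and the paper.txt input file is hypothetical.

```python
import re

# Illustrative word lists only; the researchers relied on established
# linguistic dictionaries, not these hand-picked examples.
AMPLIFIERS = {"extreme", "extremely", "exceptionally", "vastly", "profoundly"}
SCIENCE_TERMS = {"hypothesis", "experiment", "sample", "significant", "effect"}

def term_rate(text: str, vocabulary: set) -> float:
    """Return vocabulary hits per 1,000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in vocabulary)
    return 1000 * hits / len(words)

# Hypothetical usage: score a single paper's text.
paper_text = open("paper.txt").read()  # hypothetical input file
print("amplifiers per 1,000 words:", term_rate(paper_text, AMPLIFIERS))
print("science terms per 1,000 words:", term_rate(paper_text, SCIENCE_TERMS))
```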

Markowitz and Hancock, now at Stanford, are still probing the language of lies, and they recently ran a similar analysis on a larger sample of papers with fudged data.

The bottom line: Fraudulent papers were full of jargon, harder to read, and bloated with references. This parsing-of-language approach, which the team describes in the Journal of Language and Social Psychology, might be used to flag papers that deserve extra scrutiny. But tricks for detecting counterfeit data are unlikely to thwart the murkier problem of questionable research practices or the general lack of clarity in the scientific literature.

“This is an important contribution to the discussion of quality control in research,” Nick Steneck, a science historian at the University of Michigan and an expert in research integrity practices, told me. “But there’s a whole lot of other reasons why clarity and readability of scientific writing matters, including making things understandable to the public.”

To get a sense of whether Stapel’s linguistic clues to fraud were generalizable to lying scientists, Markowitz and Hancock combed the PubMed archive. They identified 253 papers retracted for fraudulent data and 253 unretracted control publications, matched by journal and publication year whenever possible. Then they examined various factors like readability, preponderance of jargon and terms related to cause, such as because and depend. This led to an “obfuscation index,” a summary score that captured the papers’ linguistic sleights of hand. Retracted papers scored higher on the index, Markowitz says.
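To make the idea of a summary score concrete, here is a minimal sketch of how such an obfuscation index could be computed: a handful of text features are standardized across the corpus and summed, so higher scores mean denser, harder-to-read prose. The feature set, word lists and equal weighting below are assumptions for illustration, not the exact components reported in the Journal of Language and Social Psychology paper.

```python
import statistics
import textstat  # third-party readability package (pip install textstat)

# Illustrative word lists, not the researchers' dictionaries.
CAUSAL_TERMS = {"because", "depend", "depends", "therefore", "hence"}
JARGON = {"paradigm", "modality", "heterogeneity", "operationalize"}

def features(text: str) -> dict:
    """Compute a few simple per-document features."""
    words = [w.strip(".,;:()") for w in text.lower().split()]
    n = max(len(words), 1)
    return {
        "causal_rate": sum(w in CAUSAL_TERMS for w in words) / n,
        "jargon_rate": sum(w in JARGON for w in words) / n,
        # Flesch reading ease is higher for easier text, so negate it
        # so every feature points in the "more obfuscated" direction.
        "difficulty": -textstat.flesch_reading_ease(text),
    }

def obfuscation_index(corpus: list) -> list:
    """Z-score each feature across the corpus, then sum per document."""
    feats = [features(t) for t in corpus]
    scores = [0.0] * len(corpus)
    for key in feats[0]:
        vals = [f[key] for f in feats]
        mu, sd = statistics.mean(vals), statistics.pstdev(vals) or 1.0
        for i, v in enumerate(vals):
            scores[i] += (v - mu) / sd
    return scores
```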

The analysis had a pretty high false-positive rate, 46 percent. But as someone who deciphers dense scientific papers for a living, I don’t find that rate surprising; plenty of legitimate papers are also hard to read and full of jargon. As Steneck told me, there’s a lot of badly written scientific literature out there, in part because there is so much pressure to publish.

Steneck, who literally wrote the book on the responsible conduct of research, says the linguistic analysis could be a useful tool for alerting editors to papers that may warrant additional scrutiny. Such computer-aided approaches are already in use; Déjà vu and eTBLAST, for example, aim to deter scientists who are behaving badly. But Steneck notes that the data fabrication, falsification or plagiarism that these computer-assisted approaches aim to catch is less common than questionable research practices. These include things like gaming statistics, unreported evidence or research outcomes, redundant publications and failure to disclose conflicts of interest.

Still, the new analysis provides an interesting peek into how hard it actually is to make stuff up. “It’s difficult for liars to relay information that isn’t real,” Markowitz told me. This finding fits with previous work by Hancock and colleagues showing that fake online hotel reviews have fewer concrete descriptive terms; for most of us, it’s hard to talk about what we’ve never seen.

Interestingly, when the researchers ran Stapel’s papers through the obfuscation analysis they used in their more recent work, his retracted papers didn’t fit the pattern: They weren’t significantly more opaque than his non-retracted work. The guy could tell a tidy story, a skill that Stapel himself described as necessary for his temporary success, even if the story was made up. And in a book Stapel later wrote about his rise and fall, he gave his own story a sad yet somehow perfect, tidy ending. As a review of the book notes, the final chapter is both beautifully written and plagiarized: It is loaded with phrases directly lifted from fiction writers Raymond Carver and James Joyce.
