Sunday, December 4, 2016

Academic Dishonesty, Post-Peer Review and Debunking Research

As a researcher, I consider publishing an essential part of my job and my career prospects. Most researchers engage in research honestly, and when their results are incorrect, it's more likely due to error than malice; still, there are cases in which researchers have fabricated data and even entire studies (for more, see here and here). Recently, a friend brought to my attention yet another instance of research dishonesty - a case that came to light last year, but that I only learned about today. What surprised me in this case is that both the dishonest researcher and the one who debunked the research are (or were at the time) graduate students:
The exposure of one of the biggest scientific frauds in recent memory didn’t start with concerns about normally distributed data, or the test-retest reliability of feelings thermometers, or anonymous Stata output on shady message boards, or any of the other statistically complex details that would make it such a bizarre and explosive scandal. Rather, it started in the most unremarkable way possible: with a graduate student trying to figure out a money issue.
Michael LaCour, a graduate student at UCLA, talked to David Broockman, a grad student at UC Berkeley, about a multiphase study he had performed in which canvassers were able to change respondents' attitudes about gay marriage by revealing their own sexual orientation. Broockman, fascinated by the results, set out to replicate the study and immediately hit the first issue: LaCour's survey had included 10,000 respondents paid $100 apiece, an enormous expense for a graduate student. So Broockman approached polling firms about the study idea - most said they couldn't carry out such a study, and if they could, it wouldn't be feasible on the usual grants grad students could obtain.

So Broockman started talking to people - carefully, because he was warned by many, and suspected himself, that exposing another researcher could get him labeled a troublemaker, or someone incapable of coming up with his own research ideas. And in fact, LaCour had written the paper on the results with a well-respected political scientist at Columbia, Donald Green. Broockman said that when he described the results to others, they were surprised that the findings seemed to fly in the face of previous theory and research, but they dropped those objections when they heard Green was involved. Indeed, when Jon Krosnick of Stanford was contacted about the study, he said, "I see Don Green is an author. I trust him completely, so I’m no longer doubtful."

Broockman hit many snags along the way, not just because he was a busy grad student working on his own research and finishing his degree - he was also cautioned about pursuing these suspicions by nearly everyone he spoke to. An anonymous post on poliscirumors.com laying out his suspicions was deleted. And his analyses of the distribution of the data, which looked too clean to be real, failed to turn up definitive proof of fraud.

But still, there were hints that something was wrong. When Broockman messaged LaCour with questions about methodology, the answers were vague and unhelpful. A similar study Broockman conducted with fellow grad student Josh Kalla showed first-wave response rates of around 1%, even though they were offering as much money as LaCour, who had reported response rates of 12%. An email to the survey research firm LaCour said he had worked with on the study revealed that not only had LaCour never worked with the firm, but the person he claimed to be in contact with (and had emails from) didn't exist. Then they hit gold: a 2012 Cooperative Campaign Analysis Project dataset that was a perfect match for LaCour's "first wave data."
By the end of the next day, Kalla, Broockman, and Peter Aronow (a Yale political scientist who had joined the effort) had compiled their report and sent it to Green, and Green had quickly replied that unless LaCour could explain everything in it, he’d reach out to Science and request a retraction. (Broockman had decided the best plan was to take their concerns to Green instead of LaCour, to reduce the chance that LaCour could scramble to contrive an explanation.)

After Green spoke to Vavreck, LaCour’s adviser, LaCour confessed to Green and Vavreck that he hadn’t conducted the surveys the way he had described them, though the precise nature of that conversation is unknown. Green posted his retraction request publicly on May 19, the same day Broockman, Kalla, and Aronow posted their report. That was also the day Broockman graduated. “Between the morning brunch and commencement, Josh and I kept leaving the ceremonies to work on the report,” Broockman wrote in an email.
So what happened to the grad student who was repeatedly cautioned that debunking research could be a career killer? The response he received was "uniformly positive" and, oh, by the way, he's now tenure track at Stanford University. On this issue, he wrote: "I think my discipline needs to answer this question: How can concerns about dishonesty in published research be brought to light in a way that protects innocent researchers and the truth — especially when it’s less egregious? I don’t think there’s an easy answer. But until we have one, all of us who have had such concerns remain liars by omission."

I think many of us in the research field have witnessed activities that were questionable, perhaps even clearly unethical. But rarely are we encouraged to bring our suspicions to light, and there are certainly no safe venues for raising concerns that may or may not be accurate. While I've never been actively discouraged from reporting ethical issues, I'm sure there are many researchers who, like Broockman, have been. And most grad students and post-docs are more likely to be working with seasoned faculty than with other grad students, so when ethical dilemmas come up, the power dynamic may discourage them from doing the right thing. While we certainly don't want witch hunts over data that looks "too good to be true," we need to find ways to protect fellow researchers and the public from bad science and false data. Because that hurts all of us.
