Sunday, December 4, 2016

Academic Dishonesty, Post-Peer Review and Debunking Research

As a researcher, I consider publishing a very important part of my job and of my ongoing career options. Most researchers engage in research honestly, and when their results are incorrect, it's more likely due to error than malice; still, there are cases in which researchers have fabricated data and even entire studies (for more, see here and here). Recently, a friend brought to my attention yet another instance of research dishonesty - a case that came to light last year, but that I only learned about today. What is surprising to me, in this case, is that both the dishonest researcher and the one who debunked the research are (or were at the time) graduate students:
The exposure of one of the biggest scientific frauds in recent memory didn’t start with concerns about normally distributed data, or the test-retest reliability of feelings thermometers, or anonymous Stata output on shady message boards, or any of the other statistically complex details that would make it such a bizarre and explosive scandal. Rather, it started in the most unremarkable way possible: with a graduate student trying to figure out a money issue.
Michael LaCour, a graduate student at UCLA, talked to David Broockman, a grad student at UC Berkeley, about a multiphase study he had performed in which canvassers were able to change respondents' attitudes about gay marriage by revealing their own sexual orientation. Broockman, who was fascinated by the results, set out to replicate the study and encountered the first issue: LaCour's survey had included 10,000 respondents paid $100 apiece, implying a rather large grant for a graduate student. So Broockman approached polling firms about the study idea - most said they couldn't carry out such a study, and even if they could, it wouldn't be feasible on the usual grants grad students could obtain.

So Broockman started talking to people - carefully, because he was informed by many, and suspected himself, that exposing another researcher could get him labeled a troublemaker, or someone incapable of coming up with his own research ideas. After all, LaCour had written the paper on the results with a well-respected political scientist at Columbia, Donald Green. Broockman even said that when he described the results to others, they were surprised that the results seemed to fly in the face of previous theory and research, but dropped those objections when they heard Green was involved. In fact, when Jon Krosnick of Stanford was contacted about the study, he said, "I see Don Green is an author. I trust him completely, so I'm no longer doubtful."

Broockman hit many snags along the way, and not just because he was a busy grad student working on his own research and finishing his degree - he was cautioned against exploring these issues by nearly everyone he spoke to. An anonymous post he made on poliscirumors.com laying out his suspicions was deleted. And although the distribution of the data looked too clean to be real, his analyses failed to uncover major issues.

But still, there were hints that something was wrong. When he messaged LaCour with questions about methodology, the answers were vague and unhelpful. A similar study Broockman conducted with fellow grad student Josh Kalla showed first-wave response rates of around 1%, even though they were offering as much money as LaCour, who had reported response rates of 12%. An email to the survey research firm LaCour said he had worked with on the study revealed that not only had LaCour never worked with the firm, but the person he claimed to be in contact with (and had emails from) didn't exist. Then, joined by Yale political scientist Peter Aronow, they hit gold: a 2012 Cooperative Campaign Analysis Project dataset that was a perfect match for LaCour's "first wave data."
By the end of the next day, Kalla, Broockman, and Aronow had compiled their report and sent it to Green, and Green had quickly replied that unless LaCour could explain everything in it, he’d reach out to Science and request a retraction. (Broockman had decided the best plan was to take their concerns to Green instead of LaCour in order to reduce the chance that LaCour could scramble to contrive an explanation.)

After Green spoke to Vavreck, LaCour’s adviser, LaCour confessed to Green and Vavreck that he hadn’t conducted the surveys the way he had described them, though the precise nature of that conversation is unknown. Green posted his retraction request publicly on May 19, the same day Broockman, Kalla, and Aronow posted their report. That was also the day Broockman graduated. “Between the morning brunch and commencement, Josh and I kept leaving the ceremonies to work on the report,” Broockman wrote in an email.
So what happened to the grad student who was repeatedly cautioned that debunking research could be a career killer? The response he received was "uniformly positive" and, oh, by the way, he's now tenure track at Stanford University. About this issue, he wrote: "I think my discipline needs to answer this question: How can concerns about dishonesty in published research be brought to light in a way that protects innocent researchers and the truth — especially when it's less egregious? I don't think there's an easy answer. But until we have one, all of us who have had such concerns remain liars by omission."

I think many of us in the research field have witnessed activities that were questionable, perhaps even clearly unethical. But rarely are we encouraged to bring our suspicions to light, and there are certainly no safe venues for raising concerns that may or may not be accurate. While I've never been actively discouraged from reporting ethical issues, I'm sure many researchers, like Broockman, have been. And grad students and post-docs are more likely to be working with seasoned faculty than with other grad students, so when ethical dilemmas come up, the power dynamic may discourage them from doing the right thing. While we certainly don't want witch hunts over data that looks "too good to be true," we need to find ways to protect fellow researchers and the public from bad science and false data. Because that hurts all of us.

Saturday, December 3, 2016

On Netflix Binges and Emily Dickinson

Yesterday I discovered a series called Black Mirror. It's an anthology show that started on Channel 4 before being picked up by Netflix for its newest season. The most similar show I can think of is The Twilight Zone, which I watched with my dad. Each episode features a stand-alone story with different actors and characters, but the stories clearly exist in the same universe. One bit of technology that appears across multiple episodes is an implanted device that allows the recording of memories, the ability to see the world through another's eyes via an uplink, and the power to block people completely - when you block someone, they appear as an outline filled in with gray:


You can't see the person; you can't speak to them, in person or otherwise. Even looking at pictures of the person, or remembering events with them, all you see is the gray figure. One character said it was like having your memories vandalized. It's a world that encourages impulsive reactions to others, and those impulsive reactions have lasting consequences.

The current episode I'm watching, Nosedive, explores how people establish their self-worth through social media. In this world, people rate each other - their posts and their interactions. Even total strangers can rate a person who makes them smile or pisses them off. Your overall rating establishes your place in society, and it impacts where you live, what job you can hold, and what medical treatment you can receive. We follow the main character, Lacie, who is trying to raise her rating so she can get the apartment she wants; instead, she hits a string of bad luck that causes her rating to plummet and everyone around her to shun her. It's much like the block, except now people look at your rating to decide whether you're worth their time. They see you, but not really.

Either way, if you're blocked or down-rated, you're nobody. It reminded me of a poem by my favorite poet, Emily Dickinson:
I'm nobody! Who are you?
Are you nobody, too?
Then there's a pair of us -- don't tell!
They'd banish us -- you know!

How dreary to be somebody!
How public like a frog
To tell one's name the livelong day
To an admiring bog!
The message is clear: It's time to stop basing our self-worth on people who don't value us. That may mean accepting that you're nobody to someone. But in the end, what makes the character - and us - happiest are real interactions that aren't based on a momentary up/down judgment.

Friday, December 2, 2016

21 Years

21 years.

That's how long it has been since Nie Shubin was executed for the crimes of rape and murder, crimes that the Supreme People's Court of China just ruled he did not commit:
Amid emotional scenes in the courtroom, judges ruled that Nie's original trial didn't "obtain enough objective evidence," saying there were serious doubts about the time of death, murder weapon and cause of death.

Another man, Wang Shujin, confessed in 2005 to the crime Nie was executed for -- 10 years after Nie's execution.

For years, it seemed no one would listen, but Zhang [Huanzhi, Nie's mother] later found an unlikely ally in the People's Daily -- the official newspaper of the ruling Communist Party. It ran a scathing commentary in September 2011 that asked: "In a case where someone was clearly wronged, why has it been so difficult to make it right?"

"Rehabilitation means little to the dead, but it means a lot to his surviving family and all other citizens," the paper said. "We can no longer afford to let Nie's case drag on."

Many have viewed Zhang's plight -- and the case involving her only son -- as an egregious example of widespread police torture, deficient due process and lax review of death sentences.
China executes more people each year than any other country, and it wasn't until 2013 that the Supreme People's Court banned police from using torture to obtain confessions. Psychological research has demonstrated that even without physical torture, police can get people to confess to crimes they did not commit - and they can do it using interrogation tactics that are perfectly legal. Even after the banning of physical torture, how many people in China could have been convicted and executed based on false confessions?

Thursday, December 1, 2016

Keep It Secret, Keep It Safe: Mobile Apps and Your Data

A new study out of Carnegie Mellon University suggests that how a mobile app claims it will use your personal data is not always aligned with what it actually does:
An analysis of almost 18,000 popular free apps from the Google Play store found almost half lacked a privacy policy, even though 71 percent of those appear to be processing personally identifiable information and would thus be required to explain how under state laws such as the California Online Privacy Protection Act (CalOPPA).

Even those apps that had policies often had inconsistencies. For instance, as many as 41 percent of these apps could be collecting location information and 17 percent could be sharing that information with third parties without stating so in their privacy policy.

“Overall, each app appears to exhibit a mean of 1.83 possible inconsistencies and that’s a huge number,” said Norman Sadeh, professor of computer science in CMU’s Institute for Software Research. The number of discrepancies is not necessarily surprising to privacy researchers, he added, “but if you’re talking to anyone else, they’re likely to say ‘My goodness!’”

Sadeh’s group is collaborating with the California Office of the Attorney General to use a customized version of its system to check for compliance with CalOPPA and to assess the effectiveness of CalOPPA and “Do Not Track” legislation.
The automated system combines natural language processing and machine learning to analyze privacy policy text, then compares those results to the actual code of the app. For apps that don't already have a privacy policy, it also flags anything in the code that would warrant one.
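
To make the idea concrete, here's a minimal sketch of the policy-versus-code comparison - my own illustration, not the CMU system: simple keyword matching stands in for their NLP models, and a set of Android permission names stands in for static analysis of the app's code. All the mappings and sample data are hypothetical.

# Keywords suggesting the policy discloses a given practice
# (hypothetical stand-in for the NLP/ML policy analysis).
POLICY_CLAIMS = {
    "collects_location": ["location", "gps", "geolocation"],
    "shares_with_third_parties": ["third party", "third-party", "partners"],
}

# Permissions or API calls implying each practice
# (hypothetical stand-in for static code analysis).
CODE_EVIDENCE = {
    "collects_location": {"ACCESS_FINE_LOCATION", "ACCESS_COARSE_LOCATION"},
    "shares_with_third_parties": {"AD_SDK_INIT", "ANALYTICS_UPLOAD"},
}

def policy_mentions(policy_text):
    """Practices the privacy policy appears to disclose."""
    text = policy_text.lower()
    return {practice for practice, keywords in POLICY_CLAIMS.items()
            if any(kw in text for kw in keywords)}

def code_practices(permissions):
    """Practices the app's code/manifest appears to perform."""
    return {practice for practice, evidence in CODE_EVIDENCE.items()
            if permissions & evidence}

def find_inconsistencies(policy_text, permissions):
    """Practices found in the code but never disclosed in the policy."""
    return code_practices(permissions) - policy_mentions(policy_text)

policy = "We collect your email address to create your account."
perms = {"ACCESS_FINE_LOCATION", "INTERNET"}
print(find_inconsistencies(policy, perms))
# -> {'collects_location'}: the app can read location; the policy never says so

The real pipeline is obviously far more sophisticated on both sides, but the core output is the same: the set of data practices the code suggests and the policy never discloses.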

New Periodic Table, Who Dis?

Time to throw out your old periodic table and buy a new one - four elements have been added and now they even have names:
Get ready to ring in 2017 with a brand new Periodic Table, because four more elements have officially been added to the seventh row: nihonium (Nh), moscovium (Mc), tennessine (Ts), and oganesson (Og).

We’ve been hearing about these four new elements since January, but the International Union of Pure and Applied Chemistry (IUPAC) has finally announced that the names have been officially approved, so we’ve got the go-ahead to tear down all our posters and find some new ones.

To get to know our four new friends a little better, nihonium is derived from "Nihon", a Japanese word for Japan, and moscovium honours the Russian capital city, Moscow.

Tennessine is named after the state of Tennessee, known for its pioneering research in chemistry, and it marks the second US state to be honoured on the periodic table. The first was California, referenced by californium (element 98).

Oganesson is named after 83-year-old Russian physicist Yuri Oganessian, and this is only the second time a new element has been named for a living scientist.
The elements were already approved and added to the table with temporary names, as you can see in the picture below. But a new table should be available soon with the actual names. Stay tuned!

By DePiep [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

Wednesday, November 30, 2016

Feed the Trolls, Tuppence a Bag

Actually, don't - really, don't - feed the trolls. But if you're going to, try to collect some data from them while you're at it. If you regularly read the comments, you've likely witnessed the same thing the author of this post, Christie Aschwanden, discusses:
I’d just written a short article that began with a quote from the movie “Blazing Saddles”: “Badges? We don’t need no stinkin’ badges!” After the story published, I quickly heard from readers explaining that, actually, the quote was originally from an earlier movie, “The Treasure of the Sierra Madre.” The thing was, I’d included that information in the article.
In fact, research suggests many people share articles without actually having read them. It seems pretty likely they comment without fully reading, either. These frustrating incidents got Aschwanden thinking - what makes people comment on an article? To answer this question, she analyzed comments on FiveThirtyEight and collected survey data from 8,500 people. My only complaint is that, though they had a really large sample to work with, their key question was open-ended, so they randomly sampled 500 responses to qualitatively analyze and categorize. (It would have been awesome if they could have done something with natural language processing - see the sketch at the end of this post - but I digress.) Here's what they found from their main question - why people comment:


The top category was to correct an error - this might explain why so many people comment without seeming to have read the article. Either they jumped down to the comment section as soon as they read what they thought was an error (and therefore missed information later on), or they are so fixated on what they feel is an inaccuracy that they stop really comprehending the rest of the article. They did include a similar closed-ended, multiple-response item later on that covered the full sample, and the top category was again related to correcting an error - people are most likely to comment when they know something about the subject that wasn't in the article (although, as Aschwanden's stories demonstrate, sometimes that information is there):


She offers a few explanations for some of the unusual commenting behavior, including my old pal, the Dunning-Kruger effect. She also reached out to some of FiveThirtyEight's top commenters. One interesting observation (that I'm just going to throw out there before I wrap up this post, because I'm more interested in what you guys think): most of the survey respondents (over 70%) and all of the top commenters were men. Thoughts? Speculation on why?
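
And since I brought up natural language processing, here's the kind of quick-and-dirty analysis I had in mind - entirely my own illustration, with hypothetical categories and keywords; real NLP (a topic model or a trained classifier) would replace the keyword lists:

import random

# Hypothetical keyword lists for a few of the motivation categories.
CATEGORIES = {
    "correct an error": ["wrong", "error", "actually", "incorrect"],
    "add information": ["also", "in addition", "worth noting"],
    "share an opinion": ["i think", "i believe", "in my opinion"],
}

def categorize(response):
    """Assign an open-ended response to the first matching category."""
    text = response.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

def sample_and_code(responses, n=500, seed=538):
    """Draw a random subsample and tally the category of each response."""
    random.seed(seed)
    counts = {}
    for r in random.sample(responses, min(n, len(responses))):
        category = categorize(r)
        counts[category] = counts.get(category, 0) + 1
    return counts

# Hypothetical usage:
# counts = sample_and_code(all_open_ended_responses, n=8500)

With a real classifier in place of the keyword lists, you could code all 8,500 responses instead of sampling 500 - at the cost of some of the nuance a human coder brings.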

A New Approach to Marketing with Data

Marketing researchers have long used data to influence the direction, and sometimes the content, of advertisements. But this might be the first time I've seen a company use data to directly generate ad content: Spotify has created a new global ad campaign highlighting some of its users' listening habits. The results are pretty brilliant:




They sign off many of the ads with "Thanks, 2016. It's been weird." Yes, it has.