Today, I’m going to talk about a research misconduct incident that unfolded at UNC Chapel Hill between 2016 and 2018. The researcher responsible for the misconduct was a postdoctoral researcher I’m going to be referring to in this post as Dr. B, for reasons I’ll get into later, although her name is readily available on the US Office of Research Integrity (ORI) website and other places linked to here. The work was all conducted in the Center for Integrative Chemical Biology and Drug Discovery, and much of it was NIH-funded.
The first startling thing about this particular case is the extent of the misconduct. When Dr. B first appeared on the ORI website in 2017, it was in connection with the retraction of a 2016 paper in PLoS One; six of the paper's eight figures are listed as falsified or fabricated (not counting the supplemental figures, all of which were fabricated). Most were made up from whole cloth. When I compare the ORI report's list of fictitious data and figures to the original paper (now covered in bright red retraction notices), it reads like fiction written by someone who never stepped inside the lab.
Also potentially surprising to someone not familiar with research misconduct cases is the penalty to which Dr. B "voluntarily agreed": a three-year probationary period during which her research must be supervised, a requirement that any institution hiring her certify that real data underlies any publication or funding request, and a retraction of the paper.
The most surprising thing, though, was that less than a year later, a second misconduct report on the same researcher appeared on the ORI website. This must have been especially surprising to the ORI itself, whose case-report URLs are based on the offender's name, with no numbering or random characters, such that they had to add Dr. B's middle initial the second time around to avoid breaking their website.
The second case also involved a 2016 publication, from before the additional oversight sanctions went into place, but because Dr. B knew about it when she signed the terms of the first agreement and did not come clean all at once, she was issued new, tighter sanctions. In what feels particularly bold in the age of automated image matching, with tools like Google's reverse image search and YouTube's Content ID, she fabricated western blot images by copying pieces of old figures from 2013.
Now, I've never met Dr. B. Before today, I'd never heard of her, and I'll bet you hadn't either. But one of the very first Google results for her name is a blog post picking apart the incident, which I believe was written a year ago for the same class assignment I'm currently fulfilling. Obviously the other top Google results, her ORI case summaries and various retraction notices, don't exactly paint a glowing picture for any future employers or funders googling Dr. B, but I'm not interested in adding my voice to that particular pile today. The official misconduct reports tell that story, I think.
While I was originally drawn to the case for the same reason as the last poster, a repeat offender of such deep and intentional research misconduct, the more I read, the more I wondered why someone would do such a thing. Dr. B held a postdoc in a competitive field at a competitive school, and her results had potential direct effects on public health. I moved from being appalled to wanting to understand. What makes a person who has dedicated their life to academia decide to literally make up data? How did she not change her mind during all those hours of writing and Photoshopping? Why didn't she fess up about both papers when she was caught?
I don't know the answers for sure. Dr. B hasn't posted statements anywhere I can find, and I haven't even been able to figure out whether she's still in academia. She's certainly not at UNC right now, and certainly won't be working on federally funded projects anytime soon, but the trail goes cold after the 2018 retraction.
Instead, I found that Chapter 6 of the 2017 book Fostering Integrity in Research offers an interesting perspective grounded in social science research. People tend to make risky decisions more often to avoid loss (of status, of face, of money, of jobs, of respect, of future prospects) than to gain things they don't already have, such as fame or fortune. Rational decision-making is also impaired by lack of sleep and low blood glucose, both of which the chapter calls out specifically. To me, that all sounds like a pretty familiar research environment.
Obviously, not everyone who has a stressful or unproductive experience in one lab or another fabricates data, and it's good that we have systems to catch misconduct when it happens. But research means trying to answer questions you don't yet know the answers to, and only some of those answers are publishable. When I think about all of my friends who've stayed an extra year in a program not because they weren't working but because their initial project "didn't work out," I don't think it's that crazy to imagine someone reaching the point where they felt they had no legitimate path to the end of their degree, or their postdoc, or to the funding they needed.
I think, actually, that research specifically into the motivations behind misconduct would be fascinating, and it is much needed to target interventions for research integrity. But for now, I pose essentially the same question as the last blog post I found on this topic: What are we hoping to accomplish with the punishments we dole out to research scientists? And are we achieving it when we put scientists on three-year probation, put marks in their permanent records, and conclude that the environment they were in had nothing to do with it?