In my Writing and Digital Media class, we've learned a lot about the importance of looking at new material in new ways, and today I found myself reading in a sort of multimodal fashion without even planning it. Sometimes, thanks to the host of social networks and forums I browse each week, I come across great articles that I would never have been curious enough to seek out on my own. This is one of those articles, and I'm glad to have found it.
In 2012, John Bohannon discovered that some scientific publications were being less than academically honest. One company in particular, Scientific & Academic Publishing Co., didn’t seem to exist at all.
As Bohannon recounts: "After months of e-mailing the editors of SAP, I finally received a response. Someone named Charles Duke reiterated—in broken English—that SAP is an American publisher based in California. His e-mail arrived at 3 a.m., Eastern time."
But Bohannon's article points to a bigger picture. Apparently, many scientific journals don't just claim to exist in places they don't – they also claim to thoroughly review submissions when they don't. To test these journals' peer-reviewing practices, he crafted more than 300 versions of a scientific article riddled with obvious errors and ethical problems. In theory, anyone with a high-school knowledge of science should have rejected the articles immediately. This wasn't the case.
More than half of the journals Bohannon – or should I say "Ocorrafoo Cobange" – wrote to accepted his submission. Bohannon even created an interesting (but also confusing) 3D data plot showing the submissions, which journals accepted or rejected them, and the companies' true locations (as revealed by the IP addresses in their e-mail correspondence). Feel free to view the map here, but be warned: red means accepted and green means rejected, because green marks the good outcome, and rejection is what Bohannon wanted.
Bohannon summarizes the results starkly: "Acceptance was the norm, not the exception. The paper was accepted by journals hosted by industry titans Sage and Elsevier. The paper was accepted by journals published by prestigious academic institutions such as Kobe University in Japan. It was accepted by scholarly society journals. It was even accepted by journals for which the paper's topic was utterly inappropriate, such as the Journal of Experimental & Clinical Assisted Reproduction."
PZ Myers of FreeThoughtBlogs also stumbled across this meta-experiment, and he did more digging than I did while reading the article. Thanks to Reddit user u/OptimalCynic (a very apt username in a field full of people trying to get their research published by these supposedly legitimate companies), I read Myers's post too.
Myers writes: "The other problem [with Bohannon's experiment]: NO CONTROLS. The fake papers were sent off to 304 open-access journals (or, more properly, pay-to-publish journals), but not to any traditional journals. What a curious omission — that's such an obvious aspect of the experiment. The results would be a comparison of the proportion of traditional journals that accepted it vs. the proportion of open-access journals that accepted it… but as it stands, I have no idea if the proportion of bad acceptances within the pay-to-publish community is unusual or not. How can you publish something without a control group in a reputable science journal? Who reviewed this thing? Was it reviewed at all?"
Overall, Myers and the Reddit commenters agree that peer review requires actual, thorough reviewing. As someone who spends lots of time reading her own work and the work of others, genuinely trying to improve both the writing and the writer's thought process, I find this equal parts amusing and disappointing. As a college student thirsting after a steady career, I often wonder how people can cut corners this easily. Forget India: I know 30 students within 15 miles of where I sit who would be more than willing to review a scientific research article.