Competitive Academia and Perverse Incentives

Recently, I read Dr. Edwards’s paper “Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition,” published in a special issue of Environmental Engineering Science. The article focuses on the rise of perverse incentives in academia, their negative effects on scholars, such as damaged research integrity and unethical behavior, and the current atmosphere of hypercompetition driven by tight funding conditions. As a PhD student myself, the content got me thinking deeply about academia and our current evaluation system.

In this hypercompetitive environment, where growing numbers of young scholars chase limited funding opportunities, those who distribute the money and positions increasingly judge the strength of an application through quantitative performance metrics. Even we scholars do it: we call someone a promising PhD student because his or her recent findings were published in “Nature Communications” (hypothetically), suggesting that the prestige of a top-tier journal weighs more than the scientific information it conveys. For young scholars hoping to gain an advantage on the job market or improve their success rate in funding applications, boosting these metrics looks like a shortcut, precisely because the metrics are easy to manipulate. Publishing more papers is the easiest step. Research data are sliced into least publishable units and submitted to different journals to yield more papers. Risky ideas are abandoned at an early stage, and effort is concentrated on experiments with promising preliminary results. Writing begins before the experiments are even finished, and the manuscript is submitted as early as possible. Reviewers are expected to complete their critical review within a month, and once the comments reach the authors, the revised manuscript is resubmitted within a week. All of these efforts serve a single purpose: accelerating the publishing process so that as many papers as possible are accepted.

According to bibliometric analysts Lutz Bornmann and Ruediger Mutz, the annual growth rate of publications has been closer to 8-9% since 1980, which equates to a doubling of global scientific output roughly every nine years. Alongside this growth in publication numbers, more journals are created to absorb the flood of low-quality papers. It may look as though we are riding a high-speed train toward a golden age of research, but the speed comes at the expense of research quality.
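
As a quick back-of-the-envelope check on that doubling figure (my own arithmetic, not taken from the paper), a constant annual growth rate r implies a doubling time of ln 2 / ln(1 + r):

    import math

    # Output growing at a constant annual rate r doubles when (1 + r)**T = 2,
    # i.e. after T = ln(2) / ln(1 + r) years.
    for r in (0.08, 0.09):
        t_double = math.log(2) / math.log(1 + r)
        print(f"growth rate {r:.0%}: doubling time {t_double:.1f} years")
    # growth rate 8%: doubling time 9.0 years
    # growth rate 9%: doubling time 8.0 years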

With more papers published, scholars accumulate more citations over time, to say nothing of more ignominious ways of boosting citation counts, such as asking authors to add citations during the peer review process. More papers and citations also inflate journal impact factors (IF). This popular figure, announced each year in the Journal Citation Reports (JCR), receives a great deal of attention. About 80% of the journals included in the JCR saw their IF increase from 1994 to 2005. Today it is easy to find a journal in the environmental engineering and/or science field with an IF over 5 or 6. But a higher IF does not equal higher quality. I would even say that much of the published research is of little use for current problems or potential future ones. We need an effective remedy for the fierce competition and the metrics-oriented evaluation system, one that mitigates the damage already done and prevents further unhealthy development in academia.
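
For readers unfamiliar with how the figure is computed: a journal’s two-year impact factor for year Y is the number of citations received in year Y by items the journal published in the two preceding years, divided by the number of citable items it published in those two years. A minimal sketch with made-up numbers, purely for illustration:

    # Two-year impact factor for year Y:
    #   IF(Y) = citations received in Y to items published in Y-1 and Y-2
    #           / citable items published in Y-1 and Y-2
    def impact_factor(citations_in_y: int, citable_items_prev_two_years: int) -> float:
        return citations_in_y / citable_items_prev_two_years

    # Hypothetical journal: 1200 citations in 2016 to papers from 2014-2015,
    # which comprised 240 citable items.
    print(impact_factor(1200, 240))  # 5.0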

2 comments on “Competitive Academia and Perverse Incentives”
  1. agcee5804 says:

    Your post reminds me of a sticker with a slogan pasted on the water bottle of another PhD student I know. It reads “I am more than my H-Index.” The H-index is a metric of academic accomplishment based on the number of papers someone has published and how often those papers are cited. If someone has an H-index of 30, they have 30 papers with at least 30 citations each (please correct me if I’m wrong). What bothers me most about this index is that it can be artificially inflated by citing yourself at every chance possible. This manipulation becomes even more pronounced when you have many publications in one area and can cite yourself multiple times each time you publish something new in that area. I’ve seen people laugh about the number but also stare in awe at the webpage of someone with a large H-index according to Google Scholar. This number is a weird constraint that we’ve developed for ourselves, and I’m not sure it makes sense anymore (if it ever did).
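
    A small sketch of that definition with hypothetical citation counts, just to illustrate how the number is obtained:

        # h-index: the largest h such that the author has h papers with at
        # least h citations each (citation counts below are made up).
        def h_index(citations):
            cited = sorted(citations, reverse=True)
            h = 0
            for rank, count in enumerate(cited, start=1):
                if count >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([50, 40, 33, 30, 30, 12, 8, 3]))  # 7: seven papers with >= 7 citations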

  2. Deb McGlynn says:

    This is entirely true and unfortunately the norm in academia. I have read a number of articles published in top-tier journals that fall short of other papers in the same journal. It amazes me that such a bad paper could be published in such a great journal. Then again, after a few years in academia I began to learn that the person who published the sub-par article was well renowned in his field of study. This leaves someone who is new to academia and trying to publish a much better piece of work frustrated with the system. How does one ever achieve that status if they are never given a chance and are constantly overshadowed by better-known researchers with poor research findings?
