Open Access

Wildlife Biology is an open-access journal focused on wildlife science and management, with the goal of “promoting a scientific basis for the conservation and management of wildlife and of human-wildlife relationships”. The journal publishes a variety of empirical and theoretical work across several sub-fields within the discipline. Submitted manuscripts are evaluated by multiple peer reviewers and a subject editor. Wildlife Biology is published by the Oikos Editorial Office, owned by the Nordic Council for Wildlife Research, and supported by the Office National de la Chasse et de la Faune Sauvage (in France). It was established in 1994 and is published bimonthly. Its website does not include any position statements specifically addressing the open access movement, but does claim that “publishing your work as Open access will increase the number of readers and make the published results more widely spread” (1).

Their publishing fee (€500, plus an additional €165 value-added tax where applicable) seems comparable to, or slightly lower than, fees charged by other journals in the field, especially since they do not charge by page. Authors from countries with low average incomes (based on World Bank classifications) can apply for publishing fee waivers.

I’ll admit that I was a bit skeptical of this journal at first. Open access journals get a bit of a bad rap in our field, with some arguing that it can be easier to get manuscripts accepted due to less rigorous review standards. “Predatory” journals are also a growing problem in the field, leading some professionals to distrust lesser-known journals. I have read some articles of questionable quality in open access journals before. However, in skimming through the articles, this journal appears to be legitimate, with interesting articles and respected scientists included among the authors of recently accepted manuscripts. I’m excited to have found a few articles to add to my “to-read” folder, and another journal to keep in mind for my next paper submission.

While skimming through the case summaries to select one to cover for this assignment, I noticed that all of the violations I read about were related to data falsification or fabrication. The resulting investigations led to retractions of papers based on false data, as well as probation periods and/or specific sanctions to ensure that future publications were based on non-falsified data. Scientists found to have committed repeated violations were barred from contracting with the federal government for future research, usually for a set number of years.

Of these case summaries, one stood out to me because of the severity of the penalties and the serious negative consequences of data falsification:

According to the Office of Research Integrity (1), Paul Kornak, a former researcher at a Veterans Affairs medical center, was indicted on 48 criminal charges stemming from several serious infractions. He pled guilty to 3 of these charges as part of a plea deal, admitting that he had lied about a previous conviction and probation on his job application. He also admitted that, while employed at the medical center, he falsified information on potential study participants and included people in studies who did not meet inclusion criteria. A 2005 popular press article on the case further elaborated that Mr. Kornak “admitted to doctoring records for at least 27 patients from 1999 to 2002 as coordinator for clinical trials” (2). This falsified paperwork led to the death of one person, who was included in a study of chemotherapy drugs despite previous test results showing “impaired kidney and liver function,” which should have prevented their inclusion in the study (1).

As a result, Mr. Kornak was permanently barred from working with the federal government, sentenced to almost 6 years in prison, and ordered to pay $639,000 in restitution to his former employer and the pharmaceutical companies he had defrauded.

Given that his actions led to the death of a study participant, I’m surprised that the penalties were not more severe, although the prison sentence was apparently the maximum allowed length for criminally negligent homicide (2). I’m curious as to whether there was any additional civil legal action related to these ethics violations.

As someone who is relatively unfamiliar with the medical research field, I found the fact that researchers would intentionally falsify data during medical studies particularly disturbing, given the potential effects on human health and wellbeing. While I understand that the “publish or perish” attitude prevalent in academia and research creates a lot of pressure for scientists to get results and present them, I don’t see how any publication could be worth falsifying data to get the results that you want, especially knowing that your work is likely to influence the medical care of countless unknown people later on.

University rankings

During our discussion on mission statements in Monday’s class, we touched on whether prospective students might decide where to apply based on university mission statements. While some students said that they had looked at mission statements, university rankings came up as a more commonly considered factor in decision making. In particular, rankings were described as especially important to international students, who may not have the chance to visit campuses before applying, or may need to justify their choice of university to receive government funding.

I was curious about how these rankings are determined, and decided to do some digging. It turns out that there are a lot of different university ranking systems, and each ranking system considers and weights factors differently, resulting in some variation in ranking orders for schools. Despite the variation in order, the specific schools ranked highest were fairly similar across systems.

The top Google search result for “university rankings” was the U.S. News list of 2019 Best National Universities (1). This system places the most emphasis on student outcomes, such as retention and graduation rates, in calculating rankings. It also heavily weights faculty resources and “expert opinion” on each school’s academic quality, gathered through surveys of academic administrators and high school counselors.

While these factors are all important in comparing schools, I was surprised to see that cost of attendance and financial aid availability for students were not clearly factored into this ranking system. These two factors strongly influenced my own decision process when applying to universities, along with each school’s general reputation and program offerings. Financial aid also played a large part in my final decision on where to attend. The other major deciding factor was the opportunity to visit campus and meet current students during an interview weekend for the University Honors program, which helped me get a feel for the general campus atmosphere and student satisfaction. It wasn’t until I was attending Virginia Tech that I began to care about rankings, first concerning campus food (we were #1 in the country!) and then concerning the College of Natural Resources’ high national ranking.

When searching for and applying to graduate school, rankings did not factor into my decision making at all. In my field of study, graduate tuition is typically paid for as part of a teaching or research assistantship, and most students also receive an additional stipend as part of their position. Thus, positions are very competitive, and the application process is more similar to a job application than to an undergraduate application. Advisors with funding will post open positions on job boards and conduct multiple rounds of interviews before accepting a student. Prospective students can also contact advisors whose labs they are interested in joining to inquire about opportunities, but this approach is less likely to result in an offer, as many professors will only accept new students when they have funding for them. It is not until a student has been offered and accepted a specific position with a specific advisor that they are directed to apply to the overall program or university. As such, I contacted several potential advisors and applied for positions, but only ever applied to two graduate schools (one for my M.S., one for my PhD). Similar to my undergraduate application process, I considered program reputations, but my decisions were primarily influenced by advisors’ research interests and open positions.

Classmates: I’m interested to hear how rankings influenced your application and decision process. Comment below and let me know!