Few areas of scholarly publishing are undergoing the kind of examination and change that peer review is. Healthy debates continue on different models of peer review, incentives for peer reviewers, and various shades of open peer review, among many other issues. Recently, the second annual Peer Review Week was held, with several webinars available to view.
Since peer review is currently such a dynamic topic, the University Libraries and the Department of Communication are especially pleased to host a talk about peer review in science by Dr. Malte Elson of Ruhr University Bochum. Dr. Elson is a behavioral psychologist with a strong interest in meta-science issues. He has created several innovative outreach projects related to open science, including FlexibleMeasures.com, a site that aggregates flexible and unstandardized uses of outcome measures in research, and JournalReviewer.org (in collaboration with Dr. James Ivory in Virginia Tech’s Department of Communication), a site that aggregates information about journal peer review processes. He is also a co-founder of the Society for Improvement of Psychological Science, which held its first annual conference in Charlottesville in June. Details and a description of his talk, which is open to the public, are below. Please join us! (Faculty who would like NLI credit should register.)
Wednesday, October 12, 2016, 4:00 pm
Newman Library 207A
Is Peer Review a Good Quality Management System for Science?
Through peer review, the gold standard of quality assessment in scientific publishing, peers have reciprocal influence on academic career trajectories and on the production and dissemination of knowledge. Considering its importance, it is sobering to realize how little is known about peer review’s effectiveness. Beyond its wide use as a cachet of credibility, there is little precision in descriptions of its aims and purpose, or of how they can best be achieved.
Conversely, what we do know about peer review suggests that it does not work well: across disciplines, there is little agreement between reviewers on the quality of submissions. Theoretical fallacies and grievous methodological issues in submissions frequently go unidentified. Further, characteristics other than scientific merit can increase the chance of a recommendation to publish, e.g., conformity of results to popular paradigms, or statistical significance.
This talk proposes an empirical approach to peer review, aimed at making evaluation procedures in scientific publishing evidence-based. It will outline ideas for a programmatic study of all parties (authors, reviewers, editors) and materials (manuscripts, evaluations, review systems) involved to ensure that peer review becomes a fair process, rooted in science, to assess and improve research quality.