The Virtues of Openness: Education, Science, and Scholarship in the Digital Age by Michael A. Peters and Peter Roberts (2012) will appeal to anyone interested in open movements with respect to academia. The book includes eight chapters (three of which were previously published), an introduction, postscript, and extensive references. The authors are both professors of education in New Zealand (Peters at the University of Waikato; Roberts at the University of Canterbury).
In addition to exploring the many aspects of openness (which makes defining it so difficult), the authors make an important point that bears remembering when we are tempted by binary conceptions such as open or not (Introduction, p.6):
All open systems have limits, and there are limits to openness– limits to “open” markets, to open societies, to open code.
It’s a theme the authors return to repeatedly, particularly in the context of the philosophy of education, also noting that these limits can serve positive functions.
Chapter 4, “Open Education and Open Knowledge Production” (p. 55-76) covers the serials crisis and open access with the greatest depth, but I learned the most from Chapter 2, “The Philosophy of Open Science” (p. 30-42), and Chapter 3, “Openness as an Educational Virtue” (p. 43-54). Chapter 2 begins by emphasizing the narratives of openness in the West and their relation to Enlightenment thought, in particular the ways in which openness is equated with freedom. The bulk of the chapter goes on to consider philosophies of openness from various thinkers. Of particular interest are the connections between thinkers leading up to the concept of open access for scholarly literature. Karl Popper (author of The Open Society and Its Enemies) was a strong influence on George Soros, whose Open Society Institute (now the Open Society Foundations) was the driving force behind the Budapest Open Access Initiative. Here the political conception of an open society contains both the market and science as primary institutions based on shared values of freedom and truth, though it’s worth noting how often these institutions are in conflict today. Indeed, the authors make this clear in their summary of Chapter 4 (p. 76):
In essence, the open knowledge economy provides a completely different model to the neoliberal knowledge economy and also challenges the underlying neoliberal ideas of ownership, authorship, human capital, and intellectual property rights as well as principles of the access, distribution, and creation of knowledge.
In chapter 3 (“Openness as an Educational Virtue”), a philosophy of openness in pedagogy focuses on the work of Brazilian educationist Paulo Freire. Openness, the authors contend, includes but is not limited to open-mindedness, and is contrasted with forms of closure such as “dogmatism, excessive certainty, and an unreflective rejection of either the old or the new” (p. 44). Throughout his career, Freire identified human characteristics of value in teaching and learning situations, such as “humility, the ability to listen, showing care and respect for those with whom we work in educational settings, tolerance, an inquiring and investigative frame of mind, and a willingness to take risks” (p. 49). For Freire, openness is a permanent orientation to life itself, recognizing that we are always unfinished beings.
The authors view the university emphasis on performance as a form of closure in Chapter 5 (“Scholarly Publishing and the Politics of Openness: Knowledge Production in Contemporary Universities”), stating that “it is performance, not knowledge, that counts” (p. 82, 83):
Performance, as measured by lists of “outputs,” becomes the accepted substitute for knowledge and is seen as translatable across individuals, departmental groupings, disciplines, and institutions.
And these outputs must be quantified (p. 84):
…research activity counts only insofar as it is measurable. Behind this trend lies a quest for certainty, a discomfort with that which is complex or messy, and an inability to deal with the immeasurable.
Although the ongoing revolution in scholarly communication also relies in part on measurables such as review and altmetric scores, it shows a willingness to deal with uncertainty and the immeasurable through open peer review and post-publication peer review. The authors identify an interesting casualty of the culture of performativity: time, or the lack of it (p. 87). One problem not addressed here is that being open is more time-consuming in the current publishing environment. Those who want to be open must make additional effort, whether it is seeking out an open access journal, archiving their manuscript, or organizing, describing, and providing access to their data. Clearly norms must change so that closure is not the easy, time-saving alternative. Time pressures also drive quantification in academic evaluation, since it is faster to look at scores than to read someone’s scholarship.
At times The Virtues of Openness shows considerable overlap in chapter topics; at other times the transitions between chapters are jarring, perhaps because some were previously published. As such, the book feels like a collection of chapters rather than a connected narrative. Also, a few of the figures and tables are either not particularly enlightening (“Applications of openness” on p. 67) or outdated (2001 scientific publishing market players, p. 58).
However, this volume is a solid resource for those interested in exploring the thinkers who have contributed to the philosophy of openness in a variety of disciplines. The authors do an admirable, mostly jargon-free job of introducing and clarifying different aspects of openness, and of emphasizing its limits.
Over the weekend I got around to watching The Internet’s Own Boy: The Story of Aaron Swartz, which saw wide release a couple of weeks ago (see the viewing options on the distributor’s site or watch on the Internet Archive; it’s CC-licensed).
It’s a fascinating documentary that should be required viewing for anyone interested in access to information. Swartz’s immense intelligence and idealism shine through, as does his love of libraries and the information contained in them. His precocity and deep understanding of the Internet resulted in numerous successes, some of which I wasn’t previously aware of.
Ultimately it was his willingness to put his name on the Guerilla Open Access Manifesto, followed by action presumably intended to realize it (though his intent remains unknown), that made him a target of prosecution. Watching this part of the documentary, the harshness of the law becomes clear, and one wonders what part of the law legitimizes prosecution for the purpose of making an example of someone.
While I don’t agree with the manifesto’s method, or even much of its language, Swartz was certainly right about open access, just as he was right about public access to court records and the potential harm of SOPA/PIPA. Even if all new peer-reviewed literature is openly available tomorrow, we are still left with the intractable fact that we have allowed centuries of scholarship to be enclosed, and libraries will be paying rent (for the fortunate few) for decades to come.
When this CC-licensed documentary can be taken down from YouTube, and when lawyers are preparing for the return of SOPA/PIPA, the contest between advocates of openness and the forces of enclosure is hardly over. Despite his tragic end, Swartz gave us a model of courage: insisting that our practices follow our ideals– in particular, that public knowledge should never be enclosed. In that sense, Aaron Swartz is still with us.
Registration has recently opened for the Open Knowledge MOOC, a course that introduces the concept of openness and covers open access, open science, and open education, among other open movements. Hosted on the OpenEdX platform by Stanford University, this is a semester-long course that runs from September 3 to December 12, 2014. The course material for Week 12, “Student Publishing: Lessons in Publishing, Peer Review, and Knowledge Sharing” was selected or developed by librarians at Virginia Tech, in collaboration with our partner library at the Cape Peninsula University of Technology in Cape Town, South Africa.
I’m a member of the team at the University Libraries that worked on the “Student Publishing” module, along with Anita Walz, Paul Hover, Jennifer Nardine, and Scott Pennington. A brief presentation describing our work, “Student Publishing: An Open, Global Learning Module” was made at the Dean’s Forum on Global Engagement in March 2014. The module includes readings, videos, assignments, and classroom activities (for the blended version offered by several universities around the world). If you take the course, we would love to hear feedback about ways to improve the module.
During his visit to Virginia Tech last October, John Willinsky told us about planning for the course, and suggested that we contribute to it. We chose Student Publishing for our module, planning to reach out to student journals on campus to strengthen ties to the library. Due to time constraints, that outreach is still in progress, but one potential outcome would be hosting through our e-journal publishing services. Student journals are challenged by frequent transitions in their editorial staff, with a resulting loss of information and expertise. Library hosting would ensure that proper transfer of administrative information happens, and librarians can also advise on indexing, copyright/licensing, and preservation.
The vagueness of the term “open” combined with a lack of critical examination leaves plenty of room for openwashing, and MOOCs are no exception. Given its subject, it is particularly important that the Open Knowledge course embody open practices rather than merely suggest them. This course is different from traditional MOOCs in its connectivist approach (see xMOOC vs. cMOOC), its Creative Commons Attribution Share-Alike (CC-BY-SA) licensing, its crowdsourced content, and its emphasis on the re-use of existing openly licensed educational resources. In addition, course modules will remain accessible afterward, unlike proprietary MOOCs. It’s as open as we could make it, so I hope you’ll give it a try.
Last week Tim Gowers wrote an extensive post on the cost of Elsevier journals that begins to create some transparency in this market. Much of the data so far is from UK universities, but cost data from U.S. universities (including other publishers) should be available soon from Ted Bergstrom’s Big Deal Contract Project.
Providing adequate funding for open access platforms and innovations is becoming an increasingly hot topic, and two excellent posts with different perspectives have recently appeared. Stuart Shieber’s Public Underwriting of Research and Open Access offers a convincing case for open access to research that reminded me of John Willinsky’s keynote address during Virginia Tech’s Open Access Week. Counting up the ways that research is subsidized results in a truly stunning number, and Shieber makes a solid argument for public funding. Cameron Neylon, on the other hand, notes that much of the innovation in scholarly communication comes from the for-profit sector, yet non-profit status is needed to retain control and prevent diverging interests. So how should we go about funding innovation in scholarly communication? Perhaps OA projects could benefit from socially responsible investing?
One innovation in need of funding is open peer review platforms like LIBRE, which just announced that it is in beta testing. While I like the diversity of opinion that open review makes possible, I think there still may be a role for anonymity, and I’m also skeptical of the invite-your-own-reviewers model. Although it has been around for a while, I only recently discovered a community-edited Google document of standalone peer review platforms, and was surprised by how many there are. I think it would be great if one day I could upload a paper to VTechWorks, have it openly reviewed, and then submit it in my tenure and promotion dossier as a peer-reviewed paper. Then evaluation would have to focus on article quality rather than journal prestige or impact factor.
Accounts of the publishing process appear so rarely that one from my own field of library and information science is worth mentioning. Catherine Pellegrino’s Walking the walk may be trickier than it first appears: An open access publishing story relates her assessment of publishing venues while feeling the stress of needing to publish. This OA-conscious assessment, and her negotiation to retain copyright, serves as a worthy model for librarians (and non-librarians).
The University Libraries at Virginia Tech is now supporting two innovative open access efforts, Knowledge Unlatched and PeerJ. Knowledge Unlatched enables open access for books in the humanities and social sciences, while PeerJ is an open access journal in the life sciences.
Open access journals are hardly new, but PeerJ is pioneering a new pricing model that dispenses with article processing charges (APCs) in the thousands of dollars. Instead, it charges for lifetime memberships in three tiers. The University Libraries is now automatically covering these fees for Virginia Tech authors. The fees we pay differ slightly from the listed prices, since payment occurs only upon article acceptance and there is a discount for purchasing memberships in bulk. Prices are radically lower than the APCs charged by other journals, and PeerJ has received positive reviews, especially for its fast peer review process. We hope our authors in the biological, medical, and health sciences will benefit from this arrangement.
The University Libraries is also a charter member of Knowledge Unlatched and provided support for its pilot collection of 28 open access monographs (at this writing 22 have been made available). PDFs of the books will be available (with no DRM) under a Creative Commons license. The project benefits all involved, and the Featured Authors section is particularly worth reading. Given the strain that scholarly monograph publishing has been under in recent years, Knowledge Unlatched and other open monograph initiatives have the potential to begin turning things around. While this support for KU does not provide direct aid to Virginia Tech authors, it does reduce the pressure on academic presses, and hopefully more books in the humanities and social sciences can be published.
Reclaiming Fair Use: How to Put Balance Back in Copyright by Patricia Aufderheide and Peter Jaszi was published by the University of Chicago Press in 2011. It’s a well-written history of fair use interpretation and an important corrective to over-cautiousness in asserting user rights. Fair use is a provision of U.S. copyright law that, broadly speaking, allows use of copyrighted works when the social benefit is greater than the owner’s loss. The law sets out four factors which are used to determine whether fair use can be employed: the nature of the use, the nature of the work used, the extent of the use, and its potential economic effect. But since there is no bright line or definitive calculation of the four factors (and other factors which may have bearing), the effect has been limiting (p. xi):
We saw that when people do not understand the law, when they are constantly afraid that they might get caught for referring to copyrighted culture– whether an image, or a phrase of a song, or a popular cartoon character– they can’t do their best work.
Aufderheide and Jaszi feel that the four factors (and checklists based on them) have been a hindrance (p. 183):
People love checklists, because they hope that the lists will do their fair-use reasoning for them. But checklists tend to be more trouble than help. Sometimes a checklist simply discourages fair use in situations where the user might have an adequate rationale not captured by the list. More often, checklists simply lead to further confusion. Focused on the four factors, they treat the factors as if they had a concreteness that they do not. Those four factors have been widely interpreted by judges over the years.
Instead they distill fair use evaluation into three questions (p. 24 and 135):
Was the use of copyrighted material for a different purpose, rather than just reuse for the original purpose? Was the amount of material taken appropriate to the purpose of the use? Was it reasonable within the field or discipline it was made in?
The first and third questions are especially important in the revitalization of fair use. While copyright has become “long and strong” in recent decades, fair use has made a comeback since the late 1990s to lend the law more balance. Fair use interpretations have been primarily strengthened in two ways: first through the concept of transformativeness (use for a different purpose than originally intended), and more recently through development of codes of practice for particular fields. Both are now major considerations by courts (p. 80). Aufderheide and Jaszi have been leaders in developing best practices for various communities, first with documentary filmmakers (a process related in Chapter 7) and most recently as contributors to initial work toward a code of practice for the visual arts (PDF).
Codes of best practice “represent a common understanding in a community of practice” (p. 120) and emphasize demonstrating good faith (e.g. through attribution). The codes developed thus far are in agreement on three areas of fair use: critique, illustration, and incidental capture. The codes are also balanced in the sense that the communities (e.g. documentary filmmakers) are often creators as well, so they must take into account how their own work might be used. Aufderheide and Jaszi emphasize that, like a muscle, fair use is strengthened by use– it is one arena in which behavior affects the law, not vice-versa. In addition to communities of practice, the law provides exceptions for certain kinds of use, such as the educational exemptions in Sections 110(1) and 110(2).
While the authors champion fair use, they are clear about the problems that remain. In the digital environment, many works are leased rather than owned, and contracts may include language limiting fair use rights. The Digital Millennium Copyright Act (DMCA) of 1998 made it illegal to circumvent digital encryption, which can make it impossible to exercise one’s fair use rights. Reliance on the courts to interpret fair use has its disadvantages, and one casualty has been music sampling. The interaction of three court cases has severely limited fair use for music (p. 90-93). Formal copyright registration entitles owners to statutory damages, and the potential maximum has a chilling effect (p. 32). The courts have also expanded secondary liability. The authors call for advocacy on DMCA reform as well as on orphan works.
Aufderheide and Jaszi are unexpectedly critical of free-culture and commons advocates. They indict free-culture activists for making copyright the villain (p. 48) and seeking alternatives elsewhere rather than acknowledging balancing effects of copyright law such as fair use (p. 54):
The commons rhetoric… celebrates a particular vision of the public domain as a space entirely free of intellectual property constraint, while either ignoring or slighting exemptions and balancing features that limit copyright owners’ monopoly control.
Yet the commons is growing steadily, and search engines now allow users to filter images by license. And in their discussion of the public domain (p. 141), the authors fail to mention the Creative Commons Zero (CC0) public domain dedication for intentionally placing works in the public domain. While commons advocates may have overlooked fair use, the unnecessary distinction between the two approaches is contradicted by the authors’ own work on a code of practice for OpenCourseWare, which relies on both open licensing and fair use.
The international environment for fair use is covered in Chapter 10. Most countries lack a fair use provision, but have a much lower risk of litigation and lack statutory damages for infringement. Because fair use is the exception rather than the rule, harmonization of copyright through treaties is a continuing threat to it.
Fair use is deliberately vague, and always a case-by-case decision. To Aufderheide and Jaszi, this is a feature, not a bug (p. 163):
Creators benefit from the fact that the copyright law does not exactly specify how to apply fair use…. Fair use is flexible; it is not uncertain or unreliable.
Reclaiming Fair Use features inset boxes throughout the text, “Fair Use: You Be The Judge” (with answers at the back) and “True Tales of Fair Use,” and has five useful appendices, including a template for a code of best practices and a section on myths and realities of fair use. While it contains more background than some readers may desire (they can go straight to Chapter 9, “How To Fair Use”), this book is a valuable perspective on fair use and always interesting and well-written.
More information about fair use, including codes of practice, can be found at the Center for Media & Social Impact at American University, which Aufderheide co-directs. In addition, Jaszi provided testimony on fair use to a House of Representatives subcommittee in January (his testimony begins at 39:00 in the video, and his written submission is available in PDF).
The Chronicle of Higher Education‘s Vitae site has a post today titled Should You Share Your Research on Academia.edu? Research networking sites may provide services that researchers value– I don’t know because I haven’t signed up for any of them– but they do not provide open access. In a recent post, Beyond Elsevier, I mentioned that Academia.edu has the only copy of this paper I was looking for. While it is readable on the screen, if you click the “Download” button, you are prompted to sign in. This is not an open access paper. Open access does not require signing in or downloading software, and it enables uses beyond reading. The Budapest Open Access Initiative states:
By “open access” to [peer-reviewed research literature], we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.
This paper is essentially being used as bait to sign up new users (if you want to do anything other than read a long scroll of small text). Personally, I would not want my work used as an enticement to attract new members to a for-profit site without a business model. We can predict that these sites will find a way to monetize personal information, which raises the question of whether this is a good example for researchers to set for graduate students and future scholars.
The marketing pitches of these sites should be taken with more than a few grains of salt. Given the many, many existing institutional and disciplinary repositories that are already providing full open access, their talk of “sharing” and “dissemination” is marketing Kool-Aid. They may not have paywalls, but they do have log-in walls, and those are a barrier for anyone who does not want to trade their privacy for access. Additionally, some of the services treated in a “gee whiz” manner in the Chronicle article, such as statistics on views and downloads, have been available in most repositories for years.
I hope that those wanting to take advantage of the networking capabilities on these sites will also post their work on the open web, preferably in an institutional or disciplinary repository. The private sector is again in the lead in providing services, though it should be remembered that the privatization of knowledge typically hasn’t turned out well (and remember, Mendeley is now owned by Elsevier). Eventually, non-market research networking options will appear and (I hope) disintermediate these private silos.
Last week there was a flurry of exchanges on copyright and author manuscripts, unintentionally set off by Kevin Smith’s clarifying post Setting the record straight about Elsevier. I had thought that my right to archive, given in the publishing contracts I have signed, also allowed me (implicitly) to assign whatever license I liked to my own versions. Smith (and others) make it clear that a copyright transfer applies to all article versions. So you can archive your article if permitted, but you should attach the publisher’s copyright statement, and you are not free to attach a Creative Commons license. I’m currently in the process of correcting this for my archived articles, to which I erroneously assigned a CC-BY license. And I have updated my CC-BY recommendation in the previous post on the Elsevier fallout to make it clear that this can’t be done if the copyright has been transferred.
These posts reinforce the importance of retaining copyright whenever possible. But the fact remains that this is not always easy to do. I don’t find the suggestions of some, such as “never sign over copyright” or “just put it in the public domain,” very helpful. In my niche of information science, there are very few OA journals, and most journals in the field are owned by the large multinational conglomerates. While I have transferred copyright in all of my peer-reviewed articles, I have archived all of the post-prints. In the one case in which I attempted to retain copyright, the journal simply refused (and my co-authors did not seem particularly interested in putting up a fight). Placing an article in the public domain, it seems to me, would likely result in journal refusal (if I remember correctly, on most copyright transfer forms this option is only available to federal government employees). Additionally, since the public domain does not require attribution, most authors would not want to explicitly give that up.
Tenure-track faculty are under pressure to publish, and copyright transfer occurs at the end of a very lengthy process. Not many authors will be willing to start this process over if they can’t come to agreement with the journal about copyright. If authors are doing their best to make open the default, then they shouldn’t be made to feel bad about copyright transfer, particularly in cases where they can provide access through archiving. And if they are willing to negotiate for that right where it is not given, so much the better. But sometimes we have co-authors who are more interested in publication than copyright or archiving. So it’s more important than ever to address these issues in advance: to identify an OA journal (or one that explicitly allows archiving), and to ensure that co-authors are in agreement well before it is time to sign a publication agreement. Until more OA journals are developed in more fields, that is the best we can ask for.
Elsevier can send takedown notices since it owns the articles in its journals. It owns the articles because authors who publish in Elsevier journals sign away their copyright before publication. The license agreement allows for archiving of the author’s version, but not the journal’s published PDF. Authors should avoid posting the published version of their articles as a general rule, though a few publishers do allow it.
Here are my suggestions for avoiding this problem:
If you can’t publish in an open access journal, check a journal’s archiving policy in advance by searching it in SHERPA/RoMEO.
Read the fine print regardless of where you are publishing. This is not like a software license where everyone just clicks “I Agree.” This is your work, so read licenses carefully. Copyright transfer gives complete ownership to the publisher, and your rights are limited to those listed in the license agreement.
Archive your post-print if possible, since it is your final version incorporating changes from the peer review process. If not allowed, post the pre-print. Archive in a repository where your article is immediately accessible, such as VTechWorks. Research networking sites require membership (Academia.edu) and/or software download (Mendeley) that are barriers to immediate access.
Make your archived version easy to read and reuse. If the manuscript is double-spaced, convert it to single spacing, and insert tables and figures in the appropriate places. Consider archiving your data as well so your work can be replicated and incorporated into larger studies. Attach a Creative Commons license to make it clear you are explicitly allowing reuse. [Update: if you transferred copyright you likely cannot assign a CC license; see discussions by Kevin Smith, Michael Carroll, and Charles Oppenheim.]
If you have co-authors, come to agreement early on publishing venues and archiving so you don’t get locked into a result you don’t like. Remember that typically one author signs for all authors, so that person must understand group wishes.
Above I briefly touch upon the fact that research networking sites do not provide open access, which is an aspect of this controversy I haven’t seen mentioned. By coincidence, at the time this became news I was searching for articles about DSpace and linked data and I found this article on Academia.edu. If you take a look, you’ll see that this article isn’t downloadable or printable without becoming a member of Academia.edu. All you can do is try to read the small print. Which, in my case, was enough to make me realize that I didn’t need it. But what if I did? This article isn’t available anywhere else.
Academia.edu poured gasoline on the fire by taking such a combative (and calculated) attitude toward Elsevier in its own notice to users, linking to the Cost of Knowledge boycott and extolling its own support for open access (“Academia.edu is committed to enabling the transition to a world where there is open access to academic literature. Elsevier takes a different view…”). The e-mail signature of Richard Price, the CEO of Academia.edu, says “The goal of Academia.edu is to get every science PDF ever written on the internet, accessible for free.” I’m sure that would be good for Academia.edu, which is a for-profit business with an absurd domain name. Your participation on research networking sites will be monetized one way or another. If your article is available only on a research networking site, like the author above, do you want your work being used to attract members to a for-profit endeavor? Pro-open access statements by such companies should be considered with healthy skepticism, and in some cases they are just plain openwashing.
Most importantly, Academia.edu, ResearchGate, Mendeley (now owned by Elsevier) and others do not provide open access. Sign-up should not be required for access. Software download, in the case of Mendeley, should not be required for access. These services do not meet the definition of open access established by the Budapest Open Access Initiative:
By “open access” to [peer-reviewed research literature], we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.
The point of this is not to be rigidly ideological for its own sake. It’s important to know what the term “open access” really means, otherwise it will get co-opted for private uses. If you choose to use a research networking service, please make sure you also provide a copy of your article to an institutional or disciplinary repository where it can be found and downloaded on the open internet.
One participant in our faculty and graduate student panels during Open Access Week at Virginia Tech was Josh Nicholson, founder of a new open access journal, The Winnower. Josh is a PhD candidate at Virginia Tech studying the role of the karyotype in cancer initiation and progression in the lab of Dr. Daniela Cimini. The Winnower will serve the sciences, with sections for different disciplines as well as a science and society section. The new journal will launch in January or February, and is currently looking for beta testers. Some buzz has already been created through a post on the AAAS blog and a Q&A, and The Winnower is active on Twitter and Facebook.
Our e-mail interview occurred over several weeks. The questions and answers below have not been edited, except to correct a few obvious typos and add an occasional link. I have also grouped similar topics together, so the questions are no longer in their original order.
How did The Winnower come about?
Ever since I began publishing science articles I have asked: does the publication system make sense? The short answer has always been: no. I think most scientists who have published an article would agree, but they are often too involved in playing the so-called “tenure games” to do anything about it. Well, I don’t have to worry about tenure yet (if ever), so I can focus on the problem at hand and try to actually do something about it.
Why the name?
A winnower is a tool used to separate the good from the bad. This is a main objective of The Winnower: to separate good research from flawed research based on open post-publication review.
Will the journal use a Creative Commons license or allow authors to choose?
Content published with The Winnower will be licensed under a CC BY license.
Will the journal be able to accommodate data as well?
The journal will accommodate data, but data should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in the absence of theory is hard to interpret and thus may add undue noise to the site.
Will there be any screening process before an article appears?
No, articles can be posted on The Winnower immediately. This should not be taken as an endorsement that they are correct but rather a signal that they need to be reviewed, much like the “preprint” system. Of course, articles that have been reviewed will be easily distinguishable from those that have not. The site is designed to encourage reviews of papers; indeed, it is why we are called The Winnower: to separate the bad from the good. To limit possible spamming of the system, as well as to sustain The Winnower, there will be a charge of $100 per publication.
How will you accommodate the need for fast review in an open peer review system?
The Winnower will strictly utilize open review. This means that all publications will be open to review and all reviews will be open to read. Publication in The Winnower will occur immediately after submission, and reviews will be open for variable amounts of time so that authors can make edits based on the reviews. It should be noted that papers will always be open for review, so that a paper can accumulate reviews throughout its lifetime. Reviews can be solicited from peers upon submission, and papers will also be reviewed by The Winnower community we hope to build. Based on the system we are building, we believe the number of reviews should reflect the number of times the work is read.
At what point can an author say that a paper has been peer-reviewed?
An author can say a paper has been peer-reviewed as soon as it receives a review. But we hesitate to say that a paper has passed peer review, because doing so causes some problems. Indeed, as you may be aware, all work published now has “passed” peer review, but that has done nothing to limit the high rates of irreproducibility. In fact, it may be a cause of them. We want to change the conversation from whether a paper has “passed” peer review to what percent confidence scientists have in it. To accomplish this we will be implementing semi-structured reviews (i.e., turning reviews into a measurable quantity).
How will reviews be a “measurable quantity”?
Much like the reviews performed by the National Institutes of Health, scoring will be implemented for different criteria. Obviously there will be no way to score free-form reviews, but various questions can be assigned a numerical score. PLOS Labs is working on establishing structured reviews, and we have talked with them about this. We think it would be great if there were an industry standard for structured reviews, but until then we will implement the best system that we can think of.
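To make the idea of reviews as a “measurable quantity” concrete, here is a minimal sketch of how structured review scores might be aggregated into a single confidence figure for a paper. The criteria, weights, and 1–5 scale below are illustrative assumptions on my part, not The Winnower’s actual rubric:

```python
# Hypothetical sketch: turning structured reviews into a measurable quantity.
# The criteria and weights are invented for illustration only.

CRITERIA_WEIGHTS = {
    "methods_sound": 0.4,
    "conclusions_supported": 0.4,
    "reporting_complete": 0.2,
}

def review_score(review: dict) -> float:
    """Weighted average of one review's 1-5 scores, mapped onto 0-100."""
    total = sum(CRITERIA_WEIGHTS[c] * review[c] for c in CRITERIA_WEIGHTS)
    return (total - 1) / 4 * 100  # 1 -> 0, 5 -> 100

def paper_confidence(reviews: list) -> float:
    """Mean score across all reviews of a paper."""
    return sum(review_score(r) for r in reviews) / len(reviews)

reviews = [
    {"methods_sound": 4, "conclusions_supported": 3, "reporting_complete": 5},
    {"methods_sound": 5, "conclusions_supported": 4, "reporting_complete": 4},
]
print(round(paper_confidence(reviews), 1))  # → 77.5
```

A scheme like this would let readers compare papers on a continuous confidence scale rather than a binary “passed/failed peer review,” which is the shift Josh describes above.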
What do you mean when you say that peer review can be a cause of irreproducibility?
Peer review, as it stands now, is more or less a pass/fail system. So, if you design 4 experiments to test a hypothesis and only 3 confirm it, you are likely to leave out the 1 experiment that did not fit your hypothesis in order to pass peer review. The problem is that ultimately you can’t hide from nature; she will reveal her truth one way or another. If there is no system to pass or fail, and you wish your paper to stand the test of time, you will include all results, even those that contradict your hypothesis. Moreover, editors are literally selecting for simple studies, but very often studies are not simple and results are not 100% clear. If you can’t publish your work because it is honest but poses some questions, then eventually you will have to mold your work to what an editor wants and not what the data is telling you. There is a significant correlation between impact factor and misconduct, and it is my opinion that much of this stems from researchers bending the truth, even if ever so slightly, to get into these career-advancing publications.
How can you ensure that each paper is reviewed, or receives enough reviews?
When submitting their research, authors will be encouraged to invite reviewers directly to review their paper. Some may argue this will allow authors to invite their friends and that the reviews will be biased. We think the transparency of reviews will limit this from happening. In addition to authors driving reviews to the site, each article will display a prominent “write a review” button.
Isn’t bias often hidden? For example, if a submitter invites friends to review, wouldn’t that relationship be invisible to readers, and reviewers could go easy on criticism and exaggerate praise?
This is certainly a possible problem that could arise, but it is not anything new with our system. Currently, scientists are allowed to suggest those who should and should not review their papers. Indeed, you heard this blatantly revealed during Open Access Week by a researcher [Note: Josh is referring to the faculty panel during which Dr. Good said some journals prompt authors to suggest reviewers]. Arguably there is an editor to limit any bias, but the editor themselves could be biased one way or another. While The Winnower won’t eliminate bias (we are humans, after all), the content of the reviews can be evaluated by all because they will be readily accessible. [Note: reviewers could list competing interests in the template suggested on The Winnower's blog.]
You recently wrote a blog post “Sexism in Science” that cites an article advocating abandoning secrecy. But other research concludes that double-blind review is best, and since even the article you cite mentions other studies in which female representation is better when gender is unknown, wouldn’t double-blind review do a better job of eliminating sexism?
Double-blind review is indeed better than single-blind review in regards to eliminating sexism in science, but this does not mean that it is the best. As far as I am aware, there has been no test between open review and double-blind review. Any instances of sexism that do occur in open review can be addressed and fixed because they are exposed, unlike in closed review. In the Sexism in Science post I discuss a few blatant cases of sexism in science. In the end many have been remedied because of the open dialogue that occurs on the internet.
Does open peer review mean that all authors and reviewers must reveal their real names? How will you ensure that reviewers are using their real names?
This is not easy, but we think that with the system we are building, reviewers will want to use their real names. Reviews will be assigned DOIs, and over time we hope to put the reviews on the same level as the research. Indeed, I can imagine researchers who specialize in reviewing and are rewarded for doing so. Full-time Winnowers, if you will. But regardless of whether a reviewer uses their real name, the transparency of reviews will discourage personal or inappropriate reviews. It is the serious criticisms/reviews that will be difficult for authors to respond to. I strongly believe that if you’re scared of open peer review, then we should be scared of your results.
Do you plan to use altmetrics on the site?
Yes, we will use various metrics on the site, including altmetrics. We want to shift the focus from the journal to the article itself and we think employing various article-level metrics is the best way to do this.
Have you decided on an altmetrics service and will some revenue go toward that?
Yes, we will be using Altmetric and yes some of the revenue will indeed go towards that.
At what point does payment occur, and are you concerned with the possible perception that this is pay-to-publish?
Payment occurs as soon as you post your paper online. I am not overly concerned with the perception that this is pay-to-publish, because it is. What makes The Winnower different is the price we charge. Our price is much, much lower than what other journals charge, and we are clear as to what its use will be: the sustainability and growth of the website. arXiv, a site we are very much modeled after, does not charge anything for its preprint service, but I would argue its sustainability based on grants is questionable. We believe that authors should buy into this system, and we think that the price we will charge is more than fair. Ultimately, if a critical mass is reached on The Winnower and other revenue sources can be generated, then we would love to make publishing free, but at this moment it is not possible.
From what funds do you think most scientists will pay the $100 fee?
I believe that most academic scientists will pay the $100 fee with grant money. If they do not currently have grant money the fees could theoretically be paid for by departmental funds or even personal funds.
Is The Winnower a for-profit or non-profit enterprise, and are you registered as such?
The Winnower is a for-profit limited liability company.
Is there a preservation plan for the content in case the journal does not continue?
Is it possible for an author (or journal staff) to withdraw an article?
Yes, it is possible to withdraw an article and it is also possible for us to retract the article if necessary.
Since many scientists do need to play “tenure games,” wouldn’t The Winnower’s lack of indexing, impact factor, etc. serve as a disincentive to submit or review?
Yes, this is certainly an obstacle The Winnower will have to face, but it is not only an obstacle for The Winnower; rather, it is an obstacle for the entire scientific community. We think we need to get away from judging scientists based upon impact factor or other measures of prestige, and we are not alone. The San Francisco Declaration on Research Assessment (SF DORA), which has been signed by nearly 10,000 researchers and publishers in less than a year, calls for new ways to evaluate researchers. As the community moves away from journal-level metrics and toward article-level metrics, The Winnower should be well positioned to thrive. Indeed, we will utilize many article-level metrics as well as information from the reviews themselves.
With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won’t this be a significant disincentive to submit?
This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews, and ultimately publish the paper somewhere else after having “passed” peer review. If scientists prefer that system, then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice, neat stories that no one will criticize. This is silly, though, because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed, anonymous system to an open one will be hard for many scientists, but I believe that it is the right thing to do if we care about the truth.
Is there anything else you would like to add?
The Winnower will also feature two sections called “The Grain” and “The Chaff.” The Grain will consist of short essays by authors of papers that have received 1,000 citations or have passed a specific Altmetric score. In these essays authors will describe the work and the story behind it (i.e., was it initially rejected, was it funded, where did the idea come from, etc.). They will be very similar to the former Citation Classics series run by Dr. Eugene Garfield. Indeed, Dr. Garfield has expressed much enthusiasm for The Winnower pursuing this. In parallel, we will be launching a section called The Chaff that highlights retracted papers. These pieces will be written by the authors of retracted papers in order to really find out why the studies failed or what led the authors to fabricate data, etc. We want to position papers published in The Chaff in a non-accusatory manner so that we may learn from them. The Chaff will not be a forum to castigate authors of retracted papers.