Library Support for New Open Access Business Models

The University Libraries at Virginia Tech is now supporting two innovative open access efforts, Knowledge Unlatched and PeerJ. Knowledge Unlatched enables open access for books in the humanities and social sciences, while PeerJ is an open access journal in the life sciences.

Open access journals are hardly new, but PeerJ is pioneering a new pricing model that dispenses with article processing charges (APCs) in the thousands of dollars. Instead, it charges for lifetime memberships in three tiers. The University Libraries is now automatically covering these fees for Virginia Tech authors. The fees we pay differ slightly from the listed membership prices, since payment occurs only upon article acceptance and there is a discount for purchasing memberships in bulk. Prices are radically lower than the APCs charged by other journals, and PeerJ has received positive reviews, especially for its fast peer review process. We hope our authors in the biological, medical, and health sciences will benefit from this arrangement.

The University Libraries is also a charter member of Knowledge Unlatched and provided support for its pilot collection of 28 open access monographs (at this writing 22 have been made available). PDFs of the books will be available (with no DRM) under a Creative Commons license. The project benefits all involved, and the Featured Authors section is particularly worth reading. Given the strain that scholarly monograph publishing has been under in recent years, Knowledge Unlatched and other open monograph initiatives have the potential to begin turning things around. While this support for KU does not provide direct aid to Virginia Tech authors, it does reduce the pressure on academic presses, which we hope will enable more books in the humanities and social sciences to be published.


Book Review: Reclaiming Fair Use

Reclaiming Fair Use: How to Put Balance Back in Copyright by Patricia Aufderheide and Peter Jaszi was published by the University of Chicago Press in 2011. It’s a well-written history of fair use interpretation and an important corrective to over-cautiousness in asserting user rights. Fair use is a provision of U.S. copyright law that, broadly speaking, allows use of copyrighted works when the social benefit is greater than the owner’s loss. The law sets out four factors that are used to determine whether a use is fair: the nature of the use, the nature of the work used, the extent of the use, and its potential economic effect. But since there is no bright line or definitive calculation from the four factors (and other considerations may have bearing), the effect has been limiting (p. xi):

We saw that when people do not understand the law, when they are constantly afraid that they might get caught for referring to copyrighted culture – whether an image, or a phrase of a song, or a popular cartoon character – they can’t do their best work.

Aufderheide and Jaszi feel that the four factors (and checklists based on them) have been a hindrance (p. 183):

People love checklists, because they hope that the lists will do their fair-use reasoning for them. But checklists tend to be more trouble than help. Sometimes a checklist simply discourages fair use in situations where the user might have an adequate rationale not captured by the list. More often, checklists simply lead to further confusion. Focused on the four factors, they treat the factors as if they had a concreteness that they do not. Those four factors have been widely interpreted by judges over the years.

Instead, they distill fair use evaluation into three questions (pp. 24 and 135):

Was the use of copyrighted material for a different purpose, rather than just reuse for the original purpose? Was the amount of material taken appropriate to the purpose of the use? Was it reasonable within the field or discipline it was made in?

The first and third questions are especially important in the revitalization of fair use. While copyright has become “long and strong” in recent decades, fair use has made a comeback since the late 1990s to lend the law more balance. Fair use interpretations have been strengthened primarily in two ways: first through the concept of transformativeness (use for a different purpose than originally intended), and more recently through the development of codes of practice for particular fields. Both are now major considerations for courts (p. 80). Aufderheide and Jaszi have been leaders in developing best practices for various communities, first with documentary filmmakers (a process related in Chapter 7) and most recently as contributors to initial work toward a code of practice for the visual arts (PDF).

Codes of best practice “represent a common understanding in a community of practice” (p. 120) and emphasize demonstrating good faith (e.g. through attribution). The codes developed thus far are in agreement on three areas of fair use: critique, illustration, and incidental capture. The codes are also balanced in the sense that the communities (e.g. documentary filmmakers) are often creators as well, so they must take into account how their own work might be used. Aufderheide and Jaszi emphasize that, like a muscle, fair use is strengthened by use; it is one arena in which behavior affects the law, not vice versa. In addition to communities of practice, the law provides exceptions for certain kinds of use, such as the educational exemptions in Sections 110(1) and 110(2).

While the authors champion fair use, they are clear about the problems that remain. In the digital environment, many works are leased rather than owned, and contracts may include language limiting fair use rights. The Digital Millennium Copyright Act (DMCA) of 1998 made it illegal to circumvent digital encryption, so exercising fair use rights on encrypted works becomes impossible. Reliance on the courts to interpret fair use has its disadvantages, and one casualty has been music sampling: the interaction of three court cases has severely limited fair use for music (pp. 90-93). Formal copyright registration entitles owners to statutory damages, and the potential maximum has a chilling effect (p. 32). The courts have also expanded secondary liability. The authors call for advocacy on DMCA reform as well as on orphan works.

Aufderheide and Jaszi are unexpectedly critical of free-culture and commons advocates. They indict free-culture activists for making copyright the villain (p. 48) and seeking alternatives elsewhere rather than acknowledging balancing effects of copyright law such as fair use (p. 54):

The commons rhetoric… celebrates a particular vision of the public domain as a space entirely free of intellectual property constraint, while either ignoring or slighting exemptions and balancing features that limit copyright owners’ monopoly control.

Yet the commons is growing steadily, and search engines now allow users to filter images by license. And in their discussion of the public domain (p. 141), the authors fail to mention Creative Commons Zero (CC0), a tool for intentionally dedicating works to the public domain. While commons advocates may have overlooked fair use, the unnecessary distinction between the two approaches is contradicted by the authors’ own work on a code of practice for OpenCourseWare, which relies on both open licensing and fair use.

The international environment for fair use is covered in Chapter 10. Most countries lack a fair use provision, but have a much lower risk of litigation and lack statutory damages for infringement. Because fair use is the exception rather than the rule, harmonization of copyright through treaties is a continuing threat to it.

Fair use is deliberately vague, and always a case-by-case decision. To Aufderheide and Jaszi, this is a feature, not a bug (p. 163):

Creators benefit from the fact that the copyright law does not exactly specify how to apply fair use…. Fair use is flexible; it is not uncertain or unreliable.

Reclaiming Fair Use features inset boxes throughout the text, “Fair Use: You Be The Judge” (with answers at the back) and “True Tales of Fair Use,” and has five useful appendices, including a template for a code of best practices and a section on myths and realities of fair use. While it contains more background than some readers may desire (they can go straight to Chapter 9, “How To Fair Use”), the book offers a valuable perspective on fair use and is consistently engaging.

More information about fair use, including codes of practice, can be found at the Center for Media & Social Impact at American University, which Aufderheide co-directs. In addition, Jaszi provided testimony on fair use to a House of Representatives subcommittee in January (his testimony begins at 39:00 in the video, and his written submission is available in PDF).

Reclaiming Fair Use is available as an e-book through the University Libraries.


Research Networking Sites and Open Access

The Chronicle of Higher Education‘s Vitae site has a post today titled Should You Share Your Research on Academia.edu? Research networking sites may provide services that researchers value (I don’t know; I haven’t signed up for any of them), but they do not provide open access. In a recent post, Beyond Elsevier, I mentioned that Academia.edu has the only copy of a paper I was looking for. While the paper is readable on screen, clicking the “Download” button prompts you to sign in. This is not an open access paper. Open access does not require signing in or downloading software, and it enables uses beyond reading. The Budapest Open Access Initiative states:

By “open access” to [peer-reviewed research literature], we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.

This paper is essentially being used as bait to sign up new users (if you want to do anything other than read through a long scroll of small text). Personally, I would not want my work used as an enticement to attract new members to a for-profit site without a business model. We can predict that these sites will find a way to monetize personal information, which raises the question of whether this is a good example for researchers to set for graduate students and future scholars.

The marketing pitches of these sites should be taken with more than a few grains of salt. Given the many, many existing institutional and disciplinary repositories that are already providing full open access, their talk of “sharing” and “dissemination” is marketing Kool-Aid. They may not have paywalls, but they do have log-in walls, and those are a barrier for anyone who does not want to trade their privacy for access. Additionally, some of the services treated in a “gee whiz” manner in the Chronicle article, such as statistics on views and downloads, have been available in most repositories for years.

Academia.edu is hardly the only research networking site (since none of its competitors are mentioned, was there a quid pro quo between the Chronicle and Academia.edu?). If colleagues in your field are members of different “silos” such as Mendeley or ResearchGate, do you need to join all of them, with their various terms of use and privacy policies? The existence of these silos undermines their claims of “sharing” and “dissemination”, activities that they are clearly not providing at the network level.

I hope that those wanting to take advantage of the networking capabilities on these sites will also post their work on the open web, preferably in an institutional or disciplinary repository. The private sector is again in the lead in providing services, though the privatization of knowledge typically hasn’t turned out well (remember that Mendeley is now owned by Elsevier). Eventually, non-market research networking options will appear and (I hope) disintermediate these private silos.


Copyright and Article Archiving

Last week there was a flurry of exchanges on copyright and author manuscripts, unintentionally set off by Kevin Smith’s clarifying post Setting the record straight about Elsevier. I had thought that my right to archive, given in the publishing contracts I have signed, also allowed me (implicitly) to assign whatever license I liked to my own versions. Smith and others make it clear that a copyright transfer applies to all article versions. So you can archive your article if permitted, but you should attach the publisher’s copyright statement, and you are not free to attach a Creative Commons license. I’m currently in the process of correcting this for my archived articles, to which I erroneously assigned a CC-BY license. And I have updated my CC-BY recommendation in the previous post on the Elsevier fallout to make it clear that this cannot be done if copyright has been transferred.

Smith followed up with two posts (It’s the content, not the version! and So what about self-archiving?), Nancy Sims posted, and Michael Carroll addressed this issue back in 2006. All are worth reading.

These posts reinforce the importance of retaining copyright whenever possible. But the fact remains that this is not always easy to do. I don’t find the suggestions of some to “never sign over copyright” or “just put it in the public domain” very helpful. In my niche of information science there are very few OA journals, and most journals are owned by the large multinational conglomerates. While I have transferred copyright in all of my peer-reviewed articles, I have archived all of the post-prints. In the one case in which I attempted to retain copyright, the journal simply refused (and my co-authors did not seem particularly interested in putting up a fight). Placing an article in the public domain, it seems to me, would likely result in the journal refusing it (if I remember correctly, on most copyright transfer forms this option is available only to federal government employees). Additionally, since the public domain does not require attribution, most authors would not want to give that up explicitly.

Tenure-track faculty are under pressure to publish, and copyright transfer occurs at the end of a very lengthy process. Not many authors will be willing to start this process over if they can’t come to agreement with the journal about copyright. If authors are doing their best to make open the default, they shouldn’t be made to feel bad about copyright transfer, particularly in cases where they can provide access through archiving. And if they are willing to negotiate for that right where it is not given, so much the better. But sometimes we have co-authors who are more interested in publication than copyright or archiving. So it’s more important than ever to address these issues in advance: to identify an OA journal (or one that explicitly allows archiving), and to ensure that co-authors are in agreement well before it is time to sign a publication agreement. Until more OA journals are developed in more fields, that is the best we can ask for.


Beyond Elsevier

Elsevier has been sending takedown notices to any site hosting the final PDF version of its journal articles. The takedowns first became apparent on Academia.edu. Mike Taylor was one of the first to blog about it, takedown recipient Guy Leonard blogged about it, and there’s a link roundup on Confessions of a Science Librarian. Later it became clear that the takedown notices were more wide-ranging, going to hosting services like WordPress as well as universities. The blowback was enough to prompt a response from Elsevier.

Elsevier can send takedown notices since it owns the articles in its journals. It owns the articles because authors who publish in Elsevier journals sign away their copyright before publication. The license agreement allows for archiving of the author’s version, but not the journal’s published PDF. Authors should avoid posting the published version of their articles as a general rule, though a few publishers do allow it.

Here are my suggestions for avoiding this problem:

  • Publish in an open access journal (see the Directory of Open Access Journals for a list by discipline). Many require only a license to publish, rather than a copyright transfer, and use a Creative Commons license.
  • If you can’t publish in an open access journal, check a journal’s archiving policy in advance by searching it in SHERPA/RoMEO.
  • Read the fine print regardless of where you are publishing. This is not like a software license where everyone just clicks “I Agree.” This is your work, so read licenses carefully. Copyright transfer gives complete ownership to the publisher, and your rights are limited to those listed in the license agreement.
  • Archive your post-print if possible, since it is your final version incorporating changes from the peer review process. If not allowed, post the pre-print. Archive in a repository where your article is immediately accessible, such as VTechWorks. Research networking sites require membership (Academia.edu) and/or software download (Mendeley) that are barriers to immediate access.
  • Make your archived version easy to read and reuse. If double spaced, revert to single spaced, and insert tables and figures in the appropriate places. Consider archiving your data as well so your work can be replicated and incorporated into larger studies. Attach a Creative Commons license to make it clear you are explicitly allowing reuse. [Update: if you transferred copyright you likely cannot assign a CC license- see discussions by Kevin Smith, Michael Carroll, and Charles Oppenheim.]
  • If you have co-authors, come to agreement early on publishing venues and archiving so you don’t get locked into a result you don’t like. Remember that typically one author signs for all authors, so that person must understand group wishes.
  • Learn about and download the author addendum, which allows you to reserve rights, or use the addendum engine.

Above I briefly touch upon the fact that research networking sites do not provide open access, which is an aspect of this controversy I haven’t seen mentioned. By coincidence, at the time this became news I was searching for articles about DSpace and linked data and I found this article on Academia.edu. If you take a look, you’ll see that this article isn’t downloadable or printable without becoming a member of Academia.edu. All you can do is try to read the small print. Which, in my case, was enough to make me realize that I didn’t need it. But what if I did? This article isn’t available anywhere else.

Academia.edu added gasoline to the fire by taking such a combative (and calculated) attitude toward Elsevier in its own notice to users, linking to the Cost of Knowledge boycott and extolling its own support for open access (“Academia.edu is committed to enabling the transition to a world where there is open access to academic literature. Elsevier takes a different view…”). The e-mail signature of Richard Price, the CEO of Academia.edu, says “The goal of Academia.edu is to get every science PDF ever written on the internet, accessible for free.” I’m sure that would be good for Academia.edu, which is a for-profit business with an absurd domain name. Your participation on research networking sites will be monetized one way or another. If your article is available only on a research networking site, like the author above, do you want your work being used to attract members to a for-profit endeavor? Pro-open access statements by such companies should be considered with healthy skepticism, and in some cases they are just plain openwashing.

Most importantly, Academia.edu, ResearchGate, Mendeley (now owned by Elsevier) and others do not provide open access. Sign-up should not be required for access. Software download, in the case of Mendeley, should not be required for access. These services do not meet the definition of open access established by the Budapest Open Access Initiative:

By “open access” to [peer-reviewed research literature], we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.

The point of this is not to be rigidly ideological for its own sake. It’s important to know what the term “open access” really means; otherwise it will be co-opted for private uses. If you choose to use a research networking service, please make sure you also provide a copy of your article to an institutional or disciplinary repository where it can be found and downloaded on the open internet.


The Winnower: An Interview with Josh Nicholson

One participant in our faculty and graduate student panels during Open Access Week at Virginia Tech was Josh Nicholson, founder of a new open access journal, The Winnower. Josh is a PhD candidate at Virginia Tech studying the role of the karyotype in cancer initiation and progression in the lab of Dr. Daniela Cimini. The Winnower will serve the sciences, with sections for different disciplines as well as a science and society section. The new journal will launch in January or February, and is currently looking for beta testers. Some buzz has already been created through a post on the AAAS blog and a Q&A, and The Winnower is active on Twitter and Facebook.


Our e-mail interview occurred over several weeks. The questions and answers below have not been edited, except to add an occasional link. I have also grouped similar topics together, so the questions are no longer in their original order.

How did The Winnower come about?

Ever since I began publishing science articles I have asked: does the publication system make sense? The short answer has always been: no. I think most scientists who have published an article would agree with this but they are often too involved in playing the so-called “tenure games” to do anything about it. Well I don’t have to worry about tenure yet (if ever) so I can focus on the problem at hand and try and actually do something about it.

Why the name?

The Winnower is a tool used to separate the good from the bad. This is a main objective of The Winnower, to identify good pieces of research from flawed pieces based on open post-publication review.

Will the journal use a Creative Commons license or allow authors to choose?

Content published with The Winnower will be licensed under a CC BY license.

Will the journal be able to accommodate data as well?

The journal will accommodate data, but it should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in absence of theory is hard to interpret and thus may cause undue noise to the site.

Will there be any screening process before an article appears?

No, articles can be posted on The Winnower immediately. This should not be taken as an endorsement that they are correct but rather a signal that they need to be reviewed, much like the “preprint” system. Of course, articles that are reviewed will be easily distinguishable from those that are not. The site is designed to encourage reviews of papers, indeed it is why we are called The Winnower: to separate the bad from the good. To limit possible spamming of the system as well as to sustain The Winnower there will be a charge of 100 dollars per publication.

How will you accommodate the need for fast review in an open peer review system?

The Winnower will strictly utilize open review. This means that all publications will be open to review and all reviews will be open to read. Publication in The Winnower will occur immediately after submission and reviews will be open for variable amounts of time so that authors can make edits based on the reviews. It should be noted that papers will always be open for review so that a paper can accumulate reviews throughout its lifetime. Reviews can be solicited from peers upon submission and reviewed by The Winnower community we hope to build. Based on the system we are building we believe the number of reviews should reflect the number of times the work is read.

At what point can an author say that a paper has been peer-reviewed?

An author can say it has been peer-reviewed as soon as a paper receives a review. But we hesitate to say that this paper has passed peer review because doing this causes some problems. Indeed, as you may be aware all work published now has “passed” peer review but that has done nothing to limit the high rates of irreproducibility. In fact, it may be a cause of it. We want to change the conversation from “passing” peer review to what is the percent confidence scientists have in this paper. To accomplish this we will be implementing semi-structured reviews (i.e. turning reviews into a measurable quantity).

How will reviews be a “measurable quantity”?

Much like reviews performed by the National Institutes of Health, scoring will be implemented for different criteria. Obviously there will be no way to score free form reviews but various questions can be assigned a numerical score. PLOS Labs is working on establishing structured reviews and we have talked with them about this. We think it would be great if there is an industry standard to use for structured reviews, but until then we will implement the best system that we can think of.

What do you mean when you say that peer review can be a cause of irreproducibility?

Peer review, as it stands now, is more or less a pass/fail system. So, if you design 4 experiments to test a hypothesis and only 3 confirm your hypothesis you are likely to leave out the 1 experiment that did not fit your hypothesis in order to pass peer review. The problem is that ultimately you can’t hide from nature, she will reveal her truth one way or another. If there is no system to pass or fail and you wish your paper to stand the test of time you will include all results, even those that contradict your hypothesis. Moreover, editors are literally selecting for simple studies but very often studies are not simple and results are not 100% clear. If you can’t publish your work because it is honest but poses some questions then eventually you will have to mold your work to what an editor wants and not what the data is telling you. There is a significant correlation between impact factor and misconduct and it is my opinion that much of this stems from researchers bending the truth, even if ever so slightly, to get into these career advancing publications.

How can you ensure that each paper is reviewed, or receives enough reviews?

Authors when submitting their research will be encouraged to invite reviewers directly to review their paper. Some may argue this will allow authors to invite their friends and the reviews will be biased. We think the transparency of reviews will limit this from happening. In addition to authors driving reviews to the site each article will display a prominent “write a review” button.

Isn’t bias often hidden? For example, if a submitter invites friends to review, wouldn’t that relationship be invisible to readers, and reviewers could go easy on criticism and exaggerate praise?

This is certainly a possible problem that could arise but it is not anything new with our system. Currently, scientists are allowed to suggest those that should and should not review their papers. Indeed, you heard this blatantly revealed during Open Access Week by a researcher [Note: Josh is referring to the faculty panel during which Dr. Good said some journals prompt authors to suggest reviewers]. Arguably there is an editor to limit any bias but the editor themselves could be biased one way or another. While The Winnower won’t eliminate bias (we are humans, after all) the content of the reviews can be evaluated by all because they will be readily accessible. [Note: reviewers could list competing interests in the template suggested on The Winnower's blog.]

You recently wrote a blog post “Sexism in Science” that cites an article advocating abandoning secrecy. But other research concludes that double-blind review is best, and since even the article you cite mentions other studies in which female representation is better when gender is unknown, wouldn’t double-blind review do a better job of eliminating sexism?

Double blind review is indeed better than single blind review in regards to eliminating sexism in science but this does not mean that it is the best. As far as I am aware there has been no test between open review and double blind review. Any instances of sexism that do occur in open review can be addressed and fixed because they can be exposed unlike closed review. In the Sexism in Science blog I discuss a few cases in which blatant examples of sexism in science occur. In the end many have been remedied because of the open dialogue that occurs on the internet.

Does open peer review mean that all authors and reviewers must reveal their real names?

Yes.

How will you ensure that reviewers are using their real names?

This is not easy, but we think with the system that we are building reviewers will want to use their real names. Reviews will be assigned DOIs and over time we hope to put the reviews on the same level as the research. Indeed, I can imagine researchers that specialize in reviewing and being rewarded for doing so. Full time Winnowers, if you will. But regardless if a reviewer uses their real name or not, the transparency of reviews will discourage personal/inappropriate reviews. It is the serious criticisms/reviews that will be difficult for authors to respond to. I strongly believe that if you’re scared of open peer review then we should be scared of your results.

Do you plan to use altmetrics on the site?

Yes, we will use various metrics on the site, including altmetrics. We want to shift the focus from the journal to the article itself and we think employing various article-level metrics is the best way to do this.

Have you decided on an altmetrics service and will some revenue go toward that?

Yes, we will be using Altmetric and yes some of the revenue will indeed go towards that.

At what point does payment occur, and are you concerned with the possible perception that this is pay-to-publish?

Payment occurs as soon as you post your paper online. I am not overly concerned with the perception that this is pay-to-publish because it is. What makes The Winnower different is the price we charge. Our price is much much lower than what other journals charge and we are clear as to what its use will be: the sustainability and growth of the website. arXiv, a site we are very much modeled after, does not charge anything for their preprint service but I would argue their sustainability based on grants is questionable. We believe that authors should buy into this system and we think that the price we will charge is more than fair. Ultimately, if a critical mass is reached in The Winnower and other revenue sources can be generated then we would love to make publishing free but at this moment it is not possible.

From what funds do you think most scientists will pay the $100 fee?

I believe that most academic scientists will pay the $100 fee with grant money. If they do not currently have grant money the fees could theoretically be paid for by departmental funds or even personal funds.

Is The Winnower a for-profit or non-profit enterprise, and are you registered as such?

The Winnower is a for-profit limited liability company.

Is there a preservation plan for the content in case the journal does not continue?

Yes, we will be using the CLOCKSS program.

Is it possible for an author (or journal staff) to withdraw an article?

Yes, it is possible to withdraw an article and it is also possible for us to retract the article if necessary.

Since many scientists do need to play “tenure games”, wouldn’t the Winnower’s lack of indexing, impact factor, etc. serve as a disincentive to submit or review?

Yes, this is certainly an obstacle The Winnower will have to face, but it is not an obstacle for The Winnower alone; it is an obstacle for the entire scientific community. We think we need to get away from judging scientists based on impact factor or other measures of prestige, and we are not alone. The San Francisco Declaration on Research Assessment (DORA), which has been signed by nearly 10,000 researchers and publishers in less than a year, calls for new ways to evaluate researchers. As the community moves away from journal-level metrics and toward article-level metrics, The Winnower should be well positioned to thrive. Indeed, we will utilize many article-level metrics as well as information from the reviews themselves.

With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won’t this be a significant disincentive to submit?

This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews, and ultimately publish the paper somewhere else after having “passed” peer review. If scientists prefer this system, then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice, neat stories that no one will criticize. This is silly, though, because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed, anonymous system to an open one will be hard for many scientists, but I believe it is the right thing to do if we care about the truth.

Is there anything else you would like to add?

The Winnower will also feature two sections called “The Grain” and “The Chaff.” The Grain will consist of short essays by authors of papers that have received 1,000 citations or have passed a specific Altmetric score. In these essays authors will describe the work and the story behind it (e.g., was it initially rejected, how was it funded, where did the idea come from). These will be very similar to the former Citation Classics series run by Dr. Eugene Garfield. Indeed, Dr. Garfield has expressed much enthusiasm for The Winnower pursuing this. In parallel, we will launch The Chaff, a section that highlights retracted papers. These pieces will be written by the authors of retracted papers in order to find out why the studies failed or what led the authors to fabricate data. We want to present papers published in The Chaff in a non-accusatory manner so that we may learn from them; The Chaff will not be a forum for castigating authors of retracted papers.

Posted in Business Models, Interviews, Open Access Journals, Open Peer Review

OA Week Event: Keynote Address by John Willinsky

John Willinsky, Distinguished Innovator in Residence, gave the keynote address for Open Access Week at Virginia Tech Thursday night in the Graduate Life Center auditorium. “What Is It About the History of Learning that Calls Out for Open Access to Research and Scholarship?” revealed not only historical aspects of scholarship in general but connections between Virginia Tech and his founding of the Public Knowledge Project.

John Willinsky, Open Access Week 2013 keynote at Virginia Tech

When Virginia Tech became the first university to require electronic theses and dissertations (ETDs) in 1997, the software for presenting them online was made freely available. Willinsky used the software to post ETDs online (with their authors’ permission, of course), though he discovered that implementation was not as easy as it could have been. This concept of providing freely available software for the purpose of open dissemination of research inspired his founding of the Public Knowledge Project, which provides open source software for producing open access journals, monographs, and conference proceedings. (Today there are 5,000 journals using PKP’s Open Journal Systems, about half of them in the developing world.)

Not only is there a human right to knowledge; any knowledge claim also depends on being made public. To investigate the nature of knowledge we must address the concept of intellectual property, which is culturally pervasive yet rarely taught or examined in our universities. A university's relationship to intellectual property is different because of its public or non-profit legal status, and its educational purpose gives it special standing in the evaluation of the fair use provisions of copyright, for example. The tax-exempt status of universities recognizes that they produce a different kind of property, particularly in the case of a land-grant institution like Virginia Tech. There is a social contract between society and the university.

Historically, the exchange of real property for another kind of property goes back to the monasteries. Noblemen (and women) gave land (symbolically, a chunk of turf was placed on an altar) so that they, through the monastery, could be closer to God and have a surer path to heaven (and, for certainty's sake, nobles were buried on monastery grounds; here Willinsky noted that Leland Stanford is buried on the grounds of Stanford University). But personal patronage of this kind was not lasting. So today we have democratically elected governments that, on behalf of the public, provide patronage for the advancement of humanity through land grants (the Morrill Act of 1862), tax support, and tax exemption. The knowledge produced in universities is public. The audience was deputized to spread the word.

Thanks to the University Libraries’ Event Capture Service for the video below. [Edit 2/28/14]

Posted in Open Access Week, University Libraries at Virginia Tech

OA Week Event: Faculty and Graduate Student Panels

There were some excellent discussions last night during our Open Access Week faculty and graduate student panels. Our faculty panelists were Dr. Zachary Dresser (Religion and Culture), Dr. Deborah Good (Human Nutrition, Foods, and Exercise), and Dr. Joseph S. Merola (Chemistry).

Faculty Panel (from left, Zach Dresser, Debby Good, Joe Merola)

Both Dr. Good and Dr. Merola have had positive and negative experiences with open access journals. Dr. Good has had positive interactions with PLoS One as an author and peer reviewer, but criticized some hybrid open access journals for asking whether she wanted to take the open option before the paper had been peer reviewed, which could lead to a real or perceived bias due to the fee involved. She has also been asked to become editor of a journal on Beall’s list of predatory journals.

Dr. Merola serves on the editorial board of an open access journal and has had good experiences with open access in general. But he has submitted to another open access journal that would not withdraw a paper or remove him from its editorial board. Dr. Merola also noted that hybrid journals are unlikely to reduce subscription prices as open access take-up increases. Both Dr. Merola and Dr. Good noted that abstracting and indexing can be a problem with open access journals.

Dr. Dresser primarily writes in the field of history, and noted that humanities journals have shown little movement toward open access. The monograph is the gold standard in these fields, and he referred to the AHA controversy that was the subject of Monday’s ETD Panel. Dr. Good asked why ETDs (electronic theses and dissertations) could not be broken into separate articles as happens in the sciences. Dr. Dresser responded that though it happens on occasion, history is a very traditional field that places value on a story or narrative as a whole (thus the focus on monographs). Interestingly, Dr. Dresser is participating in an open textbook effort in American history.

Our graduate student panelists were Stefanie Georgakis (Ph.D. candidate in Public and International Affairs), Jennifer Lawrence (Ph.D. candidate, ASPECT), and Joshua Nicholson (Ph.D. candidate in Biological Sciences). Stefanie and Jennifer are co-editors of the Public Knowledge Journal, an interdisciplinary open access journal for publishing work by graduate students (at any university). Josh Nicholson is co-founder of The Winnower, an open access journal in the sciences that will be starting in 2014.

Graduate Student Panel (from left, Stefanie Georgakis, Jennifer Lawrence, Josh Nicholson)

Stefanie and Jennifer are struggling with the sustainability of PKJ, though not in the way you might think. While the journal is hosted on campus, the challenge is finding editors, peer reviewers, and submissions from a constantly changing population. PKJ is seeking a formal partnership to ensure its sustainability. Stefanie and Jennifer are also hoping to increase readership and provide for the preservation of journal content. They felt that alternative perspectives are well suited to open access, and that enabling open discussion of articles on the journal site can combat the inward-looking culture of some traditional journals. PKJ can help graduate students become familiar with the publishing environment, a need also identified by Dr. Good earlier in the evening.

Josh is critical of traditional publishing, and especially of peer review. The Winnower will serve the sciences as a low-cost ($100 article processing charge) open access journal that will also employ open peer review (he noted that NIH's PubMed Central has just begun post-publication review). Articles under review could be revised in response to reviews for the first three months; assignment of a DOI would then signify publication, though further reviews could still be added.

Attracting reviewers could be a problem, and he is open to using a centralized service such as PubPeer. Reviews would be structured, avoiding a problem Stefanie and others brought up of short, insubstantial reviews. Reviewers themselves would be rated (similar to Amazon), with top reviewers perhaps receiving credit toward article publication. While there has been some concern about the potential for racism or sexism in an open environment, the session attendees seemed to agree that transparency was the best option, particularly in fields with single-blind peer review where bias could occur but not be revealed.

I asked whether The Winnower would try to become a member of OASPA (Open Access Scholarly Publishers Association), but Josh replied that the journal’s model would not fit their guidelines (such as having an editorial board) or PubMed’s listing criteria, echoing the abstracting and indexing concern mentioned by Dr. Good and Dr. Merola earlier.

Thanks again to all of our panelists for a great discussion, and to the event organizers, Kiri Goldbeck DeBose and Purdom Lindblad.

Thanks to the University Libraries’ Event Capture Service for the videos. [Edit 2/28/14]

Faculty Panel:

Posted in Open Access Week, Open Peer Review, University Libraries at Virginia Tech

OA Week Event: A Panel on ETDs and Open Access

Our first event of Open Access Week 2013 provided plenty of interesting discussion yesterday. “ETDs and Open Access” (ETDs are electronic theses and dissertations) was led by Gail McMillan (Director, Center for Digital Research and Scholarship Services, University Libraries), Jordan Hill (ASPECT Ph.D. candidate), and Karen DePauw (Vice President and Dean for Graduate Education).

ETDs and Open Access (from left, Gail McMillan, Jordan Hill, Karen DePauw)

Gail McMillan began the session with her ETD survey data presentation. One survey queried universities about their ETD policies, and the other surveyed publishers on their willingness to consider revised work based on an openly available dissertation or thesis. About 95% of U.S. institutions have embargoed some ETDs. An interesting finding from the publisher survey was that on the whole, publishers in the humanities and social sciences are actually more willing to consider revised work based on an openly available ETD than science publishers.

Jordan Hill gave a brief overview of his own research (an oral history of memorials for mass murders) and the American Historical Association's statement on ETD embargoes. Given the unique (and less-revisable) nature of his research, he is understandably concerned that its open availability could affect the chances of publication as a book, and therefore his job and tenure prospects. Rather than the 6-year embargo in the AHA statement, Jordan suggested the possibility of a 3- or 4-year embargo.

Karen DePauw gave an overview of ETDs at Virginia Tech. The university was the first to require them in 1997, prior to her arrival. A one-year embargo is available to students with renewal possible by request. One exception is a 5-year embargo for those in a Master of Fine Arts program (for creative work such as poetry and stories that would not normally be revised for publication). For those doing classified research, the graduate school requires that the research be publicly defended and some part of the research be made available.

These brief presentations by the panelists were followed by an open discussion. I was interested in knowing whether any details beyond “case by case” publisher considerations were available (it primarily means a judgment of quality) and the distinction between “published” and “unpublished” (it signifies editorial review). The discussion was wide-ranging, but to me its heart is that many early career academics like Jordan support open access but recognize that it is not rewarded in the criteria for hiring, tenure, and promotion. It is not surprising that despite their approval of openness, they must be conservative and pragmatic in their approach, because academic evaluation is not changing as fast as scholarly publishing.

Academic success in the humanities and social sciences is currently dependent on monograph publishing (another criticism of the AHA statement), which is in turn dependent on publishers who evaluate manuscripts based on how many books they think will sell. In particular, publishing an academic monograph depends on university presses, which in turn depend (mostly) on academic libraries to buy books. But libraries are buying fewer books due to the serials crisis (primarily in STEM fields), which puts stress on university presses, which in turn become less receptive to manuscripts from early career academics. In my view, universities and their libraries need to devote more resources to publishing monographs (such as using Open Monograph Press and implementing an external review process) so that academic work can be evaluated on its quality rather than its saleability. In addition, there is no reason why this cannot work on the freemium model: the full text available online, print versions for sale, and royalties going back to the author.

A big thank you is due to Jordan, Karen, and Gail for this thought-provoking session.

Thanks to the University Libraries’ Event Capture Service for the video below [Edit 2/28/14].

Later in the day, Gail and I presented Introduction to Open Access and Copyright. Lots of links inside that presentation for those who want to do some exploration. Interestingly, a graduate student raised similar concerns as Jordan Hill earlier in the day: young scholars are faced with a dilemma between open access and prestige. I pointed out that it’s not always an either-or choice since there are increasingly prestigious open access journals, and self-archiving is a valid option that is often overlooked. But when it comes to getting or keeping a job, it’s hard to fault young academics for publishing behind a paywall when promotion and tenure guidelines reward it.

Posted in Open Access Week, University Libraries at Virginia Tech

OA Week Updates

Open Access Week is here, and we have a full schedule of events. Today at 11:00 a.m. we have an ETD and Open Access Panel (Torgersen 3080) and at 4:00 p.m. Gail McMillan and I will give an Introduction to Open Access and Copyright (Torgersen 3080).

There are some last-minute updates and additions to our schedule. The webinar with Peter Suber originally scheduled for Wednesday at 2 p.m. has been cancelled. We’ve decided to offer another webinar on Wednesday at 11 a.m., Open Access in Engineering (Library Boardroom, 6th floor).

We’ve been a little late getting the Faculty and Grad Student Panels information out. Our faculty panelists will be Dr. Debby Good (HNFE), Dr. Zach Dresser (Religion & Culture), and Dr. Joe Merola (Chemistry). Our graduate student panelists will be Stefanie Georgakis and Jennifer Lawrence (co-editors of the Public Knowledge Journal) and Joshua Nicholson (co-founder of The Winnower). This was a great session last year and I’m sure it will be again. Please join us Wednesday evening at 5:30 in Torgersen Museum (1100).

We also want to spread the word about an event Friday at 2:30 p.m., “Digital Muscle: Alt-Metrics and Open Access” that will be held in the SCALE-UP classroom (Library 1st floor). This event is part of the Digital Discussions in the Humanities and Social Sciences series, sponsored by the Center for Applied Technologies in the Humanities and the College of Liberal Arts and Human Sciences.

Throughout the week I’ll attempt to blog some summaries of the events, hopefully with some photographs.

Posted in Open Access Week, University Libraries at Virginia Tech