Book Review: Understanding Rights Reversion

Open Access

Nicole Cabrera, Jordyn Ostroff, Brianna Schofield, and the Samuelson Law, Technology and Public Policy Clinic. Understanding Rights Reversion: When, Why & How to Regain Copyright and Make Your Book More Available (Berkeley: Authors Alliance, 2015).

Although I am familiar with copyright and licensing agreements for journal articles, I am less familiar with book publishing agreements. Rights reversion for books was a new concept to me, so the first guide published by the Authors Alliance had my attention right away (the group has since published a second guide, Understanding Open Access). This guide is intended for authors who, for whatever reason, may wish to reclaim rights to their books, rights that they transferred to their publishers when they signed a publishing agreement. It’s the result of “extensive interviews with authors, publishers, and literary agents who shared their perspectives on reverting rights, the author-publisher relationship, and keeping books available in today’s publishing environment.” The guide follows an “if-then” organization, referring readers to specific chapters depending on their situation, though I read it straight through (full disclosure: I’m an Authors Alliance supporter).

Early on, the authors define rights reversion and its availability:

“a right of reversion is a contractual provision that permits an author to regain some or all of the rights in her book from her publisher when the conditions specified in the provision are met… in practice, an author may be able to obtain a reversion of rights even if she has not met the conditions stipulated in her contract or does not have a reversion clause.” (p. 6-7)

This guide is intended for authors with publishing agreements already in place; it is not a guide to negotiating contracts (though it may inspire authors to examine the details of rights reversion clauses in new contracts).

The authors note that rights reversion becomes an issue for academic authors especially when their books fall out of print, sales drop, or their publishers stop marketing their books. In such instances, authors may wish to reclaim their rights (so that they can find another publisher to reissue the book, or perhaps deposit the book in an open access online repository), but they find themselves constrained by the terms of their publishing agreements, or they may not understand how to go about reclaiming their rights. With these concerns in mind, the Authors Alliance “created this guide to help authors regain rights from their publishers or otherwise get the permission they need to make their books available in the ways they want.”

An important first step in the process is for authors to learn about different ways that they might increase their books’ availability (for example, electronic, audio, and braille versions as well as translations). Next, the guide helps authors determine if they have transferred to their publishers the rights necessary to make their books available in the ways they want. Older contracts may be ambiguous regarding e-book versions; the guide advises authors on how to negotiate the ambiguity. An additional consideration is that permissions for usage of third-party content may no longer be in effect.

Some examples of reversion clauses are provided in chapter 4, pointing out triggering conditions (such as out of print, sales below a certain threshold, or a term of years), written notice requirements, and timelines. It’s important to understand how the triggering conditions are defined, as well as how to determine whether they have been met, and the authors provide good suggestions for finding this information.

The publisher’s plans for the book should be discovered, and the guide emphasizes reasonable, professional conversations with publishers. The success stories throughout the book are particularly valuable in this respect.

Chapter 6 details how to proceed if a book contract does not include a rights reversion clause:

“Ultimately, whether a publisher decides to revert rights typically depends on the book’s age, sales, revenue, and market size, as well as the publisher’s relationship with the author and the manner in which the author presents his request.” (p. 77)

Before requesting reversion, an author should have a plan in place, review all royalty statements, and discover the publisher’s plans for the book. Being reasonable, flexible, creative, and persistent are the golden rules for negotiations with a publisher. Precedents can be persuasive, so inquire with friends and colleagues who are authors. If electronic access is important, be aware that many publishers are actively digitizing their backfiles. In this respect, an author might draw a publisher’s attention to the increasing evidence that open access versions don’t harm sales, and can sometimes increase them as a result of improved discovery.

Understanding Rights Reversion is itself an open access book (licensed CC BY) available online in PDF. If you would prefer a print copy, it’s available in Newman Library, or you can order one ($20) from the Authors Alliance. For more information, see the Authors Alliance rights reversion portal, which includes rights reversion case studies that occurred after the publication of this guide. The Guide to Crafting a Reversion Letter, a companion to the guide containing sample language and templates, has just been released.

Thanks to Peter Potter, Director of Publishing Strategy at the University Libraries, for his feedback on this blog post (contact him if you have questions about book publishing; he has a wealth of experience). Thanks also to the Open Library for the cover image.

Posted in Book Reviews, Copyright, Open Books | Tagged | Leave a comment

Open Data Week in Review

Last week Virginia Tech’s University Libraries hosted its inaugural Open Data Week with six programs on a variety of open data topics. The new format builds on last year’s Open Data Day, which incorporated a hackathon and roundtable discussions. However, the weekend scheduling and a conflict with spring break this year spurred us to create a new event friendlier to academic schedules, with programs throughout the week. Though we hadn’t heard of anyone having an Open Data Week before, we know that Virginia Tech is supposed to “Invent The Future,” so we did. Here’s a summary of the week’s programs.

Open Data Week logo

In our first program of the week, Data Anonymization: Lessons from a Millennium Challenge Corporation Impact Evaluation, Ralph P. Hall (Urban Affairs and Planning) and Eric Vance (Director of LISA, the Laboratory for Interdisciplinary Statistical Analysis) described their evaluation of a rural water supply project in Mozambique, which involved household surveys (slides, MCC documentation).

Ralph P. Hall

The first lesson learned from their evaluation was that everything is linked to informed consent. The primary takeaway here is the importance of distinguishing between anonymity and confidentiality (see slide 18), the latter of which gives researchers much more flexibility. In addition, there were difficulties with translating the informed consent into Portuguese and local languages. Other lessons include not underestimating the time required to anonymize data, and designing survey instruments to minimize anonymization challenges. Unfortunately, the anonymization challenges resulted in an analysis that is not reproducible and data that cannot be shared with a follow-up evaluation team. Data anonymization is a persistent and complex issue that needs to be discussed more frequently, and will certainly be on the agenda of future Open Data Weeks.
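The re-identification risk behind these challenges can be made concrete with a small sketch. The records, column names, and k=2 threshold below are hypothetical illustrations, not drawn from the Mozambique evaluation; the point is simply that a record whose combination of quasi-identifiers is unique in the data stays identifiable even after names are stripped:

```python
from collections import Counter

# Hypothetical survey records: "name" is a direct identifier; village and
# household size are quasi-identifiers that can re-identify a respondent
# in a small community even after names are removed.
records = [
    {"name": "A", "village": "Nampula-1", "household_size": 4},
    {"name": "B", "village": "Nampula-1", "household_size": 4},
    {"name": "C", "village": "Nampula-1", "household_size": 11},  # unique -> risky
]

# Drop the direct identifier, then check k-anonymity (k=2) on the
# quasi-identifiers: every remaining record should share its
# (village, household_size) combination with at least one other record.
anonymized = [{k: v for k, v in r.items() if k != "name"} for r in records]
combos = Counter((r["village"], r["household_size"]) for r in anonymized)
risky = [r for r in anonymized if combos[(r["village"], r["household_size"])] < 2]
print(len(risky))  # number of records failing the k=2 check
```

Real anonymization involves far more than this (generalizing values, suppressing outliers, auditing linkable external data), which is part of why it takes so much more time than researchers expect.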

Our session on The Freedom of Information Act (FOIA) featured three speakers: Wat Hopkins (Dept. of Communication), Steve Capaldo (University Legal Counsel), and Siddhartha Roy (Flint Water Study team).

Wat Hopkins

Wat Hopkins focused on FOIA in Virginia. The federal FOIA was enacted in 1966, and Virginia was among the first states to implement its own FOIA in the late 1960s. FOIA laws vary greatly from state to state. In Virginia, FOIA applies to records and meetings. Record requests must receive a response within 5 days and do not need to be in writing (though federal FOIA does require it), and there are around 130 exemptions. Requests must come from a Virginia citizen, or a news organization with circulation or broadcast in some part of the state. For more information, see Virginia’s Freedom of Information Advisory Council, and the Virginia Coalition for Open Government’s FOI Citizens Guide. Ultimately, we can’t be responsible citizens without access to government information.

Steve Capaldo said that since Virginia Tech is a state agency, it is governed by Virginia FOIA. However, the university responds to requests from everyone, not just residents or the media, and will do so within 5 days. There are many exemptions, including some involving research (proprietary or classified research, and grant proposals), personnel records, and records involving security, such as building plans. He emphasized the importance of making requests as specific as possible in order to reduce the time and effort required to respond. And although it’s not required, Capaldo suggested that it can be helpful when requestors explain the context of their request, because sometimes information needs can be met in alternative ways.

Sid Roy, a member of the Flint Water Study team and a graduate student in Civil and Environmental Engineering, described the Flint water crisis, which has spanned 18 months and affected 100,000 people. In the process, an EPA employee was silenced, and the fallout has included several resignations. The crisis response involved FOIA requests to the city of Flint, the Michigan Department of Environmental Quality, and the EPA. Interestingly, federal FOIA requires an acknowledgement of the request within 2 weeks, but there is no time limit for responding with the requested information. Roy relayed the FOIA advice of the project’s leader, Dr. Marc Edwards: first, be as specific as possible in your request, and second, make requests to a related agency that is not the primary target. For example, the team made FOIA requests to Flint in order to obtain communications and data from the EPA. Although we ran out of time to discuss FOIA costs, according to the Flint Water Study GoFundMe page, their FOIA expenses came to $3,180 (while you are on that page, consider a donation!). In short, Roy recommended that FOIA be in every scientist’s toolbox.

In Library Data Services: Supporting Data-Enabled Teaching and Research @ VT, Andi Ogier gave an overview of the three services offered: education (data management and fluency), curation (capturing context and ensuring reuse), and consulting (embedding informatics methods into research, and teaching about proprietary formats and the need for open standards). Data Services strives to help researchers’ data achieve impact on the scholarly record, remain useful over time and across disciplines, and be openly shared for the benefit of humanity. The library helps with data management plans required by funders, and can assign DOIs to datasets. The presentation coincided with the beta release of VTechData, a data repository to help Virginia Tech researchers provide access to and preserve their data.

Show Me the (Open) Data! with librarians Ginny Pannabecker and Andi Ogier was a conversational, exploratory session devoted to identifying open data sets. At the session, they introduced a new guide to finding data, which in addition to listing data sources also includes definitions and information on citing data.

Web Scraping session with Ben Schoenfeld

Scraping Websites: How to Automate the Collection of Data from the Web was led by Ben Schoenfeld of Code for New River Valley, a Code for America brigade that meets biweekly to work on civic projects. As the slides explain, some programming skills are needed to effectively obtain and clean up data from websites lacking an API, and the basic steps are outlined. The live demonstration, using local restaurant health inspection data, did a good job of showing what is possible. One of our developers in the library, Keith Gilbertson, wrote a blog post about the session and how he applied the skills he learned to a database of state salaries.
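The basic fetch-parse-clean loop described in the session can be sketched with nothing but the Python standard library. The HTML snippet and restaurant data below are made up for illustration (the session used real local inspection data); a production scraper would typically fetch the live page with urllib or the requests library and parse it with a tool like BeautifulSoup:

```python
from html.parser import HTMLParser

class TableScraper(HTMLParser):
    """Collects the text of <td> cells, grouped one list per <tr> row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())  # clean-up step: trim whitespace

# In a real scraper this string would come from fetching the page over HTTP.
page = """<table>
<tr><td>Joe's Diner</td><td>2016-02-01</td><td>2 violations</td></tr>
<tr><td>Cafe Main</td><td>2016-02-03</td><td>0 violations</td></tr>
</table>"""

scraper = TableScraper()
scraper.feed(page)
print(scraper.rows)
```

Each row comes back as a list of cell strings ready to write to CSV or load into a database, which is essentially the workflow the live demo walked through.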

Intro to APIs: What’s an API and How Can I Use One? was led by Neal Feierabend, also of Code for NRV (his slides begin at slide 17 of the scraping deck). After an explanation of what APIs (application programming interfaces) are and what types are available, the live demo explored a few APIs, beginning with the Google Maps API. Use of this API is free up to a certain number of page loads, and usage beyond that requires a fee, a model used by many popular APIs. This is one reason Craigslist switched from Google Maps to OpenStreetMap, which as an open mapping tool enables download of the underlying data. Generally, good APIs are well documented. Both Neal and Ben attested to the value of using Stack Overflow and searching the web when encountering coding problems. After the session I found out there are also web services for data extraction, like import.io.
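The request-and-parse pattern behind most web APIs looks roughly like this in Python. The endpoint URL follows the Google Maps geocoding API, but the JSON reply below is a trimmed, hypothetical example rather than a captured live response (a real call also needs an API key and a network fetch):

```python
import json
from urllib.parse import urlencode

# Build the request URL; query-string parameters are how most REST APIs
# accept input. A real Google Maps call would also include a "key" parameter.
base = "https://maps.googleapis.com/maps/api/geocode/json"
params = {"address": "Newman Library, Blacksburg, VA"}
url = f"{base}?{urlencode(params)}"

# A trimmed, hypothetical JSON response (real replies carry many more fields).
# In a live script you would fetch it with urllib.request.urlopen(url).read().
raw = '''{"status": "OK",
          "results": [{"geometry": {"location": {"lat": 37.2284, "lng": -80.4234}}}]}'''

reply = json.loads(raw)
if reply["status"] == "OK":
    loc = reply["results"][0]["geometry"]["location"]
    print(loc["lat"], loc["lng"])
```

The appeal over scraping is clear here: the API hands back structured data (JSON) directly, so there is no HTML to pick apart.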

Thanks to all of our presenters and attendees, and please let us know if you have suggestions for Open Data Week programs. We hope to do it again next year!

Posted in Open Data, University Libraries at Virginia Tech, VTechData | Tagged , , , , | Comments Off on Open Data Week in Review

Getting to Know Open: A Grad Student’s Experiences at OpenCon 2015 in Brussels

As part of Open Access Week, the University Libraries and the Graduate School offered a travel scholarship to OpenCon 2015, a conference for early career researchers on open access, open data, and open educational resources. From a pool of many strong essay applications, we chose Sreyoshi Bhaduri, a Ph.D. student in engineering education. Sreyoshi attended the conference in Brussels, Belgium on November 13-16, and sent the report below. Be sure to check out the OpenCon 2015 highlights.

Sreyoshi Bhaduri writes:

Towards the beginning of Fall 2015, the graduate listserv announced an opportunity for a graduate student to travel to Brussels for a conference on Open initiatives. As a doctoral student starting my second year in the Department of Engineering Education, I had been involved with some open education related citizen science endeavors, but was very new to the world of Open. I had always been fascinated by the idea of Open Access and Open Data, which can be understood as “unrestricted access and unrestricted reuse” of data; however, I had never really delved deep into understanding and appreciating Open initiatives. Recognizing this as a perfect opportunity to learn more, I applied, detailing my interest in Open Education and my eagerness to learn more. Application submitted, I promptly forgot about the scholarship, as coursework and exams and deadlines engulfed and occupied all thoughts and activities. On a busy evening in September, as I sat finishing an assignment, I received an email from the University Libraries informing me that I had been selected for the scholarship. I was to represent Virginia Tech at OpenCon 2015 in Brussels, scheduled for November 2015. This was the start of my journey to getting to know the Open community, and I slowly became an advocate for all things Open.

The message from the University Libraries notifying me of the travel scholarship was followed by an email from the OpenCon organizers, who warmly welcomed me to join their community. I was directed to a pre-OpenCon webcast, which helped me understand the basics of Open Data, Open Access, and Open Education. I was also asked to join in on a community call and introduce myself to other attendees. The first thing I realized about the Open community was that it comprises a group of very passionate and dedicated professionals who are determined to build a case for Open initiatives. The next month passed by as I prepped to attend OpenCon, learning more and more about the community, the cause, and the rationale behind Open. I slowly grew to appreciate and understand the Case for Open, and was eagerly looking forward to exchanging ideas at the conference.

Roaming Brussels
So we networked and roamed the streets of Brussels. Here we are posing with the Manneken Pis.

The day of the flight soon arrived, and after 17 hours of traveling from Roanoke to Charlotte and then to Philadelphia, I finally made it to Brussels. On the flight, I made a few friends who were also traveling to OpenCon. A great thing about OpenCon is that the organizers ensured that most attendees had half a day to themselves before the start of the conference, to get acquainted and network. I met a bunch of young professionals and grad students who were doing wonderful work in different disciplines, and learned how some of their work related to Open endeavors.

Days One and Two of the conference comprised various sessions. We had live tweeting (#OpenCon2015) and broadcasts of the sessions, so that the larger Open community, which wasn’t able to join in person, could contribute to the discussions. Day One was also the day we woke up to news of the Paris terror attacks. A poignant remark on the importance of Open Education, from an attendee who was from Paris, was that Open resources fight barriers of access and divide, which in turn helps eradicate disillusionment and thus fights terror. This remark truly spoke to me, and I was inspired by the commitment and grit shown by the attendees, especially those from France.

OpenCon, Hotel Thon
Sneak peek at the sessions at Hotel Thon conference center

On Day Two we had the Unconference sessions. I was totally new to the idea of un-conferencing, but found it a very useful brainstorming and networking format. I recommend that organizers of seminars and educational events include similar sessions at all conferences. During this session, we grouped with people with similar interests and discussed ideas for implementation in “real-world” scenarios. For instance, in the group I un-conferenced with, we discussed the role of Open in academia. We discussed how difficult it is to convince tenure-track faculty at R1 institutions to publish in Open journals, since a large part of their tenure case depends on publication impact. Our conversations then drifted to the subject of impact factors, and how a single number cannot truly capture the essence of a research publication. The second evening ended early with a reception dinner and more networking.

Day Three was the most anticipated day: Advocacy Day. We formed teams of eight and met with Members of the European Parliament to discuss Open initiatives. This was by far the best experience I had at the conference. It was very interesting to meet with and learn from the members, and to discuss the challenges of implementing Open policies.

EU open advocacy
All dressed up for Advocacy Day

Following the meetings, we attended the last OpenCon event, the final reception dinner, at which we had the opportunity to interact with Wikipedia founder Jimmy Wales. Wales spoke to the gathering about the importance of Open Education, and of inspiring early career professionals to take up causes pertaining to Open initiatives after the conference.

The conference was only three days, short but packed with information and activities. I had read up on the conference before attending, and had anticipated meeting talented and passionate individuals, but the clockwork precision of the organization, the energy of the attendees, and the warmth of the community truly inspired me to learn more and contribute more to the cause. I would definitely recommend learning about, participating in, and potentially even attending Open community events to all students and early professionals. I would further urge readers to contribute to their immediate academic communities in Open endeavors. The University Libraries at Virginia Tech, for instance, do a fantastic job of making resources available for graduate students, researchers, and faculty to learn about and publish in Open channels. Over time, I have come to view Open as part of my identity as a graduate student. I believe each of us should commit to making our research publications easily accessible to everyone. I was truly lucky to have been selected for OpenCon 2015: I learnt so many new things, met some wonderful, spirited individuals, am associated with some great work, and hope to continue to advocate for Open in the future.

OpenCon t-shirt
OpenCon 2015 Memories
Posted in Open Access, Open Educational Resources, University Libraries at Virginia Tech | Tagged | Comments Off on Getting to Know Open: A Grad Student’s Experiences at OpenCon 2015 in Brussels

Celebrate Fair Use Week with the University Libraries at VT – Feb. 22-26!

The University Libraries are excited to announce our first annual Fair Use Week celebration! Starting on Monday, February 22nd, Fair Use Week is an event promoted by the Association of Research Libraries to “celebrate the important doctrines of fair use in the United States and fair dealing in Canada and other jurisdictions.”

Fair Use Week logo

In addition to the Fair Use Week events, the University Libraries will have an interactive exhibit on the 2nd floor of Newman Library (near the Alumni Mall entrance) from Monday February 22 through Friday March 4. Please join us for one or more of the events below!

  • Monday, 2/22, 4:30-5pm, Newman Library, 2nd floor
    Fair Use Week Exhibit Opening – enjoy some light refreshments while exploring the interactive exhibit.
  • Tuesday, 2/23, 9:30-10:45am, online*
    Workshop: “Is it a Fair Use? A Hands-On Discussion”
    NLI Credit available.
    *Contact Ginny Pannabecker at vpannabe@vt.edu for online meeting information.
  • Tuesday, 2/23, 11:00am-Noon, Newman Library, Multipurpose Room (first floor)
    Workshop: “The New International Movement to Standardize Rights Statements – And How We’re Participating”
    NLI Credit available.
  • Wednesday, 2/24, 10:00-11:00am, Newman Library, Multipurpose Room (first floor)
    Discussion: “Behind the Scenes of the Fair Use Week Exhibit: How We Made Our Copyright Decisions”
    NLI Credit available.
  • Wednesday, 2/24, 1:25-2:15pm, Newman Library, Multipurpose Room (first floor)
    Workshop: “Is it a Fair Use? A Hands-On Discussion”
    NLI Credit available.

So, what is “fair use” and why do we think it’s important enough to celebrate it for a whole week? 

Fair use is an exemption in U.S. copyright law (17 U.S.C. § 107), evaluated through a four-factor analysis, which allows anyone to:

  • Copy
  • Re-distribute
  • Perform
  • Electronically transmit
  • Publicly display
  • Create new versions of others’ copyrighted works

…without permission.*

*When the potential use is deemed to be “fair” rather than “infringing.”  Only a court can decide what is truly “fair use.” However, U.S. law allows anyone to conduct a well-informed fair use analysis in good faith to determine if their proposed use of copyrighted material is more fair or more infringing.

For an example of Fair Use in action and an entertaining video explaining some foundational U.S. Copyright and Fair Use information, take a look at Professor Eric Faden’s “A Fair(y) Use Tale.” The version embedded below was re-uploaded to YouTube (in compliance with the video’s CC BY-NC-SA 3.0 license) in order to add transcribed subtitles and captioning.

Thank you for taking a moment to find out more about Fair Use, and we hope to see you at one or more of the University Libraries events!

Thanks to the University Libraries’ 2016 Fair Use Week team: Virginia (Ginny) Pannabecker, Anita Walz, Scott Fralin, Robert Sebek, and Keith Gilbertson!

Posted in Fair Use, University Libraries at Virginia Tech | Tagged , | Comments Off on Celebrate Fair Use Week with the University Libraries at VT – Feb. 22-26!

A Recap of Open Access Week 2015 at Virginia Tech

Virginia Tech’s fourth Open Access Week took place October 19-23 with five events, featuring the annual faculty/graduate student panel discussion and a keynote address by Victoria Stodden.

As always, the panel discussion was one of the most interesting events of the week. Sascha Engel, PhD candidate in ASPECT and editor of the graduate journal SPECTRA, spoke about the benefits of moving to library hosting for the journal. Use of the open source OJS software helped automate communication with authors, and the journal was able to retain its domain name. The PDF remains important in the humanities, where page numbers are needed for citing. As a graduate journal, SPECTRA allows authors to retain copyright so that articles can be further developed and published elsewhere.

Alison Burke, a PhD candidate in Biomedical Sciences, spoke about the difficulty of publishing in fee-based open access journals while in a funding gap between grants. The library’s open access fund bridged that gap and helped her publish in PLOS ONE. She noted that open access articles garner more views and are easier to find.

Scott King, Professor in the Department of Geosciences, is an executive editor at GeoResJ, a broad, multidisciplinary open access journal, but noted that in his specialty, deep earth research, open access is not very influential because most researchers are at institutions with subscriptions. In contrast, publishing open access is crucial to Jeremy Ernst, associate professor of Integrative STEM Education, because a large part of his audience is public educators who would not otherwise have access to his research. He noted much higher citation counts in open access journals. Ernst was the first to take advantage of the open access fund when it began.

Carola Haas, Professor in the Department of Fish and Wildlife Conservation, has used the open access fund for publication of a hybrid open access article, and said that open access is important for her audience, which includes land managers, independent contractors, and conservationists in developing countries, many of whom lack access to expensive journals. Titilola Obilade, former adjunct faculty in the School of Education, has used the open access fund multiple times to ensure that all have access to her research.

Thanks to the University Libraries’ Event Capture Service for the video below.

New to Open Access Week, “Data and Digitization in the Liberal Arts and Human Sciences” was organized by Tom Ewing, Associate Dean for Graduate Studies, Research, and Diversity in the College of Liberal Arts and Human Sciences and a professor in the Department of History. The session featured panelists from Advanced Research Computing (ARC) and the University Libraries. Terry Herdman, Nicholas Polys, and Vijay Agarwala spoke about ARC’s services for researchers, such as consulting, training, support, and collaboration, and highlighted the visualization lab in Torgersen Hall, the Visionarium. From the Libraries, Nathan Hall introduced the digitization services available, and Amanda French spoke about the library’s interest in facilitating interdisciplinary research, and perhaps providing tools for learning text and data mining (TDM).

Mid-week, two NLI sessions were offered: one on our open access fund (apply here), led by Gail McMillan, and one on trends in scholarly publishing, a discussion I led. Both are offered regularly, so check the NLI schedule.

Dr. Victoria Stodden

The highlight of the week was the keynote address by Dr. Victoria Stodden, an associate professor in the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign. “Scholarly Communication in the Era of Big Data and Big Computation” (slides) focused on what reproducibility means for computation, and also addressed scientific norms and access. She proposed that reproducibility has three facets: empirical, computational, and statistical. While we know that error is ubiquitous in science, computation is new enough that standards are not well established. Computation itself is a research object; an accompanying journal article is simply advertising for it. Interestingly, Stodden highlighted the Mertonian norms of science, just as Brian Nosek did in last year’s keynote address. But while Nosek contrasted Mertonian norms with academic incentives, Stodden put them in an intellectual property framework. In this context, open licenses are aligned with scientific norms, whereas intellectual property protections (e.g., copyright) are not. While a number of platforms have been developed for dissemination and reproducibility of computation, these have been independent efforts, and would achieve greater impact with a coordinated response. Ultimately, it is access that is needed most:

Conclusion: the primary unifying concept in formulating an appropriate norm-based response to changes in technology is access. At present, access to “items” underlying computational results is limited.

Many thanks to Dr. Stodden and all those who came to the keynote. Thanks also to the keynote sponsors, which in addition to the University Libraries include Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, LISA, and the Virginia Bioinformatics Institute.

Thanks to the University Libraries’ Event Capture Service for the video below.

Posted in Open Access Week, University Libraries at Virginia Tech | Tagged | Comments Off on A Recap of Open Access Week 2015 at Virginia Tech

Grad Students: Travel to Brussels to Learn About Openness!

Graduate students at Virginia Tech are encouraged to apply for a travel scholarship to OpenCon 2015, the student and early career researcher conference on Open Access, Open Education, and Open Data to be held on November 14-16, 2015 in Brussels, Belgium.

OpenCon 2015

One scholarship will be awarded to a Virginia Tech graduate student, which will cover travel expenses, lodging, and some meals. Applicants must use the following URL to apply by Monday, September 21:

http://opencon2015.org/virginia_tech

To find out more about the conference, see the Participant FAQ and the conference program. This international conference offers an unparalleled opportunity to learn about the growing culture of openness in academia and how to become a participant in it. The travel scholarship is sponsored by the Graduate School and the University Libraries. For questions, please contact Philip Young, pyoung1@vt.edu (please note that the general application process for the conference closed earlier this summer, and related details in the participant FAQ will not apply).

Last year two graduate students received scholarships to the conference (which was in Washington, D.C.), and you can read about their experiences.

This year’s winner will be selected by the Graduate School and the University Libraries based on answers to the application questions, and announced on September 24. Please share this opportunity with all VT graduate students, and best of luck to the applicants!

Posted in Open Access, Open Data, Open Educational Resources, University Libraries at Virginia Tech, Virginia Tech | Tagged , | Comments Off on Grad Students: Travel to Brussels to Learn About Openness!

Book Review: MOOCs

MOOCs

Jonathan Haber, MOOCs. The MIT Press Essential Knowledge Series. Cambridge, Mass.: The MIT Press, 2014.

I read Jonathan Haber’s book MOOCs a few months ago, and am glad to finally offer some thoughts. Despite a remarkable cooling of interest in MOOCs, there are still plenty of reasons to consider what role they might play in higher education. Haber, perhaps best known for his year-long MOOC experiment to obtain the equivalent of a bachelor’s degree, here offers a readable and balanced account of the MOOC environment.

Haber begins by outlining the history of MOOCs (massive open online courses), pointing out that “open” was an earlier driver than “massive” with MIT’s OpenCourseWare initiative for class materials (begun in 2002), though many of those courses lack video lectures. The first real MOOC came along in 2008, “Connectivism and Connective Knowledge,” taught by Stephen Downes and George Siemens. In the connectivist model, class size became an asset, not a liability (p. 39):

For the bigger the connectivist “class,” the greater the potential for the quantity and variety of nodal connections that define success for networked learning.

However, as MOOCs evolved, most were not designed around a specific pedagogical method, and Haber notes how different the learning experience is between connectivist and non-connectivist MOOCs. A tool for student connection common to both models is the discussion board, though discussion boards can overwhelm students, resulting in low participation rates. Scheduled and on-demand MOOCs foster different types of discussion, with the latter focusing more on test and assignment support than on general course topics. Haber provides an interesting analysis of other ways that scheduled and on-demand MOOCs differ (p. 78-79).

In his chapter “Issues and Controversies” (p. 89-131), Haber first focuses on the low completion rates of MOOCs (a problem shared by a MOOC I wrote about last year). He argues that MOOC sign-ups are driven mostly by curiosity rather than commitment. Still, though completion rates may be low, the raw completion numbers are very large, and Haber quotes a professor who remarks that the number of students completing his MOOC equals all of the students he had taught in his career up to that point. Problems such as course demand level, cheating, plagiarism, and student identity are being addressed in a variety of ways, such as Coursera’s signature track identity verification.

On the positive side, there’s evidence that the shorter lectures used in most MOOCs are more effective, and that the ability to change speed, pause, and repeat lectures has a pedagogical impact. The interaction of older and younger learners common in MOOCs is rare in traditional education. The modularity of MOOCs is increasingly being utilized, and MOOCs have been successful in blended learning, rather than as a substitute for the classroom. Indeed, edX material is used at MIT to flip courses, and there’s extensive discussion about how MOOCs can fit into the flipped classroom model (p. 156-161). On the whole, MOOCs have raised the bar for online education in terms of production value, creativity, and risk-taking.

In these days of corporate open-washing, anything claiming to be open bears further examination. Haber notes that “open” tends to be interpreted by the public as “free,” despite the need in some MOOCs to purchase materials for students to benefit fully from the course. Haber offers solid discussions of intellectual property (beginning on p. 118) and openness (beginning p. 122). A central problem has been that academic libraries license content for their campuses that cannot legally be shared with large numbers of unaffiliated students. Additionally, educational use is not automatically fair use (a common misunderstanding). Options for using external material include a full fair use analysis, obtaining permission (often at a cost), linking to content, and/or using openly licensed resources. And of course, most MOOCs are not openly licensed themselves. However, edX seems to be upholding open values and thriving, according to a recent article.

Haber also covers the difficulties involved in getting credit for MOOC courses from institutions of higher learning through programs like high school Advanced Placement (AP), the College Board’s College Level Examination Program (CLEP), and the American Council on Education’s (ACE) CREDIT program, which evaluates courses for college-level equivalency. Publicity and incentives for these one-off alternative credits have not been sufficient, which may explain why there were no sign-ups for an ACE transcript for a MOOC or for a Udacity-Colorado State course in computer programming (p. 106). Yet the future of MOOCs for younger learners, Haber says, may lie alongside these existing programs.

This book introduced me to StraighterLine and the SPOC (small private online course; for example, CopyrightX, which I hope to take), but the MOOC environment is apparently so fast-moving that some interesting initiatives are now defunct, such as MOOCs Forum, MOOC Campus, and mooc.org. Haber perhaps overstates the altruistic purposes of MOOCs (p. 187), and his statements about the cost challenge MOOCs pose to residential education may be premature.

MOOCs is part of the MIT Press Essential Knowledge series, which notably includes Peter Suber’s Open Access and John Palfrey’s Intellectual Property Strategy (which I reviewed previously). In addition to an index and notes, it includes a glossary, additional resources, and a list of MOOC providers. It’s an enjoyable and informative read, though not one inspiring certainty, perhaps best communicated by one last heavily-qualified quote (p. 194):

But if MOOCs continue to embrace, or even expand on, the culture of experimentation and innovation that has already set them apart from nearly all other adventures in technology-based learning, if they continue to offer high-quality free teaching to the world while also serving as the laboratory where educational innovation thrives, then whatever MOOCs are today or whatever they evolve into, they are likely to leave an important mark on whatever ends up being called higher education in the future.


A New Issue of Virginia Libraries on “Exploring Openness”

Virginia Libraries (cover design by Brian Craig)

Virginia Libraries, the journal of the Virginia Library Association, has recently undergone some significant changes. Formerly a non-peer-reviewed quarterly, it’s now an annual peer-reviewed volume, with a first issue on the theme “Exploring Openness” (full disclosure: I was a peer reviewer for two articles submitted for this issue, and fellow blogger Anita Walz authored an article on OER). A broad range of open-related topics is addressed, but for the sake of brevity I’d like to highlight two standout articles (please do check out the full table of contents).

The hype over MOOCs may be past, but I think dismissing them completely is premature. In Just How Open? Evaluating the “Openness” of Course Materials in Massive Open Online Courses (PDF), Gene R. Springs (The Ohio State University) examines the status of texts assigned in 95 courses offered by Coursera or edX. Of the 49 courses listing a textbook, 20 used a freely available one; of the 44 courses listing or linking to non-textbook readings, 31 linked to or embedded only freely available resources. It’s great to have this quantitative data on MOOC openness. There’s much more data in the article, which is a welcome contribution to the MOOC literature.

The second standout article in this issue is Contextualizing Copyright: Fostering Students’ Understanding of Their Rights and Responsibilities as Content Creators (PDF) by Molly Keener (Wake Forest University). It’s important that students know about the bundle of rights known as copyright both as consumers and creators in the knowledge ecosystem. Keener’s information literacy instruction employs scenarios relevant to students (included as an appendix) and incorporates copyright-related aspects of popular culture. Clearly such instruction is needed:

Most students are unaware that they own copyrights, or that simply because a photograph is free to access online does not mean that it is free to be reused.

Every university should have this kind of instruction to help students understand the environment in which information is created and used. Keener’s article is highly recommended.

While there’s almost everything to like about the new direction Virginia Libraries is taking, one oversight by the editorial board should be pointed out. At the bottom of the table of contents (PDF) the journal states the following:

The Virginia Library Association firmly espouses open access principles and believes that authors should retain full copyrights of their work. The agreement between Virginia Libraries and the author is license to publish. The author retains copyright and thus is free to post the article on an institutional or personal web page subsequent to publication in Virginia Libraries. All material in the journal may be photocopied for the noncommercial purpose of educational advancement.

It’s great that authors can retain copyright, but a journal cannot “firmly espouse open access principles” without openly licensing its content. Peter Suber succinctly defined OA as “digital, online, free of charge, and free of most copyright and licensing restrictions.” This means content should not just be available but also openly licensed (many get the first part but not the second). Leading OA journals have published thousands of articles under a Creative Commons Attribution (CC BY) license, which grants re-use permissions in advance. It’s also the license for this blog. Librarians should be more aware than most of the copyright restrictions on sharing research, and Anita’s article in this issue gives a full list of Creative Commons licenses. Hopefully the editorial board will make Virginia Libraries fully OA by licensing future issues CC BY.

The co-editors of this special issue, Candice Benjes-Small and Rebecca K. Miller, deserve praise for its quality and for helping the journal begin a new direction. Virginia Libraries is now seeking a volunteer to be the new editor (see the position description). Interested applicants should send a cover letter and résumé to Suzy Szasz Palmer at palmerss@longwood.edu by July 24, 2015.


A Response to Jeffrey Beall’s Critique of Open Access

I recently became a member of the American Association of University Professors (AAUP), and today was dismayed to see Jeffrey Beall’s article What the Open-Access Movement Doesn’t Want You to Know in the latest issue of its journal, Academe. (I joined because the AAUP has been helpful in advising Virginia Tech’s Faculty Senate, of which I am a member, on increasing its role in university governance.)

For those who may not know, Jeffrey Beall is a librarian at the University of Colorado-Denver, and through his blog Scholarly Open Access exposes academic “predatory publishers” (pay-to-publish scams that perform little to no peer review) and other sketchy doings in academic publishing. While this is a tremendous service to the scholarly community, he has unfairly blamed these problems on open access as a whole. It became apparent just how off the rails Beall had gone when he published The Open-Access Movement is Not Really about Open Access in the journal TripleC (in the non-peer reviewed section; also see Michael Eisen’s response, Beall’s Litter). If you enjoy right-wing nuttiness (yes, George Soros is involved) you really should read it.

Beall’s critiques of open access are not always as factual as they could be, so as an open access advocate I am concerned when his polemics are presented to an academic audience that may not know all the facts. So below is my response to selections from his article:

The open-access movement has been around for more than a dozen years

Actually, it has been around longer than that: Stevan Harnad made his “subversive proposal” in 1994 on a Virginia Tech email list.

The open-access movement is a coalition that aims to bring down the traditional scholarly publishing industry and replace it with voluntarism and server space subsidized by academic libraries and other nonprofits. It is concerned more with the destruction of existing institutions than with the construction of new and better ones.

This is quite an evidence-free paragraph. Where is the coalition, and where is the goal stated of bringing down the traditional scholarly publishing industry? Who has said all we need is voluntarism and server space? No one I know of.

The movement uses argumentum ad populum, stating only the advantages of providing free access to research and failing to point out the drawbacks (predatory publishers, fees charged to authors, and low-quality articles).

There is frequent discussion of these problems. Credit Beall for bringing attention to predatory publishers, but the problem is less serious than he makes it out to be (and one seemingly devoid of data; Beall would strengthen his claims if he could document the number of authors victimized and/or the amount of money lost). A majority of open access journals do not charge authors, and those that do usually offer waivers. There are also plenty of high-quality open access journals, like PLOS Biology, generally considered tops in its field. And we know that “low-quality articles” could never appear in a subscription journal.

It’s hard to argue against “free”—and free access is the chief selling point of open-access publishing…

Actually open access is not just about “free.” OA means free as in cost (to the reader) but also free as in freedom (open licensing). As a librarian, Beall should know the barriers that copyright presents in the use of scholarship by libraries and researchers. OA advocates know that scholarly publishing does cost something, and are actively working on alternatives to the broken subscription model.

In the so-called gold open-access model, authors are charged a fee, called the “article processing charge,” upon acceptance of a manuscript.

This is simply wrong. Gold open access describes OA journals that publish peer-reviewed articles. A majority of them do not have an article processing charge (APC). APCs are just one model of providing open access. It’s true that predatory publishing is based on this model as a money-making scam. This is why authors need to know something about the journals where they submit articles.

Some publishers and journals do not charge fees to researchers and still make their content freely accessible and free to read. These publishers practice platinum open access, which is free to the authors and free to the readers.

“Platinum” open access must be Beall’s invention, because no one else uses this term. Open access journals (“gold” open access) include both journals with fees and those without.

A third variety of open-access publishing, often labeled as green open access, is based in academic libraries…

Lots of libraries do have repositories, but it’s not accurate to say that all (or even most) archiving is based there. There are plenty of disciplinary repositories, and for-profit ones like Academia.edu.

…the green open-access movement is seeking to convert these repositories into scholarly publishing operations. The long-term goal of green open access is to accustom authors to uploading postprints to repositories in the hope that one day authors will skip scholarly publishers altogether.

Maybe some think this, but I wouldn’t call it widespread. Most scholarly publishing in libraries (that is, journal or monograph publishing) is a separate operation from article archiving. And no one thinks peer review can be skipped, which seems to be implied here.

Despite sometimes onerous mandates, however, many authors are reluctant to submit their postprints to repositories.

This is unfortunately true, but Beall doesn’t mention that many of the “onerous mandates” were passed unanimously by the same faculty members who must observe them, because they became convinced of the benefits of open access to research.

Moreover, the green open-access model mostly eliminates all the value added that scholarly publishers provide, such as copyediting and long-term digital preservation.

Most OA advocates agree that scholarly publishers provide value; after all, some of them publish OA journals. But the choice of examples is odd. I’m one of many authors who have had the experience of copyediting actually introducing errors into a carefully composed article. And in some cases repositories are a better bet for long-term digital preservation than journals, which can stop publishing without a preservation plan. In short, the value added claimed by many publishers is coming under question, and rightfully so in my view.

The low quality of the work often published under the gold and green open-access models provides startling evidence of the value of high-quality scholarly publishing.

This makes little sense. An archived (“green”) article can be of the highest quality and may have been published in one of the prestigious journals Beall venerates. And again, there are many well regarded open access journals.

When authors become the customers in scholarly communication, those with the least funds are effectively prevented from participating; there is a bias against the underfunded.

Many OA advocates have identified the same problem with APCs, especially for authors from the developing world. But many of these journals have waivers, most OA journals don’t have charges, and new models are being developed that subsidize journals without charge to either author or reader. It’s not accurate to portray fee-based publishing as the only open access model.

Subscription journals have never discriminated on the basis of an author’s ability to pay an article-processing charge.

No, they just discriminate against libraries.

Gold open access devalues the role of the consumer in scholarly research… Open access is making readers secondary players in the scholarly communication process.

This is just laughable. Yes, we should feel sorry for all those readers who can freely access all the peer-reviewed research that their tax dollars likely paid for.

In the next section of his article, “Questioning Peer Review and Impact Factors,” Beall mostly critiques the doings of predatory publishers, which no one really disputes. But in criticizing predatory publishers (again unfairly extending his critique to all open access publishing) he gives subscription publishing a free pass. If you don’t think bad information has appeared in prestigious peer-reviewed subscription journals, try searching “autism and immunization” or “arsenic life.” Beall’s reverence for the journal impact factor isn’t supported by any facts (see my post Removing the Journal Impact Factor from Faculty Evaluation). So predatory publishers using fake journal impact factors shouldn’t be a concern; it’s a bogus metric to start with. Moreover, Beall fails to acknowledge that open peer review, in whatever form, would largely solve the problem of predatory publishing. If a journal claims to do peer review, then let’s see it!

If you’re an author from a Western country, the novelty and significance of your research findings are secondary to your ability to pay an article-processing charge and get your article in print.

Again: waivers are available, and the majority of OA journals don’t have fees. It’s interesting that Beall uses words like “novelty” and “significance” here, as if unaware of real problems in peer review caused by these assessments (which are not attributable to predatory publishing).

Open-access advocates like to invoke the supposed lack of access to research in underdeveloped countries. But these same advocates fail to mention that numerous programs exist that provide free access to research, such as Research4Life and the World Health Organization’s Health Internetwork Access to Research Initiative. Open access actually silences researchers in developing and middle-income countries, who often cannot afford the author fees required to publish in gold open-access journals.

Once again, OA is not all about fees. It’s also odd that so many people from the developing world are huge open access advocates. Beall fails to mention that the large publishing companies have a lot of control over which countries get access and which do not. If they decide that India, for example, can afford to pay, then they don’t provide access. Wider open access would make these programs unnecessary. The main thing silencing researchers in developing countries is lack of access to research, which inhibits their own research efforts.

…the top open-access journals will be the ones that are able to command the highest article-processing charges from authors. The more prestigious the journal, the more you’ll have to pay.

There may be some truth to this, and it’s a concern I share. However, APCs may be subject to price competition (an odd omission from someone who is so market-oriented). Beall has identified the biggest problem to my mind, which is journal prestige. Prestige means that mostly we are paying for lots of articles to be rejected, which are then published elsewhere. Academia needs to determine whether continuing to do this is wise, and whether other measures of research quality or impact might be available.

The era of merit in scholarly publishing is ending; the era of money has begun.

Another laugher. Beall must be unaware of his own library’s collections budget, or of the 30-40% annual profit margins of Elsevier, Wiley, Informa, etc. If he is concerned about merit (and especially predatory publishing), he ought to be advocating for some form of open peer review.

Most open-access journals compel authors to sign away intellectual property rights upon publication, requiring that their content be released under the terms of a very loose Creative Commons license.

As opposed to subscription journals, most of which compel authors to transfer their copyright? Many open access journals allow authors to retain copyright.

Under this license, others can republish your work—even for profit—without asking for permission. They can create translations and adaptations, and they can reprint your work wherever they want, including in places that might offend you.

Wouldn’t it be awful to have your work translated or reprinted? I mean, no one actually wants to disseminate their work, do they? This is mostly scare-mongering about things that might happen 0.001% of the time. And because of the ever-so-slight chance that someone might make money from your work, or that it might be posted to a site you don’t agree with, we shouldn’t share research? This blog is licensed CC BY, and I don’t care if either of those things happens. What’s not logical is for these largely unfounded fears to lead us back to paywalls and all-rights-reserved copyright.

Scholarly open-access publishing has made many tens of thousands of scholarly articles freely available, but more information is not necessarily better information.

I don’t think anyone has ever claimed this. Even if there were only subscription journals, there would be new journals and more articles published.

Predatory journals threaten to bring down the whole cumulative system of scholarly communication…

I think there may be some exaggeration here.

In the long term, the open-access movement will be seen as an ephemeral social cause that tried and failed to topple an industry.

Open access is not looking very ephemeral at the moment. The “industry” seems to be trying to find ways to accommodate it so as not to go out of business. Open access advocates are not necessarily against the “industry,” just the broken subscription/paywall model it uses. Indeed, traditional publishers like Elsevier and Wiley are profiting handsomely from hybrid open access, and are starting OA journals or converting existing ones to open access.

Be wary of predatory publishers…

Finally, something we can agree on!


Book Review: Issues in Open Research Data


Moore, Samuel A. (ed.), Issues in Open Research Data (London: Ubiquity Press, 2014).

Bringing together contributed chapters on a wide variety of topics, Issues in Open Research Data is a highly informative volume of great current interest. It’s also an open access book, available to read or download online and released under a CC BY license. Three of the nine chapters have been previously published, but benefit from inclusion here. In the interest of full disclosure, I’m listed as a book supporter (through unglue.it) in the initial pages.

In his Editor’s Introduction, Samuel A. Moore introduces the Panton Principles for data sharing, inspired by the idea that “sharing data is simply better for science.” Moore believes each principle builds on the previous one:

  1. When publishing data, make an explicit and robust statement of your wishes.
  2. Use a recognized waiver or license that is appropriate for data.
  3. If you want your data to be effectively used and added to by others, it should be open as defined by the Open Knowledge/Data Definition; in particular, non-commercial and other restrictive clauses should not be used.
  4. Explicit dedication of data underlying published science into the public domain via PDDL or CC0 is strongly recommended and ensures compliance with both the Science Commons Protocol for Implementing Open Access Data and the Open Knowledge/Data Definition.

In “Open Content Mining” Peter Murray-Rust, Jennifer C. Molloy and Diane Cabell make a number of important points regarding text and data mining (TDM). Both publisher restrictions and law (recently liberalized in the UK) can block TDM. And publisher contracts with libraries, often made under non-disclosure agreements, can override copyright and database rights. This chapter also includes a useful table of the TDM restrictions of major journal publishers. (Those interested in exploring further may want to check out ContentMine.)

“Data sharing in a humanitarian organization: the experience of Médecins Sans Frontières” by Unni Karunakara covers the development of MSF’s data sharing policy, adopted in 2012 (its research repository was established in 2008). MSF’s overriding imperative was to ensure that patients were not harmed due to political or ethnic strife.

Sarah Callaghan makes a number of interesting points in her chapter “Open Data in the Earth and Climate Sciences.” Because much of earth science data is observational, it is not reproducible. “Climategate,” the exposure of researcher emails in 2009, has helped drive the field toward openness. However, there remain several barriers. The highly competitive research environment causes researchers to hoard data, though funder policies on open data are changing this. Where data has commercial value, non-disclosure agreements can come into play. Callaghan notes the paradox that putting restrictions on collaborative spaces makes sharing more likely (the Open Science Framework is a good example). She also shares a case in which an article based on open data was published three years before the researchers who produced the data published their own analysis. Funders are likely to increasingly monitor data use and require acknowledgement of data sources used in publications. Data papers (short articles describing a dataset and the details of collection, processing, and software) may encourage open data. Researchers are more likely to deposit data if given credit through a data journal. However, data journals need to certify data hosts and provide guidance on how to peer review a dataset.

In “Open Minded Psychology” Wouter van den Bos, Mirjam A. Jenny, and Dirk U. Wulff share a discouraging statistic: 73% of corresponding authors failed to share data from published papers on request. A significant barrier is that providing data means substantial work. Usability can be enhanced by avoiding proprietary software and following standards for structuring data sets (an example of the latter is OpenfMRI). The authors discuss privacy issues as well, which in the case of fMRI includes a 3D image of the participant’s face. The value of open data is that data sets can be combined, used to address new questions, analyzed with novel statistical methods, or used as an independent replication data set. The authors conclude:

Open science is simply more efficient science; it will speed up discovery and our understanding of the world.

Ross Mounce’s chapter “Open Data and Palaeontology” is interesting for its examination of specific data portals such as the Paleobiology Database, focusing in particular on the licensing of each. He advocates open licenses such as CC0, and argues against author choice in licensing, pointing out that it creates complexity and results in data sharing compatibility problems. And even though articles with data are cited more often, Mounce points out that traditionally only the main paper is indexed, not the supplementary files where data usually reside.

Probably the most thought-provoking yet least data-focused chapter is “The Need to Humanize Open Science” by Eric Kansa of Open Context, an open data publishing venue for archaeology and related fields. Starting with open data but mostly about the interaction of neoliberal policies and openness, the chapter deserves a more extensive analysis than I can give here, but those interested in the context against which openness struggles may want to read his blog post on the subject, in addition to this chapter.

Other chapters cover the role of open data in health care, drug discovery, and economics. Common themes include:

  • encouraging the adoption of open data practices and the need for incentives
  • the importance of licensing data as openly as possible
  • the challenges of anonymization of personal data
  • an emphasis on the usability of open data

As someone without a strong background in data (open or not), I learned a great deal from this book, and highly recommend it as an introduction to a range of open data issues.
