University Libraries Host Open Education Week 2015

Open Education Week is an annual event to raise awareness of free and open educational resources, and 2015 marked the second celebration of Open Education Week at Virginia Tech’s University Libraries. How are open educational resources (OER) defined?

OER are teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge.

Open Education Week at VT

Virginia Tech students, staff, faculty, and visitors from other institutions gathered for six events the week of February 23-27 to explore, discuss, and learn about open education. Our events focused on raising awareness of Open Educational Resources as:

  • one way to address student debt and educational affordability issues.
  • a way to address copyright limitations in education.
  • a set of tools to enhance faculty creativity, flexibility, and innovation in teaching.

The Student Government Association Academic Affairs Committee hosted two events: one to informally explore students’ textbook-buying experiences, and a second to discuss their own experiences and reflect on their findings.

Students articulated a variety of observations about their own and others’ experiences:

[Students] are not buying books anymore. Books are important for our education. If you’re not buying books, you’re not learning very well.

I took a class where I had to buy eight books. [I] found the cheapest books [I] could find and the total was still $250. . . I don’t think professors know that $250 is a lot of money for [students].

I don’t want to have to work twenty hours a week to afford my textbook.

I had to spend $90 to rent an eBook in order to get a required software code to submit homework. I did not even get to keep the book.

People are going to spend the money on the textbook the professor selected.

I feel helpless. For four classes you have access codes. This leaves students without an option. You need to buy an access code. I don’t know how to get around this. It’s $300, buying access to weekly homework assignments.

Open Education Week Student Panel

Student panelists reflected on their and others’ understanding and responses to textbook buying problems:

I just want faculty to be aware that there is a problem. I have professors who put everything online. Then, I have professors who require purchase of a textbook and homework software.

I’m not privy to faculty pressures; is the content in new editions of textbooks substantially different? [Students] need to ask faculty in a way that is respectful to consider that in their decision-making.

Students mentioned a variety of faculty practices that have been helpful:

  • Recommending, but not requiring a book
  • Putting textbooks and readings on Reserve in the library (“I wish I would have known that these were available as a Freshman.”)
  • Linking content via Scholar (our LMS)
  • One professor mapped changes between different editions so that students could save money

(Slide from David Ernst CC BY; data from Florida Student Textbook Survey)

Students had various reflections on open textbooks/open educational resources:

I would encourage professors to give open educational resources a try, especially if they are teaching the same class year in and out. If they just tried it for a semester especially with these basic classes…the fundamentals don’t change.

Open textbooks mitigate cost. They also change the way we look at textbooks. How have consumers driven product development? Students need to understand their agency as consumers.

I want to tell professors that they would get more recognition if they reduce cost and increase access to [their authored] resources.

As Open Education Librarian, I led the workshop “Get Creative (and stay legal),” introducing educators and authors to OER and open licensing using Creative Commons (presentation slides; see the presentation video below).

Panelists from Virginia Tech and Virginia Military Institute shared their experiences and reflections on open educational resources as students, educators, researchers, authors, and adopters. Mohammed Seyam, a doctoral student in Computer Science, discussed the value of openly licensed material as a student, researcher, and graduate assistant. Heath Hart, Advanced Instructor of Mathematics, reflected on his adoption of an open educational resource and a subscription-based online textbook in “A Rousing Success and an Unmitigated Disaster.” Greg Hartman, Associate Professor of Mathematics at Virginia Military Institute, discussed his experiences authoring the openly licensed (CC BY-NC) textbook APEX Calculus. Peter Doolittle, Executive Director of the Center for Instructional Development and Educational Research, discussed the open education movement from a teaching and learning perspective, moving beyond content into process.

(Mohammed Seyam 7:45-17:12, Heath Hart 18:06-32:47, Greg Hartman 33:04-50:34, and Peter Doolittle 51:25-1:03:11)

University of Minnesota Open Textbook Library founders David Ernst and Kristi Jensen presented workshops for instructional designers and librarians, highlighting the rising cost of higher education for students as state funding decreases, the 800+% rise in the cost of textbooks since 1978 (four times the rate of inflation), and how open textbooks and openly licensed resources can help alleviate the burden of textbook costs for students and provide faculty with customizable content. Faculty members from a wide variety of disciplines participated in an Open Textbook Adoption Workshop led by David Ernst. Faculty were introduced to open textbooks and their potential impact on access, academic success, and affordability, which is especially relevant in the context of rising student debt. Faculty were also invited to review an open textbook from the Open Textbook Library. For more from David Ernst on open knowledge and open textbooks, see his 2013 University of Minnesota TEDx talk and his 2012 Kyoto TEDx talk.

For more information on open educational resources and open licensing, see our OER Guide or contact Anita Walz at arwalz@vt.edu. The University Libraries are exploring additional ways to support faculty interested in open educational resources. We’d love to hear from you!

Posted in Open Educational Resources, Open Textbooks, University Libraries at Virginia Tech

Open Data Day/CodeAcross Event Recap

Blacksburg’s first celebration of Open Data Day and CodeAcross was organized by Code for NRV, our local Code for America brigade, and the University Libraries, which hosted the event in Newman Library’s Multipurpose Room. The event was originally scheduled for Saturday, February 21 (the official Open Data Day, observed in hundreds of cities around the world), but rapidly accumulating snow forced us to postpone until Sunday. As it turned out, a water leak closed the library around mid-day Saturday, so things worked out for the best. (Our apologies to registrants for the sudden change in plans.)

Open Data Day logo

The first event of the morning was a mapping roundtable led by Peter Sforza, director of the Center for Geospatial Information Technology at Virginia Tech. In addition to looking at a lot of cool maps, we identified three potential areas for collaboration:

  • 3D Blacksburg – an effort to develop a common, shared 3D spatial reference model for Blacksburg and the New River Valley.
  • Contributing more authoritative data to OpenStreetMap for Blacksburg and Virginia by working with GeoGig.
  • Opening data that CGIT compiles for projects and research, for example crash data from the Virginia Department of Transportation.
Peter Sforza Leads the Mapping Roundtable

For the journalism roundtable, we were joined by Scott Chandler, Design/Production Adviser for the Educational Media Company at Virginia Tech, and Cameron Austin, former editor of the Collegiate Times. One problem the CT faces is finding and keeping programmers to help with data projects, such as its academic salaries database. Code for NRV will try to help with recruitment. A database of textbook costs was identified as a possible project that would be of particular interest to students.

Blacksburg town council member Michael Sutphin joined us for the public policy roundtable, which included interesting discussions of town planning notifications and ways to encourage citizen engagement (such as the underutilized site Speak Up Blacksburg). Some of the project ideas included:

  • Visualizations of the town’s historical budget data that could benefit the public and town officials.
  • Opening the raw data used to create tables and maps in the town’s comprehensive plans.
  • Analysis of emails to and from local government officials to create visualizations of the most commented-on topics in the town, e.g. word clouds and tag lists (a rough sketch of this idea follows this list).
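
To make the email-analysis idea concrete, the sketch below shows one way the term counts behind a word cloud or tag list could be produced. It is only a minimal illustration under assumed inputs (a plain-text export of email bodies named town_emails.txt and a hand-picked stop-word list), not anything built at the event:

```python
# Minimal sketch: count the most frequent terms in a plain-text dump of emails.
# "town_emails.txt" and the stop-word list are assumptions for illustration only.
import re
from collections import Counter

STOP_WORDS = {"the", "and", "for", "that", "with", "from", "this", "are",
              "was", "have", "will", "would", "been", "their", "they"}

def top_terms(path, n=25):
    """Return the n most frequent non-trivial words in the file at `path`."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']{3,}", f.read().lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

if __name__ == "__main__":
    for term, count in top_terms("town_emails.txt"):
        print(f"{term}\t{count}")
```

The resulting counts could then feed a word-cloud generator or a simple tag list on the town’s website.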

Our hackathon emerged from the morning’s mapping roundtable, so perhaps it’s not surprising that the projects were geographic in nature:

  • One volunteer used the Virginia Restaurant Health Inspection API created by Code for Hampton Roads to create a map of Blacksburg restaurants and their health scores (a rough sketch of this approach appears after this list).
  • An architecture student started a project that will use open 3D geospatial data from Virginia Tech to design pathways that are sculpted for the landscape.
  • Researchers from the Virginia Bioinformatics Institute adapted a model used in Ebola research to optimize placement of EMS staging areas during flood emergencies in Hampton Roads, Virginia. The model uses open data sets like the location and elevation of every roadway in Virginia to determine which streets would still be navigable during a flood.
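
For the restaurant-map project, the general approach was to pull inspection records from the API and plot them on a map. The sketch below is a rough approximation using the requests and folium libraries; the endpoint URL and the response field names are placeholders, since the actual Code for Hampton Roads API may differ:

```python
# Rough sketch of the restaurant health-score map; not the volunteer's code.
# API_URL and the field names ("latitude", "name", "score", ...) are assumptions.
import requests
import folium

API_URL = "https://example.org/health-inspections?locality=Blacksburg"  # placeholder

def build_map(outfile="blacksburg_health_scores.html"):
    records = requests.get(API_URL, timeout=30).json()
    fmap = folium.Map(location=[37.2296, -80.4139], zoom_start=14)  # central Blacksburg
    for rec in records:
        folium.Marker(
            location=[rec["latitude"], rec["longitude"]],
            popup=f'{rec["name"]}: score {rec["score"]}',
        ).add_to(fmap)
    fmap.save(outfile)

if __name__ == "__main__":
    build_map()
```

Opening the saved HTML file in a browser shows a marker (with name and score) for each restaurant.
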
Waldo Jaquith

To kick off our events Friday evening, we were very happy to have Waldo Jaquith speaking on “Open Government Data in Virginia,” prefaced by a brief introduction to Open Data Day/CodeAcross from Ben Schoenfeld, co-leader of the Code for NRV brigade. Waldo Jaquith is the director of the U.S. Open Data Institute, an organization that builds capacity for open data and supports government in that mission. See the video of his talk below.

Thanks to everyone who turned out Friday and/or Sunday!

Thanks to the University Libraries’ Event Capture Service for the video below.

Posted in Open Data, University Libraries at Virginia Tech

Learn About Open Data at Open Data Day/CodeAcross!

Join us for Blacksburg’s first observance of Open Data Day/CodeAcross, organized by Virginia Tech’s University Libraries and Code for NRV, our local Code for America brigade, this Friday and Saturday, February 20-21, 2015. We will be one of more than 100 Open Data Day and CodeAcross events taking place around the world on February 21. We welcome area residents and local government officials as well as faculty, staff, and students at Virginia Tech to find out how open data can improve our community (coding not required!). Registration is requested to help us with logistics, and for VT faculty, NLI credit is available (look for the sign-in sheet as well).

Waldo Jaquith

Friday, February 20, 2015
5:30pm to 7:00pm
Newman Library Multipurpose Room (first floor)

To kick off our events, we are very pleased to have Waldo Jaquith speaking on “Open Government Data in Virginia,” which will be followed by a brief introduction to Open Data Day/CodeAcross. Waldo Jaquith is the director of the U.S. Open Data Institute, an organization that builds capacity for open data and supports government in that mission. In 2011, in acknowledgement of his open data work, Jaquith was named a “Champion of Change” by the White House and, in 2012, an “OpenGov Champion” by the Sunlight Foundation. He went on to work in open data with the White House Office of Science and Technology Policy. Jaquith, a 2005 Virginia Tech graduate, lives near Charlottesville, Virginia with his wife and son.

Open Data Day logo

Saturday February 21, 2015
9:30am to 5:00pm (lunch provided)
Newman Library Multipurpose Room (first floor)
Registration requested!

Open Data Day/CodeAcross will offer three tracks for coders and non-coders alike. First, there will be a sequence of one-hour discussion roundtables led by experts on the relationship of open data to mapping (10am), journalism (11am), public policy (1pm), health (2pm), and research (3pm). Second, there will be a mapping project emerging from the mapping roundtable and lasting the rest of the day. Third, for the coders, there will be a hackathon using open government data in Virginia. Around 4pm, we will gather together, talk about our projects and what we learned, and plan for the continuation of projects. Attendees may move between these three tracks as they like, or just come for one roundtable. Lunch is provided! While all events are free and open to the public, please register online to help us plan for the roundtables, lunch, and wireless access for those without a Virginia Tech affiliation. If you have questions, please contact me, Philip Young, at pyoung1@vt.edu or 540-231-8845. Hope to see you there! #OpenDataDay #CodeAcross

CodeAcross logo

Posted in Open Data, University Libraries at Virginia Tech

Open Education Week 2015 at Virginia Tech

The University Libraries is planning its first observance of Open Education Week (February 23-27) at Virginia Tech! Open Education Week is intended to raise awareness of open educational resources (OER), which include a wide range of teaching, learning, and research resources such as textbooks, videos, software, and course materials that are free to use and re-purpose. All are welcome to attend and find out how adopting, adapting, and authoring openly licensed resources can advance learning and reduce costs for students. We’re especially pleased to have Kristi Jensen, Program Development Lead for the eLearning Support Initiative (University of Minnesota Libraries), and David Ernst, Chief Information Officer (University of Minnesota College of Education and Human Development), from the University of Minnesota Open Textbook Library, who will host the workshops on Thursday and Friday.

  • Interested in learning more about the conceptual and practical aspects of open licensing? Join us on Tuesday, Feb 24 11am-12:30pm in Newman Library’s 1st Floor Multipurpose Room for the interactive presentation “Get Creative (and Stay Legal).”
  • Wondering what students have to say about affordability of learning resources and how this affects them? Come to the Student Government Association-hosted panel discussion on Wednesday, Feb 25th 12:30-1:45pm (Newman Library’s 1st Floor Multipurpose Room).
  • Want to meet faculty who have authored or are implementing openly licensed resources/open textbooks in their courses? Join us for an Open House at 3:30pm Wednesday, Feb 25th and stay for the panel discussion from 4-5:30pm (Newman Library’s 1st Floor Multipurpose Room). Panelists include Peter Doolittle, Mohammed Seyam, Heath Hart, and Greg Hartman (VMI).
  • Do you advise or assist faculty regarding learning materials or instructional design? Do you work in or study instructional design? Join us for an interesting conversation about open educational resources (OER) and instructional design on Thursday, February 26th 11am-12:15pm (with Kristi Jensen and David Ernst, Newman Library’s 1st floor Multipurpose Room).
  • Do you work in an area academic library and want to be better equipped to talk with faculty about exploring open educational resources? Register for the Open Educational Resources for Librarians workshop on Thursday, February 26th 1-2:30pm (with Kristi Jensen and David Ernst, Newman Library Boardroom, 6th floor).
  • Are you a faculty member responsible for reviewing or selecting textbooks or other learning materials? Are you looking for other options or concerned about the costs for students? Join us for the Open Textbook Adoption workshop Friday, February 27th 9-11am. A limited number of $200 stipends are available for teaching faculty who apply, attend the workshop, and write a review of an open textbook (with Kristi Jensen and David Ernst, Newman Library’s 1st Floor Multipurpose Room).

Open Education Week at VT

Why are we hosting Open Education Week?

The cost of textbooks and learning resources is a burden for students:

Faculty are saving their students money by adopting OER:

And OER give faculty opportunities for innovative teaching and learning:

  • Faculty may create a new resource or customize an openly licensed resource to make it best fit their learning objectives.
  • Openly licensed content may be copied, updated, reformatted, customized, and redistributed free of charge. This gives a tremendous opportunity to faculty (and students) who wish to integrate content into new and different learning resources (for example, changing the formatting of a book to enhance student interaction with the text, or pulling content into different platforms, or customizing problem sets, assessments, and links to other resources).

See our Open Education Week guide for more information on events as well as OER in general. Please contact Anita Walz at arwalz@vt.edu or 540-231-2204 with any questions. Hope to see you at one or more Open Education Week events! #openedweek #openeducationwk

Posted in Open Educational Resources, Open Licensing, Open Textbooks, University Libraries at Virginia Tech

Removing the Journal Impact Factor from Faculty Evaluation

One barrier to open access publishing that receives a thorough debunking on an almost-daily basis, yet refuses to go away, is the journal impact factor (JIF). Unfortunately the JIF continues to be used in the evaluation of research (and researchers) by university committees (in hiring, or the tenure and promotion process) as well as by grant reviewers. This is a barrier to open access, in most cases, because the most prestigious journals (those with the highest JIF) often do not have any open access publication options. There are exceptions like PLOS Biology, but in general the focus on prestige by evaluators is slowing down badly needed changes in scholarly communication. It’s also a barrier because many open access journals are newer and tend to have a lower JIF if they have one at all (three years of data must be available, so an innovative journal like PeerJ won’t have a JIF until June 2015).

But even if the JIF weren’t posing a barrier to open scholarly communication, it would still be a poor metric to use in research evaluation. Here are a few of the reasons:

  • The JIF measures journals, not articles. It was never intended for use in evaluating researchers or their articles.
  • Because the JIF counts citations to a journal in one year and divides by the number of articles the journal published in the two preceding years, there is absolutely no relationship between a journal’s current JIF and a newly published article. For example, the JIF released in 2014 (the 2013 JIF) measures citations in 2013 to articles published in 2011 and 2012.
  • The distribution of citations to journal articles is highly skewed: one article may be cited hundreds of times, while others are cited little or not at all. By using the mean as a descriptor of such a skewed distribution, the JIF is a poster child for statistical illiteracy (see the toy calculation after this list).
  • The JIF is not reproducible. It is a proprietary measure owned by Thomson Reuters (Journal Citation Reports, or JCR), and the details of its calculation are a black box. In particular, it is not known whether non-peer-reviewed content in journals is counted as articles, and/or whether citations from that content are counted.
  • Citations can occur for a variety of reasons, and may or may not be an indication of quality.
  • The JIF only counts citations to an article in its first two years, so journals select articles that will cause an immediate buzz (though there is also a 5-year JIF). Meanwhile, solid research that might accumulate citations more slowly is rejected.
  • Citation rates vary greatly between disciplines. The JIF serves as a disincentive to do interdisciplinary research.
  • Reliance on citations misses the broader impact of research on society.
  • The JIF can be gamed by journal editors who suggest or require that authors cite other articles in that journal. In addition, review and methods articles are privileged since they are more frequently cited. The quest for citations is also why journals don’t publish negative results, even though those are important for science.
  • The JIF covers only about a third of existing journals, and is heavily STEM-centric.
  • The prestige of publishing in a high-JIF journal encourages bad science. Retraction rates have been correlated with the JIF. Researchers are incentivized to produce sexy but unreliable research, and to tweak the results (e.g., p-hacking).
  • Journal prestige is a cause of rising journal prices. If a journal is prestigious in its discipline, it can charge what libraries can afford to pay rather than what the work of producing the journal requires (and high-JIF journals are also more expensive to produce because they spend so much time and effort rejecting papers that are then published elsewhere). Journal prices have outpaced the consumer price index for decades.
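
To make the points about the calculation window and the skewed distribution concrete, here is a toy calculation with made-up citation counts (not data for any real journal). It shows that the JIF is simply a mean over a two-year window, and that this mean can be dominated by a single highly cited article:

```python
# Toy JIF arithmetic with invented numbers, for illustration only.
# Suppose a journal published 12 citable articles in 2011-2012, and these are
# the citations each of them received in 2013.
from statistics import mean, median

citations_2013 = [312, 45, 9, 4, 3, 2, 1, 1, 0, 0, 0, 0]

jif = mean(citations_2013)        # the JIF is just this mean: ~31.4
typical = median(citations_2013)  # the typical article: 1.5 citations

print(f"JIF (mean): {jif:.3f}")
print(f"Median citations: {typical}")
```

Here one article accounts for most of the citations, so the “impact factor” says almost nothing about a typical article in the journal, let alone a newly published one.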

So how is the JIF used in libraries, where it was intended for use in journal evaluation? Here at Virginia Tech, it’s not used at all in journal selection, and is only one of many considerations in journal cancellation. It’s ironic that the JIF has so little use in libraries while becoming so influential in research evaluation. Why should libraries care about this? To some extent, because “our” little metric somehow escaped and is now inflicting damage across academia. More importantly, we often encounter faculty who don’t fully understand what the JIF is (only that it should be pursued), and, as mentioned at the beginning, the focus on JIF is a real barrier as we advocate to faculty for the open dissemination of research.

Why is the JIF so appealing? Convenience no doubt plays a major role: just grab the number from JCR (or from the journal itself, since many institutions can’t afford JCR). After all, it’s a quantitative “factor” that measures “impact” (to three decimal places, so you know it’s scientific!). And if the JIF weren’t problematic enough, it’s now being incorporated into world university rankings.

How can we address this problem? One attempt to address the misuse of the JIF is the San Francisco Declaration on Research Assessment (DORA), which gives this general recommendation:

  • Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

DORA continues by giving specific recommendations for funding agencies, institutions, publishers, organizations that supply metrics, and researchers.

DORA

For institutions, DORA has two recommendations:

  • Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
  • For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

Virginia Tech should consider signing DORA and making that known, as University College London just has. But more importantly, it should also become official university policy, making it clear to all faculty that the JIF should not be used in hiring, tenure, or promotion.

For those interested in exploring further, here are just a few of the most recent commentaries I’ve come across (and to discuss further, consider my upcoming NLI session):

Nine Reasons Why Impact Factors Fail And Using Them May Harm Science (Jeroen Bosman)
Why We Should Avoid Using the Impact Factor to Assess Research and Researchers (Jan Erik Frantsvåg)
If Only Access Were Our Only Infrastructure Problem (Bjorn Brembs, slides 4-23)
Misrepresenting Science Is Almost As Bad As Fraud (Randy Schekman)
Deep Impact: Unintended Consequences of Journal Rank (Bjorn Brembs, Katherine Button, Marcus Munafo)
The Impact Factor Game (PLOS Medicine editors)
Everybody Already Knows Journal Rank is Bunk (Bjorn Brembs)
Sick of Impact Factors and Coda (Stephen Curry)
Excess Success for Psychology Articles in the Journal Science (Gregory Francis, Jay Tanzman, William J. Matthews)
Assess Based on Research Merit, Not Journal Label (David Kent)
Do Not Resuscitate: The Journal Impact Factor Declared Dead (Brendan Crabb)
High Impact Journals May Not Help Careers: Study (Times Higher Education)
Choosing Real-World Impact Over Impact Factor (Sam Wineburg)
Journal Impact Factors: How Much Should We Care? (Henry L. Roediger, III)
How to Leave Your Impact Factor (Gary Gorman)
The Big IF: Is Journal Impact Factor Really So Important? (Open Science)
End Robo-Research Assessment (Barbara Fister)

(I’m sure there are many, many more I missed!)

Posted in Research Evaluation

Book Review: Open Access and the Humanities

Open Access and the Humanities

Martin Paul Eve, Open Access and the Humanities: Contexts, Controversies and the Future (Cambridge: Cambridge University Press, 2014).

Martin Eve’s new book is a welcome examination of the unique challenges that the humanities face in open access publishing. Appropriately enough, it is published open access by Cambridge University Press (as separate PDFs, or download the full PDF from the Internet Archive) under a Creative Commons Attribution-ShareAlike license (CC BY-SA 4.0). Dr. Eve is a Lecturer in English at the University of Lincoln, co-directs the Open Library of the Humanities (OLH), and frequently speaks and blogs about open access.

Some of the challenges for the humanities in open access publishing become apparent in Peter Suber’s preface. In addition to a general lack of funding in these fields, humanities journals tend to have high rejection rates, so article processing charges (APCs) become a non-starter; the fee-based model works best in well-funded fields with relatively low rejection rates. In addition, it is books, not journal articles, that are of greatest importance in the humanities.

Therefore it is no surprise that Eve’s chapters on economics and monographs (Chapters 2 and 4) serve as the heart of the book. Eve hypothesizes three economic models (p. 57): one in which authors are paid for their work, one in which publishers are paid for their services, and one in which libraries provide cooperative funding. Those familiar with the OLH’s funding model won’t be surprised to find the first two models dismissed in short order and the cooperative model examined at length. Indeed, the cooperative model has already been used to support arXiv, SCOAP3, and most recently Knowledge Unlatched, albeit in a pilot effort. In considering this model, Eve begins by stating that there is already enough money in academic publishing to cover the current production of articles and monographs. Problems of transition funding are addressed, though to me the most interesting problem is how to handle free riders in cooperative efforts (p. 75):

The first of these difficulties, the so-called ‘free-rider’ problem, relates to the economic understanding that rationally self-interested actors do not wish to pay for commodities from which others benefit for free. In other words, except in philanthropic modes or systems of taxation for public good, most people usually resist paying for goods for which only they pay, but from which non-purchasers also derive benefit. This results, for gold open access publishing, in a kind of prisoner’s dilemma. If all library entities behave in a purely self-interested way and disallow free riders, these collectively underwritten, non-APC models cannot emerge. Admittedly, the increasing enclosure of universities within market logics doubtless makes it harder for acquisition librarians to justify expenditure on projects where there are free riders to senior managers.

I think there may be additional problems for libraries here (enough for a separate blog post), and I hoped for a fuller examination of this voluntary contribution model, though recent efforts, as Eve notes, have been successful.

Chapter 3 on open licensing helpfully outlines the three current types of copyright and licensing agreements authors will encounter: full copyright transfer, an exclusive license to publish, and a non-exclusive license to publish (p. 88). This chapter also does a good job of describing how copyright restricts the uses of research, and discusses some common objections to CC licenses, such as concerns about derivatives, undesirable re-use, and commercial use.

Eve explains the economic problems of monograph publishing in Chapter 4 and offers an overview of four current models of providing open access to them (pp. 130-135): print subsidy, institutional subsidy, freemium, and collective funding. I was unaware that publishers pay for peer review of monographs, which contributes to their higher production costs and the resulting book processing charges (BPCs) that are largely unaffordable for humanities authors. I was glad to see two issues addressed in this chapter. The first is the role of commercial presses in determining what is published (and therefore who is hired or receives tenure) based on evaluations of potential sales. The second is the role of scholarly societies (p. 128):

…calls to protect society revenue models are often inextricable from calls to protect publisher profits; the two are interwoven. This rhetoric of economy and sustainability, it must also not be forgotten, will always make one group’s sustainability possible only at the expense of another: usually the library.

There are interesting discussions of the advantages and disadvantages of archiving (“green” open access) in the humanities, though it seems that there are more of the latter. Archiving is often not permitted for monographs, pagination is lost, and the strain on library budgets goes unaddressed. Strangely, Eve omits the biggest problem, which is that a majority of authors have permission to archive journal articles, yet so few actually take advantage of it. Additionally, in proposing an archiving workflow for authors, Eve suggests checking SHERPA/RoMEO for permissions. While that is an invaluable resource, I think it is more important that authors read (or at least quickly review) their publishing contract for archiving rights.

Eve’s final chapter concerns innovations, and mostly addresses peer review. While emphasizing that open access does not require changes in peer review, he argues that the opportunity to reconsider other publishing practices should not be missed, and in particular to address the false perception that open access means lower quality. Critiques of “filter-first” peer review are echoed. Two explorations in this chapter are especially interesting: first, how the PLOS One type of review for technical correctness rather than importance might be translated to humanities fields; and second, how assumptions that speed and precedence are necessary only for the sciences might be mistaken.

Eve’s writing style is clear, though overuse of comma phrases tended to bog this reader down at times, and there is the occasional odd construction (“greenly depositing,” p. 11). The book ends somewhat abruptly and might have benefited from a concluding chapter to help synthesize the many aspects of open access publishing for the humanities. The glossary (pp. 179-181), while helpful in itself, defines some terms so minimally as to invite further questions. In addition to the glossary and Suber’s preface, the book contains notes, a bibliography, and an index (which can be supplemented by searching the PDF version).

Despite these minor criticisms, Open Access and the Humanities is thought-provoking and remarkably balanced, perhaps due to Eve’s dual role as open access advocate and publisher. Eve approaches all of these complex issues in a spirit of philosophical investigation, and does not avoid examination of related issues such as academic freedom and research assessment. A broad audience of humanists, publishers, and librarians will find value in this exploration of open access for humanities disciplines, and many of them will no doubt be cheering for the success of the new publishing models it describes, such as the Open Library of the Humanities.

Posted in Book Reviews, Business Models, Open Access, Open Books

OpenCon 2014 Reports from Virginia Tech Grad Students

As part of Open Access Week, the University Libraries and the Graduate School offered two travel scholarships to OpenCon 2014, a conference for early career researchers on open access, open data, and open educational resources. From a pool of many strong essay applications, we chose Jessie Gunter, a master’s student in public health, and Mohammed Seyam, a Ph.D. student in computer science. Jessie and Mohammed attended the conference in Washington, D.C. November 15-17, and sent the reports below. Be sure to check out the OpenCon 2014 presentation videos and slides.

Jessie and Mohammed at Senator Tim Kaine’s office

Jessie Gunter writes:

OpenCon 2014 surpassed all expectations I could have had for a conference about Open Access, Open Education, and Open Data. The enthusiasm from all of the conference attendees was contagious and empowering, and I gained a much more nuanced understanding of all things “Open” thanks to a variety of incredible speakers, panels, workshops, and coffee break conversations. A part of the conference that I found particularly useful as an early career researcher and student was a workshop with Erin McKiernan called “How to make your research open.” The major takeaways from this, for me, were:

  • Check out peer-reviewed Open Access journals and publishers. PeerJ, PLOS, and eLife are examples of innovative OA publishing models with strong peer review. Also check DOAJ, the Thomson Reuters Open Access Journal List, and the Cofactor Journal Selector Tool to find journals that you might want to publish in.
  • If you don’t or can’t publish Open Access, check Sherpa/Romeo to see each publisher’s copyright policies on pre-prints, post-prints, and self-archiving. If the journal in which you are publishing allows pre-prints, for example, get your work out there to advance knowledge!
  • Consider using GitHub for code and data archiving, collaborative writing tools such as writeLaTeX, shareLaTeX, or Authorea, and figshare for all kinds of self-archiving.
  • Explore your licensing options at creativecommons.org/choose (*Be mindful that choosing an option that prevents commercial entities from reusing your work would prevent your research from being used in university courses, Wikipedia…).
  • Consider using WordPress or another blogging platform to explain your work to non-experts.
  • Track your impact with Altmetric, and showcase your work and citations.

OpenCon 2014

Mohammed Seyam writes:

Day 0
On my way to Washington DC to attend OpenCon 2014, I was ready for a “regular” conference, where speakers give pre-planned speeches and the audience keeps wishing for one or two exciting talks per day! Now that the three-day conference is over, I can say that OpenCon 2014 was a conference where every single session of its rich program added real value and was exciting enough to keep us all fully engaged with every speaker. Moreover, the conference wasn’t only about speakers and sessions; it was also about people getting to know each other and learning from the diverse international experiences of the participants. This fine mix of sessions and social activities resulted in one of the most remarkable events I have ever attended.

The “different” atmosphere began with the welcome social on the day we arrived at the hotel. The organizers gathered participants in an open space to get introduced to each other and provide some background before delving into the actual program events. At that two-hour social, I got to know people from at least 8 different countries, as well as students from several universities in the USA. For me, that was an indicator that a very special event was to come.

Day 1
The first day of the conference – which was held at the Washington College of Law at American University – started with Michael Carroll, who gave a brief introduction to the conference. One of his quotes got me into the conference mood: “You can’t just think open access is a good idea – you have to believe it.” From that talk on, the belief in open access grew stronger in all the conference participants. After Carroll, it was Patrick Brown‘s turn to talk about founding PLOS (Public Library of Science) and how to raise awareness and increase public interest in open access issues. Petitions, advocacy sessions, and direct talks are all ways to support open access, even if they bring little or no immediate result. His most inspirational quote on that topic was: “If you don’t believe you’ll succeed, nobody else will!”

After Brown’s keynote, there was a panel on “The state of Open,” where speakers presented the current state of open data, open education, and open access in and outside the USA. The first speaker was Heather Joseph, the Executive Director of SPARC (Scholarly Publishing and Academic Resources Coalition). She talked about SPARC’s goal: set the default to open. She also presented the state of open in the USA. During her talk, Joseph stressed that open access doesn’t mean only being able to access the material, but also being able to fully reuse it. In the USA, 50 institutions offer financial support for those who want to publish in open access journals (Virginia Tech is one of them!). As for legal issues, Joseph noted that California became the first state to mandate open access to taxpayer-funded research.

Iryna Kuchma then presented the state of open from an international perspective. Kuchma is the open access programme manager at EIFL (Electronic Information For Libraries). Besides the many figures she presented and the different approaches she has used in advocating for open access, she announced that open access is now required by law in Mexico, Argentina, and Peru.

Ross Mounce then gave an interesting presentation on the state of open data in research. Since Mounce is a postdoctoral researcher, he had deep insights into open data issues. He simply believes that if researchers don’t share their data, their research can’t be trusted (or believed!). The size of the data shouldn’t be a barrier: some MRI scans totaling 39 gigabytes were recently shared openly online. Mounce is against PDF and other closed formats, and he advises using open formats such as CSV.

The interesting panel ended with Nicole Allen, the Director of Open Education at SPARC. Allen’s main point involved open textbooks, which is especially important because the cost of textbooks is rising at almost three times the rate of inflation! Simply put, students can’t learn from materials they can’t afford. Therefore, with the explosion of openly licensed content over the last few years, it’s clear that open textbooks will be the next big step towards open education.

The second keynote speaker of the day was Victoria Stodden, an associate professor in the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign. Her talk was about the importance of reproducibility in research, as well as current efforts by several US organizations to sort out the copyright issues related to open access. She sees reproducibility and open access as sister issues. She then discussed the Mertonian norms: communalism, universalism, disinterestedness, originality, and scepticism. These norms were also discussed by Brian Nosek in his Open Access Week keynote at Virginia Tech. Stodden showed that scepticism requires transparency in the communication of the research process. She also talked about the importance of senior researchers helping to move open access forward, as young researchers may not be able to do it all on their own. Stodden finally shared some facts about the steps the Obama administration has taken to support open access in some government organizations.

After lunch, Uvania Naidoo from the University of Cape Town (South Africa) led a workshop on open access in the context of developing countries. It was important to highlight the case of open access in countries that already have problems with “access” itself. In my sub-group, we heard from a Mexican lawyer about his work to make open access mandatory for publicly funded research in Mexico. Since many of the conference attendees were not from the USA, this session was full of personal stories about how the world sees open access, and full of different experiences that added a lot to the overall conference theme.

The second panel of the day was about “innovative publishing models,” and it was moderated by Meredith Niles, a postdoctoral research fellow at Harvard University. The first speaker was Arianna Becerril from Redalyc, who talked about her organization, a non-commercial regional platform that provides open access to research from Latin America. For her, science that isn’t seen doesn’t exist! After Becerril, Peter Binfield, the cofounder of PeerJ, presented his thoughts on open publishing, emphasizing PeerJ’s history and future. He discussed the debate over open peer review and offered some arguments that encourage reviewers and researchers to consider open peer review as the default reviewing system. He also demonstrated some alternatives to the impact factor system and presented the PeerJ reputation system, which provides incentives for researchers to openly share more of their work. Then Martin Paul Eve, from the University of Lincoln in the UK and cofounder of the Open Library of Humanities, presented what he believes to be the three main problems of open access: researcher access, public access, and reusability. Since most work in the humanities is not funded, article processing charges are unaffordable for humanities researchers; that is what led Eve and his cofounders to create the Open Library of Humanities to overcome this problem. Finally on that panel, Mark Patterson, the Executive Director of eLife, presented his view on how funders can take action to support open access. He also shared his thoughts on the traditional impact-factor-based approach to research assessment, seeing that as the real barrier to overcome, and described some alternative initiatives led by the eLife publishing system.

The last panel of the day was on the “Impact of Open,” moderated by Erin McKiernan, a postdoctoral fellow at Wilfrid Laurier University. The first speaker on this panel was Daniel DeMarte, the VP for Academic Affairs at Tidewater Community College. He presented his college’s impressive experience of adopting open educational resources (OER), which has saved its students around $250,000 a year! The question that came to the minds of almost all the attendees was: how many millions could be saved if OER were used at many institutions?

The second speaker was Jack Andraka, the winner of Intel’s 2012 International Science and Engineering Fair for his pancreatic cancer research and testing tool. He told the story of his breakthrough cancer diagnostic, which I already knew, but it was new to me how important one open paper published in PLOS had been for his research. Andraka is a very good example of how open research can affect human lives, and his cancer diagnostic test is clear proof of this.

Peter Murray-Rust from the University of Cambridge then presented an expert view of why open is important for life. He hopes for a revolution in the world of publishing, as he believes that closed access means people die. He also argues that some laws can be broadly interpreted to help save more lives. Murray-Rust’s talk was a real inspiration for many of the attendees, and I believe he shaped how the young attendees think about the current and future state of open access.

The last keynote of the day was by John Wilbanks, who discussed his view of “Open as a platform.” He emphasized the role of reusable data over merely open data, and asked a simple question: “Which is more valuable: Google or Google Scholar?” He also believes that a winning strategy for designing for open access would be “let’s create more value for the user,” not “let’s build an open ecosystem.” He concluded his talk with the legal issues and the steps that should be taken to push policy makers to consider open access bills and mandates that better serve people and communities.

This wonderful first day of OpenCon 2014 ended with a nice reception at Old Ebbitt Grill in downtown Washington DC, where we got to know more about those we didn’t have the chance to talk with during the day. During the evening reception, most of the conversations were about views, feedback, ideas, and questions from the day’s sessions. I think that social was more of a bonus session for the day than a final break!

Day 2
After the great first day, expectations for the second day were very high, and the excitement was clear on the faces of attendees during breakfast. Frankly, the second day didn’t disappoint us; it was as rich as the first, thanks to the excellent organizing team.

The first keynote was by Audrey Watters of Hack Education, who was concerned with moving from “open” to “justice.” She began the talk with a definition of “openwashing,” which she defines as having the appearance of open source and open licensing for marketing purposes while continuing proprietary practices. She then presented her ideas on how data is not neutral and how injustices can be embedded within data. She believes that it’s not enough to have open access, because political engagement and social justice are still needed to get the full benefits of being open.

The second keynote was by Erin McKiernan, a postdoctoral fellow at Wilfrid Laurier University. As an early career researcher, she described how young researchers can support and work for open access while maintaining their jobs and futures. She provided some tips for researchers on how to avoid the dilemma of “worshipping” the impact factor, as she believes that impact factors have nothing to do with academic quality. She showed her open pledge, concluding it with a quote to remember: “If I’m going to make it in science, it has to be on terms I can live with.” McKiernan also provided some tips for young researchers introducing open access to their advisors: be concise, include data, explain the benefits, list the different options, and talk to the advisor early.

The R2RC (Right to Research Coalition) then honoured Melissa Hagemann of the Open Society Foundations, in the first awards given by R2RC, for her foundational role in the open access movement. I believe this is a good first step towards recognizing those who have given so much of their time and effort to advocate for open access for human wellbeing.

Following the awards ceremony was a panel where students and early career researchers presented their success stories in working for open access. Stories from the USA, Kenya, Nepal, Nigeria, and Australia showed how much students can achieve through their enthusiasm and belief in the open access idea. It was also noticeable that open access worked well even in developing countries that in many cases lack “access” itself. It’s all about how people work together and don’t accept “no” as an answer until they achieve their goals.

The third keynote of the day was by Phil Bourne, the first Associate Director for Data Science at the NIH (National Institutes of Health). Bourne was concerned with how crazy and broken the system is, and with how important open access is to human health. He showed the top-down and bottom-up approaches that can be used to facilitate data sharing. Like more than one of the previous speakers, he thinks that money is important but won’t solve all the problems when it comes to being open. He emphasised the roles of government, as well as promotion and hiring committees, in encouraging researchers to openly share their data and research.

After lunch, there was a workshop by Erin McKiernan and Ross Mounce on how to make research open. They provided a roadmap and several tools that can help any researcher go open with their research. The main steps they presented were: find a journal that allows preprints (like arXiv, figshare, and PeerJ), choose a journal to publish in (using DOAJ, for example), explore licensing options (on Creative Commons), self-archive the research (on PeerJ, figshare, a university repository, or a personal website!), and write blog posts to explain the research (on WordPress, for example). Many tools and links were shared in this workshop, which I believe was of great value to many of the attendees.

Peter Binfield made a second appearance, this time as a keynote speaker, talking about megajournals. He showed how important it is to fight for “impact neutral” journals. He also thinks that rejected papers, rejection letters, and paper reviews are all valuable and worth sharing publicly online. Coming from PeerJ, Binfield had some deep insights into how the publishing system works and how it can benefit from the open access movement.

Two panels were then held back-to-back in which some of the participants presented their projects supporting open access. Again, it was amazing and very inspirational to see how much researchers and students can achieve based mainly on their own beliefs.

The final keynote of the day was given by Carolina Botero, who was supposed to talk about open access in Latin America. However, her talk covered many aspects of open access in general, especially the legal issues and how to work with governments to create laws that mandate open access. She also talked about the Colombian grad student who faces jail time for sharing a thesis online. She concluded her talk by emphasizing the role of young researchers and activists in raising public awareness of the importance of open access and how it affects people’s lives.

The day concluded with an “unconference” session, where informal meetings were held at the hotel so that subgroups could discuss whatever topics interested them. One of the active tables discussed “Open access in the humanities”; participants there shared their concerns about posting preprints, because in the humanities it’s like sharing their own thinking process. They also discussed the review process and how open reviews could help their field.

Day 3
The third – and last – day began early at the Hart Senate Office Building, where we were assigned to groups for specific tasks. That day was called “Advocacy Day,” as we were supposed to meet with legislators to tell them more about open access and encourage them to move forward on mandating openness. We had a short – yet entertaining and information-packed – talk by Amy Rosenbaum (Deputy Assistant to the President of the United States for Legislative Affairs). She talked about her past relationship with SPARC and gave some quick tips on how to approach legislators in such advocacy meetings. We also got some tips from the organizers that helped us in our advocacy meeting. Our group was supposed to meet with Senator Tim Kaine (D-VA) at the Russell Senate Office Building, but since he wasn’t available we met with his legislative aide. With three of our group coming from universities in Virginia (Virginia Tech and the University of Virginia), Senator Kaine was quite familiar to us, and his interest in education encouraged us to discuss open education policies with his aide. It was a very productive meeting, as everyone in the group shared personal experience with the topic, and the aide was eager to know more, promising to pass all our thoughts along to Senator Kaine when he was back in the office. One of her interesting questions was whether any state had bills related to the issue, and our answer was California. She asked us to provide some details, and we promised to send them to her by email.

After lunch, we moved to the US Department of State, where we met with staff working on the Open Government Partnership. All of us were international students, so they talked about their efforts in working with several countries on open access issues, and they wanted to hear our experiences and how they could work with our governments to support open access. They explained that the Obama administration is determined to work on open access as part of its mission, but that things move slowly when it comes to legislation. However, they have achieved successes related to the Open Government Initiative, and they were optimistic about the future of open access in the USA and the other countries they are working with.

That great day ended with a dinner at the University Club, where it was clear how “unique” the conference was. Although it had the same friendly atmosphere as the very first social, one could easily see that many real friendships had been created over the three days. Some very inspiring talks were given by the organizers, followed by quality time for participants to share final conversations with each other. Next steps were introduced to build on the momentum of the event, and many online communication channels were put to use.

Final Thoughts
During many of the breaks and subgroup talks, I was proud to talk about Virginia Tech’s initiatives to encourage open access. VT Libraries’ open repository, VTechWorks, was always appreciated by the audience, as was the VT initiative to fund researchers who want to publish in open journals. Open textbooks were introduced to me just a few weeks before OpenCon 2014, and I was proud to point out that VT is already taking steps towards open textbooks while that topic was being presented at the conference. Moreover, VT’s activities during Open Access Week helped me get involved easily in topics that needed deep understanding, and that was a very good example of how universities can advocate for open access through such activities and sessions. Since many other universities in the USA didn’t have any policies or even ideas for open access, I was glad that VT (and VT Libraries) began working on open access early, which shows the vision our administration has. This vision, I believe, is what led VT Libraries – together with the VT Graduate School – to support two students to attend this conference. I was glad when I was accepted for the scholarship, and I was excited to participate, but now I’m inspired and enthusiastic to work with VT Libraries to find ways to advocate for open access and encourage VT staff members and researchers to openly publish their work. I’m also very interested in working at the legislative level, especially in Virginia, to draw legislators’ attention to the importance of open data and open education for the community as a whole, not only for the university and researchers. I believe that the experience I gained from OpenCon 2014, together with the connections I’ve made with other open access activists, will help me a lot in working on this during the rest of my time at VT.

OpenCon 2014 by Aloysius Wilfred Raj, CC-BY 2.0
Posted in Open Access, University Libraries at Virginia Tech

The Research Data Assessment Survey at Virginia Tech

An e-mail about the current university-wide research data survey was recently sent to most faculty members at Virginia Tech (specifically, it was sent to Research Faculty and Collegiate, or Teaching & Research, faculty). The survey addresses many aspects of data management, including data sharing, which is receiving increased attention from research funders as well as journals. Open data is important for research integrity, reproducibility, meta-research, and accelerating discovery. Some have gone so far as to say that a peer-reviewed article is just advertising for the important stuff: the data. Yet there are numerous barriers to sharing (many of them quite valid) that we need to understand.

The survey results will help us identify how data are being stored, managed, shared, and reused by faculty at Virginia Tech. It will also help us gain a better understanding of data management needs and attitudes towards data sharing and discovery.

The survey is sponsored by the Office of the Vice President for Research and the University Libraries, and administered by the University Libraries. Participation is voluntary and will be recorded confidentially, and no personally identifiable information will be revealed. If you have questions, please contact Yi Shen at yishen18@vt.edu or 540-231-5329.

Helping researchers manage data is a top priority for the University Libraries. Your input is essential to help us serve your needs. If you’ve already completed the survey, thank you. If you haven’t, please take a few minutes to complete it. Thanks!

Posted in Open Data, University Libraries at Virginia Tech

OA Week Event: Faculty and Graduate Student Panel

Our panel of faculty and graduate students is one of the most interesting events of every Open Access Week, and the 2014 version did not disappoint. In the past we’ve hosted separate, consecutive panels, but this year we decided to combine the panels into a single, shorter event.

Faculty and Graduate Student Panel,
Open Access Week 2014 at Virginia Tech

Our faculty panelists were Iuliana Lazar (Biological Sciences), Nicolaus Tideman (Economics), and Randy Wynne (Forest Resources and Environmental Conservation). They were joined by our student panelists, Christian Matheis (Ethics & Political Philosophy, and editor-emeritus of SPECTRA), Caitlin Rivers (Computational Epidemiology, Network Dynamics and Simulation Science Laboratory), and Michelle Sutherland (Educational Media Company, and former editor of Philologia).

Dr. Tideman had several interesting comments about the role of copyright in scholarship, which might be summed up by saying that copyright is inappropriate for academia. Dr. Wynne shared concerns such as reproducibility, data citation, and access to research in the developing world. For Caitlin Rivers, who is working on Ebola epidemiology, the data she uses is open, so it only makes sense that the output is too, and it must be available to people in West Africa. When Michelle Sutherland graduated, she lost access to most peer-reviewed research. This is a point that should be made more often, and it is an irony that it happens after four years of instruction from faculty and librarians on finding and using peer-reviewed research. Asked what they do when they encounter paywalls, panelists had a variety of responses, from using the Twitter hashtag #icanhazpdf and sharing personal subscriptions among several people, to searching Google Scholar and research networking sites. For the full discussion, see the panel video below. Thanks very much to our panelists for the insight and discussion!

Thanks to the University Libraries’ Event Capture Service for the video below.

Posted in Open Access Week, University Libraries at Virginia Tech

OA Week Event: Keynote Address by Brian Nosek

Brian Nosek, Professor of Psychology at the University of Virginia and co-founder and director of the Center for Open Science, gave the keynote address for Open Access Week 2014 on Monday night, October 20. “Scientific Utopia: Improving the Openness and Reproducibility of Research” noted the gap between scholarly values and how scholarship is actually carried out, and described how the Open Science Framework can help address this issue.

Brian Nosek, Open Access Week Keynote Address at Virginia Tech

The presentation began with a slide listing the norms (idealistic values) and counternorms (what often happens) of scholarship as opposing pairs, for example communality vs. secrecy. Looking at the counternorms, it was easy to see that these behaviors are aligned with academic incentives and “getting ahead” in general. Nosek also showed the amusing, if disheartening, results of a study comparing researchers’ agreement with the norms, how well their own practices align with the norms, and how well they think the practices of others align with the norms. He then identified current problems in the published literature: positive results despite low statistical power, variability in analysis, and selective reporting.

The Center for Open Science helps enable reproducibility, registration, and openness by making them part of the research workflow. COS endeavors to provide the technology to enable change, the training to enact change, and the incentives to embrace change. The technology is the Open Science Framework, which provides versioning, documentation, and other services in addition to connecting parts of a project together (Dropbox, figshare, etc.). COS offers training in statistics, tools, and workflows both online and in person. And it is working on incentives such as usage statistics, badges, and registered reports. Interestingly, registered reports move peer review to after the design phase rather than after the report is written, addressing the negative-results/selective-reporting problem. The current incentive in academia is to get published, not to get it right, but COS is helping to change that.

Brian Nosek’s keynote address was delivered to a packed room: we counted 120 attendees. Thanks to everyone who turned out!

Thanks to the University Libraries’ Event Capture Service for the video below.

Posted in Open Access Week, University Libraries at Virginia Tech