Inspired by the Association of Southeastern Research Libraries webinar, “Adding Patent Records to Clemson’s IR — Highlighting the University’s Output,” VTechWorks, Virginia Tech’s institutional repository, now offers a similar collection, Virginia Tech Patents. The collection contains 645 U.S. patents assigned to Virginia Tech at the time of patent application, with dates of issuance spanning 1919-2016. The collection’s display is customized with fields, search filters, and facets particular to patents, such as patent type, inventor, assignee, patent and application numbers, and patent classifications. We created the collection because a sizeable body of useful public domain content could be harvested programmatically, and because it offers an opportunity to spotlight how Virginia Tech “invents the future.”
To enable other repositories to develop a similar collection, we offer our software, Patent-Harvest, in a GitHub repository. Patent-Harvest contains a Java program written to harvest all patents with Virginia Tech as the assignee. It can be adapted to harvest patents and associated files for other organizations or search parameters.
The harvesting program uses the PatentsView API to retrieve relevant metadata for all Virginia Tech patents and outputs a CSV spreadsheet. If desired, all the corresponding files for each patent are also downloaded and logically renamed. Since most United States patent documents are image-only PDFs, a script is included that uses optical character recognition to read text content and embed it in the patent documents. This makes the text of the patent documents searchable, but doesn’t change how they appear to the reader.
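To make the harvesting step concrete, here is a minimal, illustrative sketch in Python (the actual Patent-Harvest tool is a Java program). The endpoint URL, query fields (`assignee_organization`, `patent_number`, and so on), and the sample response shape below are assumptions based on the legacy PatentsView API and may differ from what the program actually uses; adjust the assignee string and fields for your own organization.

```python
import csv
import io
import json
import urllib.parse

# Assumed legacy PatentsView endpoint; verify against current API docs.
ENDPOINT = "https://api.patentsview.org/patents/query"

def build_query_url(assignee_organization):
    """Build a PatentsView GET URL for all patents held by one assignee."""
    params = {
        # q: the search criteria; f: which fields to return; o: paging options
        "q": json.dumps({"assignee_organization": assignee_organization}),
        "f": json.dumps(["patent_number", "patent_title", "patent_date"]),
        "o": json.dumps({"per_page": 100}),
    }
    return ENDPOINT + "?" + urllib.parse.urlencode(params)

def response_to_csv(response_json):
    """Flatten a PatentsView-style JSON response into CSV text."""
    rows = response_json.get("patents", [])
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["patent_number", "patent_title", "patent_date"]
    )
    writer.writeheader()
    for row in rows:
        writer.writerow({k: row.get(k, "") for k in writer.fieldnames})
    return out.getvalue()

# A canned response in the assumed shape, so the CSV step can be
# demonstrated without a network call:
sample = {
    "patents": [
        {"patent_number": "9999999", "patent_title": "Example invention",
         "patent_date": "2016-01-05"}
    ],
    "count": 1,
}

print(build_query_url("Virginia Tech"))
print(response_to_csv(sample))
```

For the OCR step, one common approach (not necessarily the one the included script takes) is a tool such as ocrmypdf, which adds an invisible, searchable text layer over the scanned page images, e.g. `ocrmypdf input.pdf output.pdf`, leaving the document’s appearance unchanged.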
Happy Open Education Week! 2017 marked the fourth year of celebrating international Open Education Week at Virginia Tech. The Open Education Week planning committee set goals to meet the felt needs of faculty on campus and to encourage students to communicate with faculty about how learning resources affect their learning.
Cost is always an issue. The committee agreed that we wanted to do something more positive than focus on barriers to learning, so we chose the theme “The Potential of Open Education.” What is Open Education anyway? Open Education includes pedagogies, practices, and resources that reduce barriers to learning. “Open Education combines the traditions of knowledge sharing and creation with 21st century technology to create a vast pool of openly shared educational resources, while harnessing today’s collaborative spirit to develop educational approaches that are more responsive to learners’ needs.” (Source: Open Education Consortium)
Two events oriented toward faculty and graduate students featured local and invited speakers, presented both live and via live stream:
Seven Platforms You Should Know About: Share, Find, Author, or Adapt Creative Commons-Licensed Resources
Thanks to Kayla McNabb for setup of the video below and Neal Henshaw for editing.
Virginia Tech Open Educational Resource (OER) authors and adapters and several students discussed the use, benefits, challenges, and opportunities related to using or adapting openly licensed course materials for course use. Panelists included Jane Roberson-Evia (Statistics), Mary Lipscombe (Biological Sciences), Stephen Skripak (Pamplin), and Anastasia Cortez (Pamplin). Publishing expert Peter Potter (University Libraries) and students Mayra Artiles (Doctoral student, Engineering Education) and Jonathan de Pena (Senior, Finance) also joined the panel, moderated by Anita Walz (University Libraries).
Virginia Tech’s Student Government Association (SGA) designed the Open Education Week exhibit to educate and to solicit visitor input. The interactive exhibit features a range of required student learning materials including textbooks, homework access codes, software, and clickers, visual representations of data related to course material costs and student responses, information about open education options, a new Creative Commons brochure, CC stickers, and several interactive features. Students also have the opportunity to write a personalized message on an SGA-designed postcard to their professor, department head, or whomever they want to contact.
A selection of resources used in the exhibit is linked here:
Florida Virtual Campus (October 7, 2016) 2016 Student Textbook and Course Materials Survey. Available here.
National Association of College Stores (2011) “Where the New Textbook Dollar Goes” Used with Permission of NACS. (No updated data available). Available here.
Senack, Ethan. (January 2014) Fixing the Broken Textbook Market: How students respond to high textbook costs and demand alternatives. U.S. PIRG Education Fund & the Student PIRGs: Washington, DC. Available here.
Senack, E., Donoghue, R. (2016) Covering the cost: Why we can no longer afford to ignore high textbook prices. Student PIRGs: Washington, DC. Available here.
U.S. Bureau of Labor Statistics, as quoted by Popken, B. in “College Textbook Prices Have Risen 1,041 Percent Since 1977,” NBC News (August 6, 2015). Available here.
SGA also hosted a Multimedia Event. This student-led engagement event featured multiple interactive stations where students could discuss, answer questions, take pictures, and write postcards. Two word-cloud prompts were particularly telling. The first asked, “Where would your money go if you didn’t have to buy textbooks?” The top two answers by far reflected daily living expenses: “food” and “rent.”
Students were also asked to reflect on how they avoid buying full price textbooks. Responses included “Rent [textbooks],” “go without,” “hope for the best,” “borrow them from a friend,” and “buy used.”
The Open Education Week at Virginia Tech planning committee for 2017 included: Anita Walz (Chair), Kayla McNabb, Quinn Warnick, Anna Pope, Anne Brown, Kimberly Bassler, and Craig Arthur.
Exhibit curators: Virginia Tech Student Government Association: Anna Pope, Kenneth Corbett, Spencer Jones, Holly Hunter, and Sydney Thorpe with the University Libraries’: Scott Fralin and Anita Walz
Special thanks for event support: Carrie Cross, Trevor Finney, and Kayla McNabb
The University Libraries will be hosting its second Open Data Week on April 10-13 with opportunities to learn more about sharing, visualizing, finding, mining, and reusing data for research. In addition to panel discussions on open research data as well as on text and data mining, there will be two sessions on data visualization. From Tuesday through Thursday, join one or more sessions featuring guests Thomas Arrow and Stefan Kasberger from ContentMine to learn about open source tools in development for mining scholarly and research literature. ContentMine software “allows users to gather papers from many different sources, standardize the material, and process them to look up and/or search for key terms, phrases, patterns, and more.” Be sure to register for limited capacity events (Lunch on Wednesday 4/12, and the in-depth workshop on Thursday 4/13); links and full schedule below. For more information, see our Open Data Week guide, and use our hashtag, #VTODW.
Monday April 10 Open Research/Open Data Forum: Transparency, Sharing, and Reproducibility in Scholarship 6:30-8:00pm, in Torgersen Hall 1100 (NLI credit available)
Join our panelists for a discussion on challenges and opportunities related to sharing and using open data in research, including meeting funder and journal guidelines:
Daniel Chen (Ph.D. candidate in Genetics, Bioinformatics, and Computational Biology)
Karen DePauw (Vice President and Dean for Graduate Education)
Sally Morton (Dean, College of Science)
Jon Petters (Data Management Consultant, University Libraries)
David Radcliffe (English)
Laura Sands (Center for Gerontology)
Tuesday April 11 Introduction to ContentMine: Tools for Mining Scholarly Literature 9:30-10:45am, Newman Library Multipurpose Room (NLI credit available)
Join ContentMine instructors for an overview of text and data mining, and an introduction to ContentMine tools for text and data mining of scholarly and research literature.
Tuesday April 11 Data Visualization with Tableau 10:30am-12:00pm, Torgersen 1100 (NLI registration)
With the Tableau data visualization software, you or your students can easily turn research data into detailed, interactive visualizations that tell the story that numbers alone struggle to express. The software can link directly to your data sources so you always have the most up-to-date data on hand without exporting manually, and easily generate hundreds of types of visualizations that include interactive elements.
Wednesday April 12 Introduction to ContentMine: Tools for Mining Scholarly Literature 9:00-9:55am, Newman Library Multipurpose Room (NLI credit available)
Join ContentMine instructors for an overview of text and data mining, and an introduction to ContentMine tools for text and data mining of scholarly and research literature.
Wednesday April 12 Making Visible the Invisible: Data Visualization and Poster Design 9:30-11:00am, Newman 207A (NLI registration)
Visually representing data helps users and readers engage with the content, understand key findings, and retain information. Exploring, creating, and presenting these visual representations is becoming critical for teaching, academic research, and professional engagement. In this session we will explore the basics of data visualization and poster design, and look at a few tools to create different kinds of visualizations. We will also discuss the academic and professional value in visualizing data.
Wednesday April 12 ContentMine and Specialized Tools for Life Sciences Research 11:15-12:05pm, Newman Library Multipurpose Room (NLI credit available)
Join students in a computational biochemistry informatics class session for an introduction to ContentMine open source tools for text and data mining to explore research literature sources, with a focus on tools related to mining and exploring content for Life Sciences research (phylogeny and visualization).
Wednesday April 12 Lunch with ContentMine guest speakers and program participants 12:30-1:30, Location TBA (Registration required; Limit: 50 participants)
Wednesday April 12 Text and Data Mining Forum 2:30-3:45pm, Newman Library Multipurpose Room (NLI credit available)
Join our panelists for a discussion about opportunities and challenges related to text and data mining, with a focus on research purposes and information access. Audience questions are encouraged.
Tom Arrow (ContentMine)
Tom Ewing (College of Liberal Arts and Human Sciences, Virginia Tech)
Weiguo (Patrick) Fan (Pamplin College of Business, Virginia Tech)
Ed Fox (Computer Science, Virginia Tech)
Leanna House (Statistics, Virginia Tech)
Brent Huang (Computer Science, Virginia Tech)
Wednesday April 12 Introduction to ContentMine: Tools for Mining Scholarly Literature 4:00-5:15pm, Newman ScaleUp Classroom (101S) (NLI credit available)
Join ContentMine instructors for an overview of text and data mining, and an introduction to ContentMine tools for text and data mining of scholarly and research literature.
Thursday April 13 ContentMine Tools to Explore Scholarly Literature: A Full Day, Hands-On Workshop 9:00am – 4:00pm, Newman Library 207A (Registration required; also, NLI credit available; Coffee and Lunch provided)
During this workshop participants will: (1) ensure the software is functioning on their laptop computers, (2) participate in individual and group hands-on exercises to become more familiar with ContentMine tools, and (3) have the opportunity, with ContentMine instructors’ support, to experiment with using ContentMine tools to mine scholarly literature and explore results specific to their own research project goals. Prior to the workshop, attendees will receive instructions to download software and make any other preparations to get the most out of the workshop.
As part of Open Access Week, the University Libraries and the Graduate School offered two travel scholarships to OpenCon 2016, a conference for early career researchers on open access, open data, and open educational resources. This is the third year we have jointly supported graduate student travel to the conference. From a pool of many strong essay applications, we chose Mayra Artiles, a Ph.D. candidate in Engineering Education, and Daniel Chen, a Ph.D. candidate in Genetics, Bioinformatics, and Computational Biology. In addition, Mohammed Seyam, a Ph.D. candidate in Computer Science, attended. All were in Washington, D.C. for the conference November 12-14, and sent the reports below. Be sure to check out the OpenCon 2016 highlights.
Mayra Artiles writes:
Being as open as possible – OpenCon 2016
This year I had the opportunity to attend OpenCon 2016 in Washington, DC. When I initially applied for the scholarship, I had only a vague idea of how the Open agenda tied into my research and why it was important to me, and I was not prepared for what the conference would spark. While in the US, Open is mainly focused on open access to journals, the global idea of open is as diverse as our problems. Interacting with people from different parts of the globe who were amazingly passionate about Open in general, I learned that open access to journal articles is, relatively speaking, a first-world problem. While some countries fight for journal access, many more fight for textbooks, and others fight for reliable internet. The more people I met, the more I learned how all of these unique issues are nested under the large umbrella of making knowledge accessible on a global scale. One thing that came out of these conversations was my involvement in a collaboration to create OpenCon Latin America: a conference similar to the one we had all just attended but held entirely in Spanish, empowering people and spreading the Open ideal in a language spoken by over 425 million people.
This made me think about the following question: how can we, as Hokies, be as open as possible with our research? While fighting the academic tenure process and breaking the paradigms around open access journals is an endeavor of huge proportions, we can take small steps toward being more open every day. We should be as open as possible and as closed as necessary. For this reason, I have made a list of steps we can take to be open today. The best part is that all of these resources are open:
Take stock of all your publications and make a list of the journals you’ve published or plan to publish in.
Visit Sherpa Romeo and look up these journals. This page will provide information on which parts of your work are shareable and whether or not there is an embargo on your work. If you’re lucky, you can share a copy of your pre-print.
Share as much as possible on repositories such as VTechWorks and other sites such as ResearchGate.
Create your impact story at ImpactStory – all you need is an ORCID profile. Our work should mean more than the number of times we are cited. This website shows just that: it gives you a score for how ‘open’ your work is, and shows how many people saved, shared, tweeted, and cited your work, and across how many channels, among other great things. As researchers, we are more than our H-index.
Have a conversation with your research peers and advisors on the value of open research. While we can’t convince everybody to suddenly publish in open access, we can begin the conversation and break the paradigms. A great resource to learn more about the value of open research is Why Open Research?
Daniel Chen writes:
What is “open”? Merriam-Webster tells us that it is “having no enclosing or confining barrier: accessible on all or nearly all sides”. For OpenCon, access (to academic publications), education, and data lay at the center of its mission.
The conference brings together a select group of like-minded individuals who are all passionate about openness. Since the conference was single-tracked, everyone could focus on the various projects, hurdles, and conversations people have about Open around the world. We had plenty of time and space to roam around American University to continue conversations. I was lucky and privileged to be one of the select attendees and to represent Virginia Tech.
My road to Open runs mainly through open education and open data. I teach for Software Carpentry and Data Carpentry and support NumFOCUS. It is logical, then, that my definition of Open mainly focuses on open source scientific computing. It’s a very specific subset of Open, and OpenCon helped me remember what role I play in the larger Open movement.
For me, Open Education has been teaching the Creative Commons-licensed Software Carpentry material for the past three years. Over the years, my idea of open education revolved around higher education: textbooks for university students, scientific computing materials for graduate students, resources for open source. I was reminded that open education is not just for graduate students trying to improve the quality of their research, and that textbooks and educational materials are not just for university students. Open education reaches students of all ages: lesson materials and books for elementary school, and textbooks for middle school, high school, and university. It allows students and educators to invest resources in other ways that help foster better learning. Here at Virginia Tech, you may notice OpenStax books in the library, but the Rebus Community is another resource and place to get involved with open education materials.
As a data scientist, I am constantly combining disparate datasets from a myriad of sources to answer a research question. I rely heavily on open data sets. Many cities in the United States now have open data portals (e.g., NYC Open Data), and government agencies, such as the Department of Commerce house a plethora of open datasets. These datasets are great for an analyst such as myself, but open data sources such as OpenStreetMap and ClinicalTrials.gov help with urban planning in cities and provide drug trial data and results to people all over the world.
One of my favorite parts of the conference happened on the second day, when we shifted from a single-track conference to an un-conference style meeting. Attendees pitched various discussion topics and then dispersed across the American University Law School. I attended a discussion about openness in academia, where we talked about how we incorporate it into our academic lives. Some of us (including myself) are lucky that our advisors understand openness. Most, if not all, of my research code has an MIT Open Source License. Others found the challenge of pushing and fighting for ‘openness’ a way of disrupting the traditional ivory tower philosophy. One attendee was an undergraduate freshman trying to understand what openness was and how he could incorporate it as he begins his academic career. This was a great metaphor for what OpenCon stands for: empowering and pushing openness to the next generation.
I also attended the breakout discussion about global health, where we talked about how openness plays a role in improving global health. I met many people who work in the health space and use open data and open access sources to improve health. For example, Daniel Mietchen from the NIH is part of a global infectious disease response team building the tools and protocols necessary to respond to the next epidemic. The 2014 Ebola and 2015 Zika outbreaks are recent reminders of how much we can improve our global response to infectious disease outbreaks. In this unconference session, we also talked about the reporting of drug trial results at ClinicalTrials.gov. The problem is that even though clinical trials are listed there, not all of the results from the trials are reported after the initial trial listing. This deprives people of the ability to educate themselves about the various treatment options for a disease, and more pressure is needed to make sure this information is adequately distributed in a timely manner.
Our final day at the conference had everyone in the conference work in groups to talk to various funding agencies and senators about openness. Essentially, we became lobbyists for Open. I was lucky enough to be in two groups. My first group talked with Rachael Florence, PhD, the Program Director of the Research Infrastructure program at the Patient-Centered Outcomes Research Institute (PCORI). We talked about how PCORI’s goal is to make study results and data more widely available, brought up the concerns about disseminating clinical trials results, and generally discussed faster reporting, lowering publication bias, reproducible research, and data sharing. We also talked about what OpenCon was, and intrigued Dr. Florence to attend next year.
My next stop was the office of Virginia Senator Mark Warner. We did not get to talk to him directly, but instead talked to his senior policy advisor, Kenneth Johnson, Jr. It was during this discussion that I wished we had had more training on being effective lobbyists. We made only two passes around the circle during our meeting: the first introducing ourselves, and the second explaining how Open played a role in our lives. There was a small conversation about open data, open access, and open education for the state of Virginia, but I wished we had been able to have a longer conversation. Senator Warner is already familiar with many aspects of Open, so not much convincing was needed, but I worried about how other groups fared.
In the end, I felt OpenCon was a great experience. I made new connections with people from all over the world and gained new experience in how to talk about Open. It has also given me some ideas for a side project using ClinicalTrials.gov data to examine reporting rates for various clinical trials. I hope I am lucky enough to attend next year as well, and I urge everyone at Virginia Tech to learn about Open and get involved!
The Open Science Prize encourages experimentation with open content and open data to enable discoveries that improve health and push research forward. The six finalist projects address: FDA trials; emerging diseases; mental and neurological disease modeling; open neuroimaging data; rare disease research; and global air quality.
The Wellcome Trust, the US National Institutes of Health (NIH), and the Howard Hughes Medical Institute have sponsored this award, “to stimulate the development of novel and ground-breaking tools and platforms to enable the reuse and repurposing of open digital research objects relevant to biomedical or health applications.” Further details about the contest are described in the Open Science Prize FAQ and in this Open Science Prize Vision and Overview from the BD2K Open Data Science Symposium, #BD2KOpenSci.
Fruit Fly Brain Observatory – Pools global laboratory data to facilitate the complex scientific collaboration necessary to advance computational disease models for mental and neurological diseases by connecting data related to the fly brain.
Lulu has announced the launch of a new online publishing platform that it is calling Glasstree. If you’ve heard of Lulu before you probably know it as one of several heavyweight players in the self-publishing arena, alongside Amazon (Kindle Direct), Apple (iBooks Author), and iUniverse. What makes the Glasstree announcement intriguing is that Lulu is explicitly setting its sights on “academic and scholarly authors and communities.” In other words, Lulu wants to be a scholarly book publisher.
What are the chances that Lulu’s experiment will succeed? At first glance, it sure seems unlikely. As popular as self-publishing has become (DIY titles account for over 40% of all trade eBook sales), any impact it has had on the academy has thus far been modest. After all, one of the bedrock principles of scholarly publishing is gatekeeping (i.e. letting in the good; keeping out the bad), a principle that seems fundamentally at odds with the self-publishing tenets of fast, easy, and low-cost. Indeed, DIY publishing companies pride themselves on minimizing the barriers to publication—surely a sign that Lulu faces an uphill battle. And yet, a closer look at the Glasstree website suggests that Lulu has a strategy that is at least worth watching.
To its credit, Lulu doesn’t hide its intentions. Visitors to the Glasstree home page are immediately greeted with a barrage of not-so-subtle one-liners aimed squarely at appealing to scholarly authors:
PUBLISH AND PROSPER
Glasstree Returns Control to Academic Authors
Experience Scholarly Publishing in a Whole New Way
A Better Publication Model for Academic Authors
What author doesn’t want more control over the publishing process or, for that matter, a chance to publish and prosper? You’ve certainly got my attention. Then comes the real sales pitch:
The existing academic publishing model is broken, with traditional commercial publishers charging excessive prices for books or ridiculous book publishing charges to publish Open Access books.
The give-away here is the mention of “traditional commercial publishers,” an obvious reference to the handful of conglomerate publishers that now control a sizable share of the academic monograph market—publishers including Elsevier, Springer, Wiley, and Taylor & Francis, which together churn out thousands of monographs each year at list prices that routinely exceed $100 per volume. Indeed, as one reads on it becomes clear that Lulu is appealing not so much to scholars working on their first (i.e. tenure) book but to experienced scholars; specifically, experienced scholars who have published previously with a commercial academic press and who feel burned by the experience. The following paragraphs reel off a familiar litany of complaints that one might hear outside the book exhibit hall of pretty much any scholarly conference:
Academics or their supporting institutions are poorly paid for their content. Profit margins are strongly skewed towards the publisher, with crumbs for the author and/or their employers. Submission to publication times are far too lengthy and service and marketing support insufficient.
Besides the lack of editorial assistance, marketing support, and a complete absence of urgency, traditional academic publishers are now often viewed as cherishing profits over the advancement of knowledge, and accommodate their shareholders over their authors.
Some of these complaints surely could be leveled against university presses, but the real target here is obviously commercial publishers, viz. the presses that cherish profits over advancement of knowledge while accommodating the interests of shareholders over authors. Indeed, it is this resentment-stoking aspect of Glasstree’s appeal that surely has a chance of resonating with a specific subset of authors—those both inside and outside the academy who are not subject to the pressures of tenure and promotion and therefore can afford to publish their books wherever they want. While it is hard to imagine most research universities taking a Glasstree book seriously for tenure, I can certainly see established scholars, particularly productive ones who are no longer in need of a monograph for promotion, using a service like Glasstree to publish “labor of love” books or books that grow out of side projects that wouldn’t count anyway toward career advancement—or simply books that no university press will take on. In short, Glasstree could be an attractive outlet for any number of books that typically would go to commercial academic publishers more so than university presses.
Of course, some will argue that commercial academic publishers, despite their faults, still employ peer review. It may not be as rigorous or as consistent as the peer review done by university presses, but it is certainly more than what one gets from a self-publishing company. But this is where Lulu’s plans for Glasstree really get interesting. According to the Glasstree website, Lulu is also launching Glassleaf Academic Services, which offers “peer review, all forms of editing, illustration and design, translation and professional marketing services. These services are designed for the academic community and are offered at affordable prices.” Lulu does this by offering tiered service packages (1-Star, 2-Star, and 3-Star) that start “as low as $2,625” and can go up over $8,000. Books can then be published in a variety of formats—both softcover and hardcover as well as eBooks, including Open Access eBooks.
It is unclear who will be doing all of this work but it seems that Lulu actually plans to hire living and breathing people—Content Project Managers—to at least oversee some form of peer review, copyediting, design, and marketing, even if they have some way of automating the work to exploit economies of scale. Here’s what the website specifically says about peer review:
Peer Review: Strengthening Your Content
This service is designed to save you time and effort in gathering peer reviews of your work. A Glassleaf Content Project Manager will manage the entire peer review process and consolidate feedback for you. Your Content Project Manager will compose a questionnaire and share it with you for review prior to distributing it with your content. The number of reviewers will vary according to discipline and your preference.
After the review process is complete, your Content Project Manager will provide you with the actual peer reviews and, in a summary report, will highlight significant and consistent commentary from your peers’ comments. After the report is compiled, you will meet with your Content Project Manager to review the summary of the reviewer’s commentary.
It is also worth noting that Glassleaf plans to offer three types of peer review: open, single blind, and double blind. Authors will be responsible for paying reviewer fees, although the Content Project Manager will “negotiate the lowest possible fees on the author’s behalf.”
Once again, I want to reiterate my overall skepticism that this type of DIY publishing will have a serious impact, at least for now, on scholarly monograph publishing as it interlocks with the current T&P system. In this, university presses still have a unique role to play. Still, one can’t help but wonder if Lulu isn’t onto something. Might they have found a sweet spot between the two endpoints of the scholarly publishing spectrum, non-profit university presses on the one end and commercial publishers on the other? The missing piece for self-publishing companies like Lulu has always been quality control, but as the quality of commercially published books continues to fall and price tags continue to rise, the Glasstree model has some definite advantages. Even the pay-for-services aspect doesn’t seem so foreign now that various proposals are being considered for subvention-funded (i.e. pay-to-publish) OA monographs. Perhaps the emergence of companies like Glasstree will force us, at last, to get a grip on what it costs to produce scholarly books and, more importantly, find ways to actually drive down those costs.
No matter how you look at it, the once-staid world of scholarly publishing is getting messier and messier. And it’s only going to get more so. According to the Glasstree website, Lulu has its sights set on more than just books:
Glasstree, in its initial phase, will publish books—monographs, thesis, series, serials, textbooks, etc. (both soft and hardcover, with a range of paper types, binding types, etc.), and eBooks (including Open Access eBooks). Future phases will focus on article based publishing, journals, conference proceedings, data sets, etc.
We all need to brace ourselves for what lies ahead.
Virginia Tech’s fifth observance of Open Access Week took place October 24-28 with seven events, featuring a panel discussion and talks from two visiting speakers.
The first event of the week featured Brian Hole, CEO of Ubiquity Press. An archaeologist by training, he had experienced lack of journal access in places like India. Ubiquity Press was begun to provide a good quality, low-cost open publishing platform that would be inclusive of the developing world. While the platform does operate using the sometimes controversial APC model, the costs are low ($400 standard, and they can be lower depending on services provided) and are often covered by libraries, so there is no cost to the author or reader. Ubiquity also publishes books, along with journals for open research software, several for open data, citizen science, and an upcoming open hardware journal. The platform offers an open peer review option, which four journals have implemented. Ubiquity currently publishes 42 journals; its platform, itself a fork of the Open Journal Systems open source code, will be open source. It’s an impressive platform, and openness is at its core.
On Monday evening, the forum “For the Public Good: Research Impact and the Promise of Open Access” was held, hosted by Peter Potter (Director of Publishing Strategy, University Libraries) and featuring panelists from a variety of perspectives: graduate students Siddhartha Roy (Civil and Environmental Engineering) and Mohammed Seyam (Computer Science) as well as Montasir Abbas (faculty, Civil and Environmental Engineering), Karen DePauw (Dean of the Graduate School), and Brian Hole (Ubiquity Press). The conversation was wide-ranging, covering pre-prints, publishing costs, metrics, and peer review. Other topics included the importance of open licensing for reuse of scholarly material and the role of openness for a public land-grant university. Faculty open access mandates were briefly addressed, with comments focusing on saving faculty time and showing benefits. Transparency of data and code was a theme, as was the possibility of researching completely in the open. See the video below for the full forum (and here are Peter’s introductory slides).
In the session “Where Can I Post My Publications?” Ginny Pannabecker and I covered the landscape for article archiving, including research networking sites, researcher profiles, disciplinary and institutional repositories, and personal and departmental websites. It’s important to know about journal permissions, which sites can host research as opposed to linking to it, and about limits to sharing and preservation on proprietary platforms. We got great feedback on this session, and one faculty member signed up for an ORCID identifier and used the new EFARS system to deposit scholarship to the VTechWorks faculty collection.
We were very pleased that our librarian exchange with the Cape Peninsula University of Technology in Cape Town, South Africa coincided with Open Access Week, since Veliswa Tshetsha focuses on scholarly communication there. Her presentation “Access to Research in South Africa” gave an overview of open access initiatives in that country as well as on the continent. Recently CPUT signed the Berlin Declaration on Open Access, joining 45 other African universities. The main funding body, the National Research Foundation, like funding bodies in the U.S., is requiring article archiving, supporting article processing charges, and developing a policy on data archiving. Paywalls are only one of the problems contributing to what she referred to as an African “access drought.” Others include telecommunications access, high APCs in some open access journals, embargoes, and researchers submitting to open access journals with little or no peer review.
The week ended with two sessions regularly offered by the Libraries. In “Scholarly Publishing Trends” I covered a lot of ground, from open science to peer review to ORCID, and Gail McMillan introduced attendees to our Open Access Subvention Fund and its guidelines.
Beyond our own events, there were other developments of note:
Few areas of scholarly publishing are being examined and rethought as intensely as peer review is right now. Healthy debates continue on different models of peer review, incentivizing peer reviewers, and various shades of open peer review, among many other issues. Recently, the second annual Peer Review Week was held, with several webinars available to view.
Since peer review is currently such a dynamic topic, the University Libraries and the Department of Communication are especially pleased to host a talk about peer review in science by Dr. Malte Elson of Ruhr University Bochum. Dr. Elson is a behavioral psychologist with a strong interest in meta-science issues. He has created some innovative outreach projects related to open science, including FlexibleMeasures.com, a site that aggregates flexible and unstandardized uses of outcome measures in research, and JournalReviewer.org (in collaboration with Dr. James Ivory in Virginia Tech’s Department of Communication), a site that aggregates information about journal peer review processes. He is also a co-founder of the Society for the Improvement of Psychological Science, which held its first annual conference in Charlottesville in June. Details and a description of his talk, which is open to the public, are below. Please join us! (For faculty desiring NLI credit, please register.)
Wednesday, October 12, 2016, 4:00 pm
Newman Library 207A
Is Peer Review a Good Quality Management System for Science?
Through peer review, the gold standard of quality assessment in scientific publishing, peers have reciprocal influence on academic career trajectories, and on the production and dissemination of knowledge. Considering its importance, it can be quite sobering to assess how little is known about peer review’s effectiveness. Other than being a widely used cachet of credibility, there appears to be a lack of precision in the description of its aims and purpose, and how they can be best achieved.
Conversely, what we do know about peer review suggests that it does not work well: Across disciplines, there is little agreement between reviewers on the quality of submissions. Theoretical fallacies and grievous methodological issues in submissions are frequently not identified. Further, there are characteristics other than scientific merit that can increase the chance of a recommendation to publish, e.g. conformity of results to popular paradigms, or statistical significance.
This talk proposes an empirical approach to peer review, aimed at making evaluation procedures in scientific publishing evidence-based. It will outline ideas for a programmatic study of all parties (authors, reviewers, editors) and materials (manuscripts, evaluations, review systems) involved to ensure that peer review becomes a fair process, rooted in science, to assess and improve research quality.
Virginia Tech Libraries and the Pamplin College of Business are pleased to announce publication of Fundamentals of Business, a full color, 440+ page free online textbook for Virginia Tech’s Foundations of Business course. This Virginia Tech course averages 14 sections with over 700 students in Fall semesters. The textbook is an open educational resource, and may be customized and redistributed non-commercially with attribution.
The book is the work of Prof. Stephen Skripak and his team of faculty colleagues from the Pamplin College of Business, Anastasia Cortes and Richard Parsons, open education librarian Anita Walz, graphic designers Brian Craig and Trevor Finney, and student peer reviewers Jonathan De Pena, Nina Lindsay, and Sachi Soni. Assistive Technologies consulted on the accessibility of the textbook.
The first openly licensed book of its kind created at Virginia Tech, the book responds directly to two problems faced by Pamplin’s team of professors: used copies of the previous textbook were priced as high as $215, and the text failed to engage students.
Skripak and his colleagues started with an openly licensed textbook created in 2011 (licensed with a Creative Commons Attribution-NonCommercial-ShareAlike license) which legally allows modification and non-commercial redistribution with attribution. The team significantly updated, redesigned, and contributed new content to create a learning resource that fits course learning objectives and reduces student textbook costs for this course to zero. Through a grant from the University Libraries, three students were hired to peer review drafts of the text. The team worked together through details of updating data, designing new figures, and ensuring web and print-on-demand ready layout. The resulting work, Fundamentals of Business, is licensed with a Creative Commons Attribution-NonCommercial-ShareAlike license.
Beyond allowing faculty to customize content and resolving student affordability issues, the availability of an openly licensed text in a common, editable file format bodes well for faculty at other institutions seeking to leverage academic freedom in support of student learning and affordability. The book is representative of a larger movement to empower faculty to freely adopt, adapt, and author resources for a myriad of course types. We hope that many other institutions will take advantage of the opportunity to adopt, adapt, or remix the book to fit their needs.
Although I am familiar with copyright and licensing agreements for journal articles, I am less familiar with book publishing agreements. Rights reversion for books was a new concept to me, so the first guide published by the Authors Alliance had my attention right away (the group has since published a second guide, Understanding Open Access). This guide is intended for authors who, for whatever reason, may wish to reclaim rights to their books, rights that they transferred to their publishers when they signed a publishing agreement. It’s the result of “extensive interviews with authors, publishers, and literary agents who shared their perspectives on reverting rights, the author-publisher relationship, and keeping books available in today’s publishing environment.” The guide follows an “if-then” organization, referring readers to specific chapters depending on their situation, though I read it straight through (full disclosure: I’m an Authors Alliance supporter).
Early on, the authors define rights reversion and its availability:
“a right of reversion is a contractual provision that permits an author to regain some or all of the rights in her book from her publisher when the conditions specified in the provision are met… in practice, an author may be able to obtain a reversion of rights even if she has not met the conditions stipulated in her contract or does not have a reversion clause.” (p. 6-7)
This guide is intended for authors with publishing agreements already in place; it is not a guide to negotiating contracts (though it may inspire authors to examine the details of rights reversion clauses in new contracts).
The authors note that rights reversion becomes an issue for academic authors especially when their books fall out of print, sales drop, or their publishers stop marketing their books. In such instances, authors may wish to reclaim their rights (so that they can find another publisher to reissue the book or perhaps deposit the book in an open access online repository), but they find themselves constrained by the terms of their publishing agreements or may not understand how to go about reclaiming their rights. With these concerns in mind, the Authors Alliance “created this guide to help authors regain rights from their publishers or otherwise get the permission they need to make their books available in the ways they want.”
An important first step in the process is for authors to learn about different ways that they might increase their books’ availability (for example, electronic, audio, and braille versions as well as translations). Next, the guide helps authors determine if they have transferred to their publishers the rights necessary to make their books available in the ways they want. Older contracts may be ambiguous regarding e-book versions; the guide advises authors on how to negotiate the ambiguity. An additional consideration is that permissions for usage of third-party content may no longer be in effect.
Some examples of reversion clauses are provided in chapter 4, pointing out triggering conditions (such as out of print, sales below a certain threshold, or a term of years), written notice requirements, and timelines. It’s important to understand how the triggering conditions are defined, as well as how to determine whether they have been met, and the authors provide good suggestions for finding this information.
Authors should also find out the publisher’s plans for the book, and the guide emphasizes reasonable, professional conversations with publishers. The success stories throughout the book are particularly valuable in this respect.
Chapter 6 details how to proceed if a book contract does not include a rights reversion clause:
“Ultimately, whether a publisher decides to revert rights typically depends on the book’s age, sales, revenue, and market size, as well as the publisher’s relationship with the author and the manner in which the author presents his request.” (p. 77)
Before requesting reversion, an author should have a plan in place, review all royalty statements, and find out the publisher’s plans for the book. Being reasonable, flexible, creative, and persistent are the golden rules for negotiations with a publisher. Precedents can be persuasive, so inquire with friends and colleagues who are authors. If electronic access is important, be aware that many publishers are actively digitizing their backfiles. In this respect, an author might draw a publisher’s attention to the increasing evidence that open access versions don’t harm sales, and can sometimes increase them as a result of improved discovery.
Thanks to Peter Potter, Director of Publishing Strategy at the University Libraries, for his feedback on this blog post (contact him if you have questions about book publishing; he has a wealth of experience). Thanks also to the Open Library for the cover image.