Virginia Tech is one of nine founding members of the Open Textbook Network Publishing Cooperative, a pilot program focused on publishing new, openly licensed textbooks. The program was launched by the Open Textbook Network (OTN) and aims to increase open textbook publishing experience in higher education institutions by training a designated project manager at each institution and creating a network of institutions.
The Cooperative is a three-year pilot that will establish publishing workflows and processes to expand open textbook publishing in higher education. As a member, Virginia Tech’s project managers, Corinne Guimont (Digital Publishing Specialist) and Anita Walz (Open Education, Copyright and Scholarly Communications Librarian), will gain expertise in project management and technical skills. After the training is complete, a minimum of three open textbooks will be published using the model and tools gained through the Cooperative.
“We at Virginia Tech are excited to join the Co-Op because of the opportunity for learning and professional development within a cohort of other institutions,” said Anita Walz. “We will have access to additional technical expertise, workflows, and tools, so that we can create and share more open textbooks with the world.”
Virginia Tech’s involvement in the Publishing Cooperative builds upon open textbooks previously published by the University Libraries, including Fundamentals of Business by Stephen J. Skripak and the newly released beta version of Electromagnetics by Steven W. Ellingson. These books, along with other open educational resources adopted by faculty at Virginia Tech, have saved 3,111 students $786,398 in course material costs.
At the completion of the three-year pilot, the Publishing Cooperative as a whole will publish at least two dozen new, freely available textbooks under Creative Commons Attribution (CC BY) licenses.
Founding members of the OTN Publishing Cooperative include: Miami University, Penn State University, Portland State University, Southern Utah University, University of Cincinnati, University of Connecticut, University of North Carolina at Greensboro, Virginia Tech, and West Hills Community College District (CA).
About the Open Textbook Network: The Open Textbook Network (OTN) is a community working to improve education through open education, with members representing over 600 higher education institutions. OTN institutions have saved students more than $8.5 million by implementing open education programs, and empowered faculty with the flexibility to customize course content to meet students’ learning needs.
VT Publishing and the University Libraries of Virginia Tech are pleased to announce publication of a second, new open textbook: Electromagnetics Volume 1 (Beta). (You can read about the first open textbook in an earlier blog post.) The textbook is in “beta” for a Virginia Tech course in Spring 2018 and will be revised and re-released with LaTeX source code, problem sets, and solutions in Summer 2018.
Electromagnetics Volume 1 (Beta) by Steven W. Ellingson is a 224-page, freely available, peer-reviewed, full-color, print and digital open educational resource. It is intended to serve as a primary textbook for a one-semester first course in undergraduate engineering electromagnetics, typically taken in the third year of a bachelor of science degree program.
The book is the work of Steven W. Ellingson, Associate Professor in the Department of Electrical and Computer Engineering at Virginia Tech, in collaboration with the Scholarly Communication office of Virginia Tech’s University Libraries and VT Publishing. As collaborators with Ellingson, the University Libraries provided grant funding, overall project management, guidance on open licensing, attribution, student works, formats and styles, managed development and production processes, coordinated peer review, reviewed manuscripts (editorial and technical), provided technical specifications, and navigated print and distribution solutions.
A no-cost downloadable version of Electromagnetics Volume 1 (Beta) is available here. A full-color softcover printed version (ISBN: 978-0-9979201-2-3) is available at the cost of production and shipping from Amazon.com.
The LaTeX-authored text makes extensive use of mathematical equations, figures, adapted and custom-created openly licensed diagrams, and worked and narrative examples. This book employs the “transmission lines first” approach, one of three common approaches to teaching electromagnetics. However, instructors using other approaches may also find the work relevant, since its release under a Creative Commons Attribution-ShareAlike 4.0 license legally allows addition, adaptation (with required attribution), and redistribution of content. The resulting work opens significant new possibilities for teaching and learning: Electromagnetics Volume 1 (Beta) by Ellingson is the first known openly licensed textbook for electromagnetics.
Electromagnetics Volume 1 (Beta) will be field tested in Virginia Tech’s ECE3106 Electromagnetic Fields course in Spring 2018, and then revised and re-released in Summer 2018 as a text adoptable for courses beyond Virginia Tech. LaTeX source files, problem sets, and solutions will be released at the same time. The editor and author encourage feedback from individuals, classes, and faculty who view the book. Feedback and suggestions may be contributed using the online annotation tool Hypothes.is, via a feedback form, or by emailing email@example.com. A Volume 2 and a combined Volumes 1 & 2 are planned.
The intent of creating a remixable book for both internal use and free public release exemplifies trends within the Open Education movement and in higher education in general. These trends mirror aspects of the open source software movement: public sharing under open licenses that allow contribution and adaptation (such as Creative Commons licenses); rewarding, valuing, and incorporating new ideas about teaching and learning; building collaborative faculty networks across multiple institutions; giving credit where due; and involving students as active contributors to course goals and to the work of curriculum and course design. Free public access is a start, but the ability of faculty, students, and others to create, openly license, share, freely adopt, adapt with attribution, and build open source systems for adaptation and sharing expands meaningful possibilities far beyond free access. These freedoms bode well for the expansion of purposeful and engaging teaching and learning; the ability to leverage academic freedom for broad, positive impacts on the common good; thoughtful conversations about ethics, incentives, voice, and access in the academy; and the advancement of innovative pedagogical practices and publicly available scholarly research, all of which are already beginning to bear fruit.
I hope that many other faculty and institutions will take advantage of opportunities to create, adopt, adapt, and share openly licensed materials to fit their needs, the distinctive teaching and learning challenges and opportunities in their disciplines, the needs of their students, and beyond.
This textbook is part of the Open Electromagnetics Project led by Steven W. Ellingson at Virginia Tech. The goal of the project is to create no-cost openly-licensed content for courses in undergraduate engineering electromagnetics. The project is motivated by two things: lowering learning material costs for students and giving faculty the freedom to adopt, modify, and improve their educational resources.
Publication of this book was made possible in part by the Virginia Tech University Libraries’ Open Education Faculty Initiative Grant program, which is led by Anita Walz of the Scholarly Communication office at the University Libraries, Virginia Tech. The goal of the grants program is to encourage the use, creation, and adaptation of openly licensed information resources to support student learning. The author also thanks VT Publishing colleagues for their many contributions.
About the author of Electromagnetics Volume 1 Beta: Steven W. Ellingson is an Associate Professor at Virginia Tech in Blacksburg, Virginia, United States. He received PhD and MS degrees in Electrical Engineering from The Ohio State University and a BS in Electrical & Computer Engineering from Clarkson University. He was employed by the US Army, Booz-Allen & Hamilton, Raytheon, and The Ohio State University ElectroScience Laboratory before joining the faculty of Virginia Tech, where he teaches courses in electromagnetics, radio frequency systems, wireless communications, and signal processing. His research includes topics in wireless communications, radio science, and radio frequency instrumentation. Professor Ellingson serves as a consultant to industry and government and is the author of Radio Systems Engineering (Cambridge University Press, 2016).
As part of Open Access Week, the University Libraries and the Graduate School offered a travel scholarship to OpenCon 2017, a conference for early career researchers on open access, open data, and open educational resources. From a pool of many strong essay applications, we chose Alexis Villacis, a Ph.D. student in Agricultural and Applied Economics. Alexis attended the conference in Berlin, Germany on November 11-13, and sent the report below. Be sure to check out the OpenCon 2017 highlights.
Alexis Villacis writes:
The progress of science and access to education vary widely across the world, and are sometimes severely limited by economic, cultural, and social circumstances. Open Access, Open Education, and Open Data are key to supporting those who are left behind and to empowering the next generation. OpenCon brings together champions from around the world who are working to advance the Open movement. Students, early career academic professionals, and senior researchers all come together under one roof to share their initiatives. Over three days, participants hear inspiring stories of sparking change, from Canada to Nepal; I am grateful to have had the opportunity to be part of this conference as a representative of Virginia Tech.
Over these three days, participants showcased how Open is being advanced around the world. The discussion centered on how often higher education models (knowledge access, research questions, and research funding, among many others) marginalize underrepresented scholars and students. It was thought-provoking and sometimes shocking to hear how our western ways of knowing have colonized access to information and how this has impacted the progress of R&D in other parts of the world.
Talking with participants from other countries and hearing the challenges they face every day made me contrast their everyday realities with the privilege we have at VT, a privilege we take for granted, where access to all types of tools, research, and content is one click away on our computers. We, as an institution of higher education, promote and share access to knowledge and new technologies throughout Virginia and beyond. The impact of these transfers is what keeps our society thriving, but where would we be if this access were restricted? Perhaps Virginia Tech, as a land-grant institution, would not exist at all; the state of Virginia would not be what it is today, and neither would many other parts of the US.
As I walked through the halls of the Max Planck Society, where the conference was held, I kept wondering: is this not what we are doing today? What changes are we withholding from the rest of the world by limiting access to data, knowledge, and education? The significance of Open Access clearly goes beyond journals and data; it is also about social justice, equity, and the democratization of knowledge. We Hokies can make a difference in Open Access. More importantly, we are key players called to work toward its advancement.
As research becomes increasingly digital, it’s becoming more important to ensure a findable and unchanging scholarly record. Researchers are probably familiar with the digital object identifier (DOI), which in URL form provides a persistent link to articles (and more), and libraries and publishers provide redundant archiving to ensure scholarship is preserved for the long term.
However, it’s also important to make sure links (other than DOIs) in articles work, and to make sure web pages represent what the author saw when she cited them. Some journals are aware of these issues, and I’ve noticed a few authors who employ URLs from the Internet Archive’s Wayback Machine in their manuscripts. Thinking back to my last article, there are probably some links that won’t work five (or twenty) years from now, or links resolving to web pages that won’t accurately represent what I was referring to at the time.
To help address this problem, Virginia Tech’s University Libraries is pleased to announce that we are now a registrar for Perma.cc, a service to provide archiving of web pages for research purposes. Researchers at Virginia Tech will be able to archive, manage, and annotate an unlimited number of web pages with persistent shortlinks for citing, and will also receive local support.
Including a Perma.cc link in a citation or footnote may depend on the citation style you are using, but a general recommendation is to include the original URL, followed by “archived at” and the Perma.cc shortlink, for example:
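A generic citation following that pattern might look like the sketch below; the URL and the Perma.cc shortlink shown are illustrative placeholders, not a real archived record:

```
Smith, J. “Example Report.” Example.org, 2017.
https://example.org/report, archived at https://perma.cc/XXXX-XXXX.
```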
If you click on the Perma.cc link above, you can see how the web page looks in archived form. In addition, the time of capture is recorded, there’s a link to the live page, and you can download the archive file (under “show record details”). Perma.cc is intended for non-commercial scholarly and research purposes that do not infringe or violate anyone’s copyright or other rights. Web pages to be archived should be freely available without payment or registration. Additionally, some web pages employ a “noarchive” restriction, which Perma.cc archives but makes private. In other words, the shortlink can be shared, but is available only to the researcher and upon request.
There are some advantages to using Perma.cc over the Wayback Machine (and the Internet Archive is a supporting partner of Perma.cc). Perma.cc provides a more thorough, accurate capture in two forms, a web archive file (WARC), and a screenshot (PNG). Perma.cc also provides persistent shortlinks that are more convenient for citing, and enables researcher management of the links (with folders, annotation, and public/private control).
Other features of the Perma.cc system include:
Each researcher will be added as an organization, and can add other users within that organization, such as lab members or collaborators
To get started, send an email to firstname.lastname@example.org and request an account (or to be added as an unlimited user if you’ve already signed up). You can also send questions and problems to this address, or you can use the Perma.cc contact form.
Inspired by the Association of Southeastern Regional Libraries webinar, “Adding Patent Records to Clemson’s IR — Highlighting the University’s Output,” VTechWorks, Virginia Tech’s institutional repository, now offers a similar collection, Virginia Tech Patents. The collection contains 645 U.S. patents assigned to Virginia Tech at the time of patent application, with dates of issuance spanning 1919-2016. The collection’s display is customized with fields, search filters, and facets particular to patents, such as patent type, inventor, assignee, patent and application numbers, and patent classifications. Our motivation for creating the collection was twofold: a sizeable body of useful public domain content could be harvested programmatically, and the collection spotlights how Virginia Tech “invents the future.”
To enable other repositories to develop a similar collection, we offer our software, Patent-Harvest, in a GitHub repository. Patent-Harvest contains a Java program written to harvest all patents with Virginia Tech as the assignee. It can be adapted to harvest patents and associated files for other organizations or search parameters.
The harvesting program uses the PatentsView API to retrieve relevant metadata for all Virginia Tech patents and outputs a CSV spreadsheet. If desired, all the corresponding files for each patent are also downloaded and logically renamed. Since most United States patent documents are image-only PDFs, a script is included that uses optical character recognition to read text content and embed it in the patent documents. This makes the text of the patent documents searchable, but doesn’t change how they appear to the reader.
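The core of that pipeline (query the PatentsView API for an assignee, collect metadata, write a CSV spreadsheet) can be sketched in a few lines of Python. This is an illustration, not the Patent-Harvest Java code itself; the endpoint URL, query operators, and field names follow the PatentsView API documentation as of this writing and should be verified against the current documentation before use.

```python
import csv
import json
import urllib.request

# Legacy PatentsView query endpoint (verify against current API docs).
API_URL = "https://api.patentsview.org/patents/query"


def build_query(assignee, page=1, per_page=100):
    """Build the JSON body for one page of a PatentsView query:
    patents whose assignee organization name contains `assignee`."""
    return {
        "q": {"_contains": {"assignee_organization": assignee}},
        "f": ["patent_number", "patent_title", "patent_date"],
        "o": {"page": page, "per_page": per_page},
    }


def fetch_page(assignee, page=1):
    """POST one page of results to the API (requires network access)."""
    body = json.dumps(build_query(assignee, page)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def write_csv(patents, path):
    """Write the harvested patent metadata records to a CSV spreadsheet."""
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(
            fh, fieldnames=["patent_number", "patent_title", "patent_date"])
        writer.writeheader()
        writer.writerows(patents)
```

Paging through `fetch_page` until the API returns no more results, then passing the accumulated records to `write_csv`, reproduces the metadata half of the workflow; downloading and OCR-processing the patent PDFs would be a separate step.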
Happy Open Education Week! 2017 marked the fourth year of celebrating international Open Education Week at Virginia Tech. The Open Education Week planning committee set goals to meet the expressed needs of faculty on campus and to encourage students to communicate with faculty about the impact of learning resources on their learning.
Cost is always an issue. The committee agreed that we wanted to do something more positive than focus on barriers to learning, so we chose the theme “The Potential of Open Education.” What is Open Education anyway? Open Education includes pedagogies, practices, and resources which reduce barriers to learning. “Open Education combines the traditions of knowledge sharing and creation with 21st century technology to create a vast pool of openly shared educational resources, while harnessing today’s collaborative spirit to develop educational approaches that are more responsive to learner’s needs.” Source: Open Education Consortium
Two events oriented toward faculty and graduate students featured local and invited speakers, and were offered both in person and via live stream:
Seven Platforms You Should Know About: Share, Find, Author, or Adapt Creative Commons-Licensed Resources
Thanks to Kayla McNabb for setup of the video below and Neal Henshaw for editing.
Virginia Tech Open Educational Resource (OER) authors and adapters and several students discussed the use, benefits, challenges, and opportunities related to using or adapting openly licensed course materials. Panelists included Jane Roberson-Evia (Statistics), Mary Lipscombe (Biological Sciences), Stephen Skripak (Pamplin), and Anastasia Cortez (Pamplin). Publishing expert Peter Potter (University Libraries) and students Mayra Artiles (doctoral student, Engineering Education) and Jonathan de Pena (senior, Finance) also joined the panel, moderated by Anita Walz (University Libraries).
Virginia Tech’s Student Government Association (SGA) designed the Open Education Week exhibit to educate and to solicit visitor input. The interactive exhibit features a range of required student learning materials including textbooks, homework access codes, software, and clickers, visual representations of data related to course material costs and student responses, information about open education options, a new Creative Commons brochure, CC stickers, and several interactive features. Students also have the opportunity to write a personalized message on an SGA-designed postcard to their professor, department head, or whomever they want to contact.
A selection of resources used in the exhibit are linked here:
Florida Virtual Campus (October 7, 2016) 2016 Student Textbook and Course Materials Survey. Available here.
National Association of College Stores (2011) “Where the New Textbook Dollar Goes” Used with Permission of NACS. (No updated data available). Available here.
Senack, Ethan. (January 2014) Fixing the Broken Textbook Market: How students respond to high textbook costs and demand alternatives. U.S. PIRG Education Fund & the Student PIRGs: Washington, DC. Available here.
Senack, E., Donoghue, R. (2016)Covering the cost: Why we can no longer afford to ignore high textbook prices. Student PIRGS: Washington, DC. Available here.
U.S. Bureau of Labor Statistics, as quoted by Popken, B. in “College Textbook Prices Have Risen 1,041 Percent Since 1977“ NBC News (August 6, 2015). Available here.
SGA also hosted a Multimedia Event, a student-led engagement event featuring multiple interactive stations where students could discuss, answer questions, take pictures, and write postcards. Two wordcloud prompts in particular were telling. The first asked, “Where would your money go if you didn’t have to buy textbooks?” The top two answers by far reflected daily living expenses: “food” and “rent.”
The second asked students how they avoid buying full-price textbooks. Responses included “Rent [textbooks],” “go without,” “hope for the best,” “borrow them from a friend,” and “buy used.”
The Open Education Week at Virginia Tech planning committee for 2017 included: Anita Walz (Chair), Kayla McNabb, Quinn Warnick, Anna Pope, Anne Brown, Kimberly Bassler, and Craig Arthur.
Exhibit curators: Virginia Tech Student Government Association members Anna Pope, Kenneth Corbett, Spencer Jones, Holly Hunter, and Sydney Thorpe, with Scott Fralin and Anita Walz of the University Libraries
Special thanks for event support: Carrie Cross, Trevor Finney, and Kayla McNabb
The University Libraries will be hosting its second Open Data Week on April 10-13 with opportunities to learn more about sharing, visualizing, finding, mining, and reusing data for research. In addition to panel discussions on open research data as well as on text and data mining, there will be two sessions on data visualization. From Tuesday through Thursday, join one or more sessions featuring guests Thomas Arrow and Stefan Kasberger from ContentMine to learn about open source tools in development for mining scholarly and research literature. ContentMine software “allows users to gather papers from many different sources, standardize the material, and process them to look up and/or search for key terms, phrases, patterns, and more.” Be sure to register for limited capacity events (Lunch on Wednesday 4/12, and the in-depth workshop on Thursday 4/13); links and full schedule below. For more information, see our Open Data Week guide, and use our hashtag, #VTODW.
Monday April 10 Open Research/Open Data Forum: Transparency, Sharing, and Reproducibility in Scholarship 6:30-8:00pm, in Torgersen Hall 1100 (NLI credit available)
Join our panelists for a discussion on challenges and opportunities related to sharing and using open data in research, including meeting funder and journal guidelines:
Daniel Chen (Ph.D. candidate in Genetics, Bioinformatics, and Computational Biology)
Karen DePauw (Vice President and Dean for Graduate Education)
Sally Morton (Dean, College of Science)
Jon Petters (Data Management Consultant, University Libraries)
David Radcliffe (English)
Laura Sands (Center for Gerontology)
Tuesday April 11 Introduction to Content Mine – Tools for Mining Scholarly Literature 9:30-10:45am, Newman Library Multipurpose Room (NLI credit available)
Join ContentMine instructors for an overview of text and data mining, and an introduction to ContentMine tools for text and data mining of scholarly and research literature.
Tuesday April 11 Data Visualization with Tableau 10:30 am -12:00 pm, Torgersen 1100 (NLI registration)
With the Tableau data visualization software, you or your students can easily turn research data into detailed, interactive visualizations that tell the story numbers alone struggle to express. The software can link directly to your data sources, so you always have the most up-to-date data on hand without exporting manually, and it can generate hundreds of types of visualizations that include interactive elements.
Wednesday April 12 Introduction to Content Mine: Tools for Mining Scholarly Literature 9:00-9:55am, Newman Library Multipurpose Room (NLI credit available)
Join ContentMine instructors for an overview of text and data mining, and an introduction to ContentMine tools for text and data mining of scholarly and research literature.
Wednesday April 12 Making Visible the Invisible: Data Visualization and Poster Design 9:30-11:00am, Newman 207A (NLI registration)
Visually representing data helps users and readers engage with the content, understand key findings, and retain information. Exploring, creating, and presenting these visual representations is becoming critical for teaching, academic research, and professional engagement. In this session we will explore the basics of data visualization and poster design, and look at a few tools to create different kinds of visualizations. We will also discuss the academic and professional value in visualizing data.
Wednesday April 12 ContentMine and Specialized Tools for Life Sciences Research 11:15-12:05pm, Newman Library Multipurpose Room (NLI credit available)
Join students in a computational biochemistry informatics class session for an introduction to ContentMine open source tools for text and data mining to explore research literature sources, with a focus on tools related to mining and exploring content for life sciences research (phylogeny and visualization).
Wednesday April 12 Lunch with ContentMine guest speakers and program participants 12:30-1:30, Location TBA (Registration required; Limit: 50 participants)
Wednesday April 12 Text and Data Mining Forum 2:30-3:45pm, Newman MultiPurpose Room (NLI credit available)
Join our panelists for a discussion about opportunities and challenges related to text and data mining, with a focus on research purposes and information access. Audience questions are encouraged.
Tom Arrow (ContentMine)
Tom Ewing (College of Liberal Arts and Human Sciences, Virginia Tech)
Weiguo (Patrick) Fan (Pamplin College of Business, Virginia Tech)
Ed Fox (Computer Science, Virginia Tech)
Leanna House (Statistics, Virginia Tech)
Brent Huang (Computer Science, Virginia Tech)
Wednesday April 12 Introduction to Content Mine: Tools for Mining Scholarly Literature 4:00-5:15pm, Newman ScaleUp Classroom (101S) (NLI credit available)
Join ContentMine instructors for an overview of text and data mining, and an introduction to ContentMine tools for text and data mining of scholarly and research literature.
Thursday April 13 ContentMine Tools to Explore Scholarly Literature: A Full Day, Hands-On Workshop 9:00am – 4:00pm, Newman Library 207A (Registration required; also, NLI credit available; Coffee and Lunch provided)
During this workshop, participants will: (1) ensure the software is functioning on their laptop computers, (2) participate in individual and group hands-on exercises to become more familiar with ContentMine tools, and (3) have the opportunity, with ContentMine instructors’ support, to experiment with using ContentMine tools to mine scholarly literature and explore results specific to their own research project goals. Prior to the workshop, attendees will receive instructions to download software and make any other preparations needed to get the most out of the workshop.
As part of Open Access Week, the University Libraries and the Graduate School offered two travel scholarships to OpenCon 2016, a conference for early career researchers on open access, open data, and open educational resources. This is the third year we have jointly supported graduate student travel to the conference. From a pool of many strong essay applications, we chose Mayra Artiles, a Ph.D. candidate in Engineering Education, and Daniel Chen, a Ph.D. candidate in Genetics, Bioinformatics, and Computational Biology. In addition, Mohammed Seyam, a Ph.D. candidate in Computer Science, attended. All were in Washington, D.C. for the conference November 12-14, and sent the reports below. Be sure to check out the OpenCon 2016 highlights.
Mayra Artiles writes:
Being as open as possible – OpenCon 2016
This year I had the opportunity to attend OpenCon 2016 in Washington, DC. When I initially applied for the scholarship, I had only a vague idea of how the Open agenda tied into my research and why it was important to me. I was not prepared for what the conference would spark. While Open in the US is mainly focused on open access to journals, the global idea of Open is as diverse as our problems. Interacting with people from different parts of the globe who were amazingly passionate about Open in general, I learned that open access to journal articles is, relatively speaking, a first-world problem. While some countries fight for journal access, many more fight for textbooks, and others fight for reliable internet. The more people I met, the more I learned how all of these unique issues nest under the large umbrella of making knowledge accessible on a global scale. One thing that came out of these conversations was my involvement in a collaboration to create OpenCon Latin America, a conference similar to the one we had all just attended but held entirely in Spanish, empowering people and spreading the Open ideal in a language spoken by over 425 million people.
This made me think about the following question: how can we, as Hokies, be as open as possible with our research? While reforming the academic tenure process and breaking the paradigms around open access journals is an endeavor of huge proportions, we can take small steps toward being more open every day. We need to be as open as possible and as closed as necessary. For this reason I have made a list of steps we can take to be open today. The best part is that all of these resources are open:
Take stock of all your publications and make a list of the journals you’ve published or plan to publish in.
Visit Sherpa Romeo and look up these journals. This page will provide information on which parts of your work are shareable and whether or not there is an embargo on your work. If you’re lucky, you can share a copy of your pre-print.
Share as much as possible on repositories such as VTechWorks and other sites such as ResearchGate.
Create your impact story at ImpactStory (all you need is an ORCID profile). Our work should mean more than the number of times we are cited, and this website shows just that: it gives you a score for how ‘open’ your work is, and shows how many people saved, shared, tweeted, and cited your work, and across how many channels, among other things. As researchers, we are more than our h-index.
Have a conversation with your research peers and advisors on the value of open research. While we can’t convince everybody to suddenly publish in open access, we can begin the conversation and break the paradigms. A great resource to learn more about the value of open research is Why Open Research?
Daniel Chen writes:
What is “open”? Merriam-Webster tells us that it is “having no enclosing or confining barrier: accessible on all or nearly all sides”. For OpenCon, access (to academic publications), education, and data lay at the center of its mission.
The conference brings together a select group of like-minded individuals who are all passionate about openness. Since the conference was single-track, everyone could focus on the various projects, hurdles, and conversations people are having about Open around the world. We had plenty of time and space to roam around American University and continue conversations. I was lucky and privileged to be one of the select attendees and to represent Virginia Tech.
My road to Open runs mainly through open education and open data. I teach for Software Carpentry and Data Carpentry and support NumFOCUS. It is logical, then, that my definition of Open mainly centers on open source scientific computing. It’s a very specific subset of Open, and OpenCon helped me remember what role I play in the larger Open movement.
For me, Open Education has meant teaching the Creative Commons-licensed Software Carpentry material over the past three years. In that time, my idea of open education revolved around higher education: textbooks for university students, scientific computing materials for graduate students, resources for open source. I was reminded that open education is not just for graduate students trying to improve the quality of their research, and that textbooks and educational materials are not just for university students. Open education is used to teach students of all ages: lesson materials and books for elementary school, and textbooks for middle school, high school, and university. It allows students and educators to invest resources in other ways that help foster better learning. Here at Virginia Tech, you may notice OpenStax books in the library, but the Rebus Community is another resource and place to get involved with open education materials.
As a data scientist, I am constantly combining disparate datasets from a myriad of sources to answer a research question, so I rely heavily on open data. Many cities in the United States now have open data portals (e.g., NYC Open Data), and government agencies such as the Department of Commerce host a plethora of open datasets. These datasets are great for an analyst such as myself, but open data sources such as OpenStreetMap and ClinicalTrials.gov also help with urban planning in cities and provide drug trial data and results to people all over the world.
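The mechanics of combining disparate open datasets are simple to sketch. Here is a minimal pandas example; the datasets and column names are invented for illustration, not taken from any real portal. An outer merge with an indicator column makes the coverage gaps between two sources explicit:

```python
import pandas as pd

# Hypothetical extracts from two open data portals, keyed by ZIP code.
permits = pd.DataFrame({
    "zip": ["10001", "10002", "10003"],
    "building_permits": [120, 85, 240],
})
air = pd.DataFrame({
    "zip": ["10001", "10002", "10004"],
    "pm25_mean": [9.1, 11.4, 8.7],
})

# An outer merge keeps rows that appear in only one source; the
# "_merge" indicator column records where each row came from.
combined = permits.merge(air, on="zip", how="outer", indicator=True)
print(combined)
```

In practice the join key is rarely this clean (ZIP codes, census tracts, and street addresses all need normalization first), which is much of the day-to-day work of combining open data.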
One of my favorite parts of the conference happened on the second day, when we shifted from a single-track conference to an unconference-style meeting. Attendees pitched various discussion topics and then dispersed across the American University Law School. I attended a discussion about openness in academia, where we talked about how we incorporate it into our academic lives. Some of us (including myself) are lucky that our advisors understand openness; most, if not all, of my research code carries an MIT license. Others found the challenge of pushing and fighting for ‘openness’ a way of disrupting the traditional ivory-tower philosophy. One attendee was a freshman undergraduate trying to understand what openness is and how he could incorporate it as he begins his academic career. That felt like a fitting metaphor for what OpenCon stands for: empowering the next generation and pushing openness forward.
I also attended the breakout discussion about global health, where we talked about how openness can improve global health. I met many people who work in the health space and use open data and open access sources to improve health. For example, Daniel Mietchen from the NIH is part of a global infectious disease response team building the tools and protocols necessary to respond to the next epidemic. The 2014 Ebola and 2015 Zika outbreaks are recent reminders of how much we can improve our global response to infectious disease outbreaks. In this unconference session, we also talked about drug results reporting on ClinicalTrials.gov. The problem is that even though clinical trials are listed there, not all of their results are reported after the initial trial listing. This deprives people of the chance to educate themselves about the treatment options for a disease, and more pressure is needed to make sure this information is distributed in a timely manner.
Our final day at the conference had everyone working in groups to talk with various funding agencies and senators about openness. Essentially, we became lobbyists for Open. I was lucky enough to be in two groups. My first group talked with Rachael Florence, PhD, Program Director of the Research Infrastructure program at the Patient-Centered Outcomes Research Institute (PCORI). We talked about PCORI’s goal of making study results and data more widely available, raised concerns about disseminating clinical trial results, and discussed faster reporting, lowering publication bias, reproducible research, and data sharing. We also explained what OpenCon is, and may have piqued Dr. Florence’s interest in attending next year.
My next stop was the office of Virginia Senator Mark Warner. We did not get to talk to him directly, but instead met with his Senior Policy Advisor, Kenneth Johnson, Jr. It was during this discussion that I wished we had had more training in being effective lobbyists. We only made two passes around the circle during our meeting: the first to introduce ourselves, and the second to explain how Open plays a role in our lives. There was a brief conversation about open data, open access, and open education for the state of Virginia, but I wished we had been able to talk longer. Senator Warner is already familiar with many aspects of Open, so not much convincing was needed, but I worried about how other groups fared.
In the end, I felt OpenCon was a great experience. I made new connections with people from all over the world and gained new perspectives on how to talk about Open. It has also given me an idea for a side project: using ClinicalTrials.gov data to track results-reporting rates for various clinical trials. I hope I am lucky enough to attend again next year, and I urge everyone at Virginia Tech to learn about Open and get involved!
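As a rough sketch of that side-project idea: given a flattened export of trial records with a flag for whether results were posted, a per-sponsor reporting rate is a one-line groupby. The column names here are placeholders I made up, not actual ClinicalTrials.gov fields, and the data is a toy example:

```python
import pandas as pd

# Hypothetical flattened export of trial records. Real ClinicalTrials.gov
# data uses different field names; these are stand-ins for the sketch.
trials = pd.DataFrame({
    "sponsor": ["A", "A", "A", "B", "B"],
    "results_posted": [True, False, True, False, False],
})

# The mean of a boolean column is the fraction of True values,
# i.e., the results-reporting rate for each sponsor.
rates = (trials.groupby("sponsor")["results_posted"]
               .mean()
               .rename("reporting_rate"))
print(rates)
```

Run over a full export, a table like this would make it easy to see which sponsors lag in posting results after the initial trial listing.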
The Open Science Prize encourages experimentation with open content and open data to enable discoveries that improve health and push research forward. The six finalist projects address FDA trials; emerging diseases; mental and neurological disease modeling; open neuroimaging data; rare disease research; and global air quality.
The Wellcome Trust, the US National Institutes of Health (NIH), and the Howard Hughes Medical Institute have sponsored this award, “to stimulate the development of novel and ground-breaking tools and platforms to enable the reuse and repurposing of open digital research objects relevant to biomedical or health applications.” Further details about the contest are described in the Open Science Prize FAQ and in this Open Science Prize Vision and Overview from the BD2K Open Data Science Symposium, #BD2KOpenSci.
Fruit Fly Brain Observatory – Pools laboratory data from around the world and connects data related to the fly brain, facilitating the complex scientific collaboration needed to advance computational models of mental and neurological diseases.
Lulu has announced the launch of a new online publishing platform that it is calling Glasstree. If you’ve heard of Lulu before, you probably know it as one of several heavyweight players in the self-publishing arena, alongside Amazon (Kindle Direct), Apple (iBooks Author), and iUniverse. What makes the Glasstree announcement intriguing is that Lulu is explicitly setting its sights on “academic and scholarly authors and communities.” In other words, Lulu wants to be a scholarly book publisher.
What are the chances that Lulu’s experiment will succeed? At first glance, it sure seems unlikely. As popular as self-publishing has become (DIY titles account for over 40% of all trade eBook sales), any impact it has had on the academy has thus far been modest. After all, one of the bedrock principles of scholarly publishing is gatekeeping (i.e. letting in the good; keeping out the bad), a principle that seems fundamentally at odds with the self-publishing tenets of fast, easy, and low-cost. Indeed, DIY publishing companies pride themselves on minimizing the barriers to publication—surely a sign that Lulu faces an uphill battle. And yet, a closer look at the Glasstree website suggests that Lulu has a strategy that is at least worth watching.
To its credit, Lulu doesn’t hide its intentions. Visitors to the Glasstree home page are immediately greeted with a barrage of not-so-subtle one-liners aimed squarely at appealing to scholarly authors:
PUBLISH AND PROSPER
Glasstree Returns Control to Academic Authors
Experience Scholarly Publishing in a Whole New Way
A Better Publication Model for Academic Authors
What author doesn’t want more control over the publishing process or, for that matter, a chance to publish and prosper? You’ve certainly got my attention. Then comes the real sales pitch:
The existing academic publishing model is broken, with traditional commercial publishers charging excessive prices for books or ridiculous book publishing charges to publish Open Access books.
The give-away here is the mention of “traditional commercial publishers,” an obvious reference to the handful of conglomerate publishers that now control a sizable share of the academic monograph market—publishers including Elsevier, Springer, Wiley, and Taylor & Francis, which together churn out thousands of monographs each year at list prices that routinely exceed $100 per volume. Indeed, as one reads on it becomes clear that Lulu is appealing not so much to scholars working on their first (i.e. tenure) book but to experienced scholars; specifically, experienced scholars who have published previously with a commercial academic press and who feel burned by the experience. The following paragraphs reel off a familiar litany of complaints that one might hear outside the book exhibit hall of pretty much any scholarly conference:
Academics or their supporting institutions are poorly paid for their content. Profit margins are strongly skewed towards the publisher, with crumbs for the author and/or their employers. Submission to publication times are far too lengthy and service and marketing support insufficient.
Besides the lack of editorial assistance, marketing support, and a complete absence of urgency, traditional academic publishers are now often viewed as cherishing profits over the advancement of knowledge, and accommodate their shareholders over their authors.
Some of these complaints surely could be leveled against university presses, but the real target here is obviously commercial publishers, viz. the presses that cherish profits over advancement of knowledge while accommodating the interests of shareholders over authors. Indeed, it is this resentment-stoking aspect of Glasstree’s appeal that surely has a chance of resonating with a specific subset of authors—those both inside and outside the academy who are not subject to the pressures of tenure and promotion and therefore can afford to publish their books wherever they want. While it is hard to imagine most research universities taking a Glasstree book seriously for tenure, I can certainly see established scholars, particularly productive ones who are no longer in need of a monograph for promotion, using a service like Glasstree to publish “labor of love” books or books that grow out of side projects that wouldn’t count anyway toward career advancement—or simply books that no university press will take on. In short, Glasstree could be an attractive outlet for any number of books that typically would go to commercial academic publishers more so than university presses.
Of course, some will argue that commercial academic publishers, despite their faults, still employ peer review. It may not be as rigorous or as consistent as the peer review done by university presses, but it is certainly more than what one gets from a self-publishing company. But this is where Lulu’s plans for Glasstree really get interesting. According to the Glasstree website, Lulu is also launching Glassleaf Academic Services, which offers “peer review, all forms of editing, illustration and design, translation and professional marketing services. These services are designed for the academic community and are offered at affordable prices.” Lulu does this by offering tiered service packages (1-Star, 2-Star, and 3-Star) that start “as low as $2,625” and can run to over $8,000. Books can then be published in a variety of formats—both softcover and hardcover as well as eBooks, including Open Access eBooks.
It is unclear who will be doing all of this work but it seems that Lulu actually plans to hire living and breathing people—Content Project Managers—to at least oversee some form of peer review, copyediting, design, and marketing, even if they have some way of automating the work to exploit economies of scale. Here’s what the website specifically says about peer review:
Peer Review: Strengthening Your Content
This service is designed to save you time and effort in gathering peer reviews of your work. A Glassleaf Content Project Manager will manage the entire peer review process and consolidate feedback for you. Your Content Project Manager will compose a questionnaire and share it with you for review prior to distributing it with your content. The number of reviewers will vary according to discipline and your preference.
After the review process is complete, your Content Project Manager will provide you with the actual peer reviews and, in a summary report, will highlight significant and consistent commentary from your peers’ comments. After the report is compiled, you will meet with your Content Project Manager to review the summary of the reviewer’s commentary.
It is also worth noting that Glassleaf plans to offer three types of peer review: open, single blind, and double blind. Authors will be responsible for paying reviewer fees, although the Content Production Manager will “negotiate the lowest possible fees on the author’s behalf.”
Once again, I want to reiterate my overall skepticism that this type of DIY publishing will have a serious impact, at least for now, on scholarly monograph publishing as it interlocks with the current tenure-and-promotion system. In this, university presses still have a unique role to play. Still, one can’t help but wonder if Lulu isn’t onto something. Might they have found a sweet spot between the two endpoints of the scholarly publishing spectrum, non-profit university presses on the one end and commercial publishers on the other? The missing piece for self-publishing companies like Lulu has always been quality control, but as the quality of commercially published books continues to fall and price tags continue to rise, the Glasstree model has some definite advantages. Even the pay-for-services aspect doesn’t seem so foreign now that various proposals are being considered for subvention-funded (i.e. pay-to-publish) OA monographs. Perhaps the emergence of companies like Glasstree will force us, at last, to get a grip on what it costs to produce scholarly books and, more importantly, find ways to actually drive down those costs.
No matter how you look at it, the once-staid world of scholarly publishing is getting messier and messier. And it’s only going to get more so. According to the Glasstree website, Lulu has its sights set on more than just books:
Glasstree, in its initial phase, will publish books—monographs, thesis, series, serials, textbooks, etc. (both soft and hardcover, with a range of paper types, binding types, etc.), and eBooks (including Open Access eBooks). Future phases will focus on article based publishing, journals, conference proceedings, data sets, etc.
We all need to brace ourselves for what lies ahead.