More on Wikipedia

A little while back, I discussed the secret use of Wikipedia and a possible "grade" hypocrisy. To briefly continue that conversation: the American Historical Association today posted an article by historian Peter Webster titled "Wikipedia, Authority and the Free Rider Problem." The article makes many of the same assertions and points that mine did. For instance, he does not accept Wikipedia as a proper source, but believes it to be reliable outside of highly loaded topics, and he encourages the use of Wikipedia as a way to find sources. Perhaps, if I were a more famous academic, my own blog post could have been posted by the AHA instead of his; or maybe I can still get him for plagiarism (joking, of course). Regardless, his post is insightful and I encourage you to read it. He actually expands on the idea of experts updating Wikipedia pages so as to make them more credible as a communal source, though that becomes fairly problematic when considering who gets credit for the labor of writing and updating them. That is why Webster considers himself a selfish Wikipedia user, which is what my first Wikipedia post accused all of us of being: he takes relentlessly from Wikipedia, giving no credit to the actual authors of the page, just the citations at the bottom, and never updates it himself. But what happens if you do update the page and cite your own work as the source? Could that be a way to take credit and expose your own research to wider audiences (if the page isn't too obscure)?

Outsourced Grading and the Degradation of the Modern Professoriate

I stole the link in this post from my friend and colleague Sascha Engel's Facebook, so I first want to give credit where credit is due. This article from the Chronicle of Higher Education's website discusses the recent phenomenon of "outsourced grading." Basically, outsourced grading is exactly what it sounds like: professors and teaching assistants upload students' papers to the web and send them to a private company (based in India) for grading. EduMetry Inc., the parent company, says "the goal of their Virtual-TA service is to relieve professors and teaching assistants of a traditional and sometimes tiresome task—and even to do it better than TA's can."

The article stopped me dead in my tracks. On one hand, I had never even thought of such a service (hats off to the capitalist who saw the opening in the market). On the other hand, I thought, how convenient. As a doctoral student and the teacher of record for a modern American history survey, I am currently swamped with grading my students' book reports, writing their final exam, and writing research/historiographical term papers for the courses in which I myself am enrolled; understandably, I really wish I could outsource those book reports. But almost immediately (perhaps on a third hand?) I started thinking about the pitfalls of this service. EduMetry, according to the article, does foster some communication between the teachers and the graders, but how well can graders in India, who have never met the professor face to face, actually provide in-depth feedback for the students? Sure, they may have the time to point out grammatical and spelling errors that rushed professors often miss, but it is the content that is important, at least in my classroom. Maybe math and business classes would be different. Additionally, EduMetry claims that all of its graders have advanced degrees, at least a master's. This, quite evidently, is wide open for exploitation. Honestly, they could just be lying. Is anyone going to check the credentials of the graders? Is it really even possible? Moreover, higher education in India, outside of the top tier, is questionable at best. This Forbes article highlights some of those points: faculty shortages at public universities, out-of-date materials, and questionable mastery of basic subjects by graduates. So, in short, I don't trust EduMetry to grade my students' papers.

There is something else that is very unsettling about this. I asked myself, as did Sascha, are we (professors, teachers, assistants) becoming obsolete? To me, this appears as just another means by which private enterprise, along with neoliberal economic policy, is encroaching on the public university system. Undoubtedly, outsourcing grading is not as serious as the budget cuts and hyper-competitive university management paradigms increasingly called for by conservative lawmakers, but it is connected. As tenure-track positions become harder to find, entire departments get cut, and the demand for cheaper adjunct professors grows, this seems to be only one more step toward a "more efficient," business-minded system of higher education. If college courses continue to be relegated to online "classrooms" and to a class of underpaid adjunct "moderators" who are spread thin as it is, and if we now hand over the grading of written papers (an intimate medium between professor and student, meant not only to improve writing but to provide insight into complex issues and foster the student's development), then what will happen to traditional education? Call me outdated or idealistic if you will, but, to me, the mission of the American college/university was, and is, to develop civic virtue, foster diversity and interaction, and confront tough questions that the business sector avoids. How can this be accomplished if the distance between professor and student continues to expand on the heels of increased classroom mediation by technology and privatized services? I, for one, do not want someone else doing my job for me, no matter how stressful it is to get my students' book reports graded in a reasonable amount of time. If this trend continues, I may not have a job for much longer.

Secret Wikipedia and Possible “Grade” Hypocrisy?

Yesterday, my group mates and I were discussing what to do if a student cites Wikipedia as a source on a graded project, and the discussion led to some interesting insights about how Wikipedia can be, and is, used. On one hand, we all agreed that a Wikipedia entry could not, and should not, be accepted as viable source material. The fact that Wikipedia "is a collaboratively edited, multilingual, free Internet encyclopedia" obviously leaves it open to inaccuracy and thus too much risk (I got that brief definition off of Wikipedia's own Wikipedia page, which I imagine is monitored pretty well). That point is obvious, but what was interesting is that everyone in our group, and I imagine our class, confirmed using Wikipedia regularly, almost daily. So we began to wonder whether it was hypocritical to keep our students off of it. We concluded that it was not, for several reasons: 1) we all used the website either to satisfy simple curiosities or as a starting point for basic research; 2) whenever we did use Wikipedia for academic purposes, we checked the citations on the page and confirmed that they were real and accurate (we also realized that there is a lot of legwork involved in using Wikipedia as a starting point); and 3) we found that Wikipedia entries on more obscure topics in our fields were actually highly reliable, and that Wikipedia itself has become increasingly regulated so as to take care of blatantly inaccurate edits. With regard to that last point, I can personally attest to adding one of my friends, as a joke, to the "Notable Residents" section of the small, obscure Iowa county that he grew up in—Wikipedia had it removed in under 20 minutes. So if Wikipedia is becoming an increasingly accurate, highly policed website, then what can we make of it as academics? Naturally, to answer that question, I turned to other academics.

In the article “Academics’ Views On and Uses of Wikipedia,” in gnovis: Georgetown University’s peer-reviewed Journal of Communication, Culture, and Technology, five academics, from various disciplines, were asked to express their views on the use of Wikipedia and other Web 2.0 technologies within their institutions. And again, as in my group’s discussion, the author of the article highlighted several common trends among the subjects:

  1. All but one of the interviewees indicated that they use and actively encourage their students to use Wikipedia as a starting point for research.
  2. All interviewees indicated that they use Wikipedia to look up definitions and descriptions of concepts and theories for both professional and personal interests.
  3. All but one of the interviewees stated that Wikipedia is more up-to-date than many other sources that are considered legitimate by professionals. Not only did they mostly find it accurate, but one interviewee found that contemporary topics regarding his own research were accurately updated within one minute of the event or information becoming public.
  4. Two interviewees stated that the communal nature of Wikipedia expresses a higher variety of perspectives on a topic than a journal article or textbook.
  5. All interviewees stated that they thought Wikipedia was relatively accurate, but that they did approach it with skepticism. All indicated that other scholarly sources and encyclopedias were more credible and reliable.
  6. All interviewees abhorred the idea of citing Wikipedia in an academic research study.
  7. Most interviewees stated that the “technical and factual” information on Wikipedia is reliable, but the interpretive value of those facts varies depending on the entry.

Personally, I found some of those common themes interesting, especially the frequency with which the interviewees used Wikipedia. However, there were three trends in the research (not listed above) that I not only find interesting but am also wary of. The first is that two of the interviewees stated that they would, and do, let students in introductory courses cite Wikipedia in their work, and that they actively use Wikipedia in those courses. This worries me because I do not agree with setting a standard in introductory courses that leads students to believe that Wikipedia is a reliable academic source. But is this not hypocritical, especially given that we all appear to use Wikipedia? I was then troubled by the position that all interviewees took regarding the role of academics on Wikipedia: each believed that Wikipedia serves its purpose under its current structure, and each expressed negative views concerning an increased role for academics in writing and updating Wikipedia entries. Given that the interviewees used Wikipedia themselves, and two used it in the classroom, why would they not be in favor of more academics writing for the website? Finally, and perhaps most troubling, none of the interviewees knew anything about the policies guiding content generation on Wikipedia, and none had ever changed or updated material on the site.

So what do we do with Wikipedia articles? We all use them (don't lie, I know you do), but I cannot support citing Wikipedia as an academic source, even in introductory courses. I do find it valid for finding general information and as a starting point for basic research, so long as the sources can be verified. Additionally, I do not know if Wikipedia will ever bridge the small gap it needs to be considered credible, no matter how well policed it is or how expansive its web of editors becomes. Perhaps all we can do is continue to never, under any circumstance, let a student cite it, while keeping our own Wikipedia use secret, giving the site no credit as we steal its sources to jump-start our own research. I do understand that Wikipedia bypasses normal academic intellectual-property conventions by its basic structure. But is it not hypocritical for academics who must adhere to those conventions to use it as a kickstarter? It's at least an interesting loophole.

PBL: A Disciplinary Critique

In a recent post, which I consider a valuable read for all involved in pedagogy, my esteemed colleague Sascha Engel criticized Problem-Based Learning (PBL) as a "political stratagem" and for "reducing politics, and its science, theory, and philosophy, to mere 'ethical problems.'" And while I concur with many of his thought-provoking assertions, and sympathize with his concern for the social sciences in today's economic climate, I would like to convey some additional thoughts from within another discipline, even though many of the issues that Sascha and I face are, inherently, political.

PBL in the history classroom faces concerns similar to Sascha's, but also some unique to the discipline. On one hand, and this should be obvious, I do not have as many opportunities to implement a case study like the ones our groups are putting together in class. I simply cannot make up a case study. If I remember correctly, one video we watched earlier in the semester showed a high school history teacher creating fictional historical case studies in her classroom, a practice I adamantly disagree with. That is not history, nor the practice of the historian. However, I do think that real historical case studies can help inform decisions regarding current issues. For instance, my own group is putting together a fictional case study on the plausibility of hydraulic fracturing, or fracking, on private land in Appalachia, and there are certain insights to be gained by providing a longer history (containing numerous primary sources and secondary literature) of mineral rights within the region. But this practice also reduces history to a mere ethical problem for a panel of experts and students, and it is in great danger of throwing away any notion of historical empathy. In other words, PBL case studies in the history classroom (those that seek an answer to a particular ethical, political, or economic question) make it easy to forget that real decisions are, and were, already made for reasons that no panel of experts may be able to account for. Again with regard to mineral rights: it is easy to understand the environmental issues, but can we truly grasp the weight that selling those rights carries for individuals in an exploited and impoverished region of the United States?
I would argue that PBL would not foster that level of understanding, that level of empathy for very real emotions, and would instead focus the panel entirely on the economic, scientific, and environmental vitality of the business of resource extraction. That historical and contemporary empathy, or at least an attempt at it, is, I think, one of the roles of the historian.

Above, I attempted to put words to a difficult problem, and I am not sure how successful I was. But structurally, PBL is entirely different in the history classroom (once the practice of making up historical case studies is thrown away, which, I must stress, it needs to be). That is, historical case studies in which students determine an answer based on their own judgment and discretion are antithetical to the discipline; those decisions have already been made. I can, and do, however, provide numerous primary sources (political documents, speeches, maps and guidebooks, diary entries, fiction, music, film, etc.) in order to give my students a glimpse of how historians practice and think about history (systematically and empathetically). And we go through this practice together, each of us hopefully learning something along the way. This is what historian Lendol Calder calls "uncoverage," and despite its usefulness, it too has its limitations. Historians need context, and they need to be privy to the current debates in the field; most students simply do not have that luxury coming into the class. That is why historians must lecture: students do not understand the context of a primary document, and complex scholarship must be synthesized for them (which is difficult to do in a mere fifty minutes) so that they can place historical events, documents, and actors within those contexts. And history is important in this regard: the historical discussion is itself a discussion of the present, and those discussions have real-world ramifications (political, social, and economic). That, I think, is why there is a place for PBL with regard to modern case studies (real or hypothetical), and room to integrate historical documents into them.
But as a teacher of history, I simply cannot teach my undergraduates entirely, or even mostly, with case studies or uncoverage, no matter how many cutting-edge scientific studies say that I should because students don't learn as well in a lecture setting. And there is merit to those studies; I cannot throw them away either. Honestly, the only answer I have arrived at is that my class periods must consist of a combination of lectures, discussions, and moments to "uncover" history by analyzing primary sources. To what extent each of those matters, which also depends on the level of the class, I am still unsure. But it is my role as the teacher to figure that out; I must learn to teach as my students learn how to "do" history. For the sake of all of us, the problem-solving skills and conversations taught in the history classroom are still important to our modern world. And those skills, and those conversations, cannot be completely accommodated within the PBL paradigm that we seem to be pressed to adopt if we plan to keep up in our fields. History needs context; it wouldn't make much sense without it. And context needs more than lectures, PBL, and scientific knowledge.

Uncoverage in the Classroom: An Alternative to Tradition

As a teacher of history, I, along with all other historians, have put a lot of time into thinking about coverage in the classroom; it has basically become a necessary line of thinking. But for those of you interested, I am posting an alternative: "Uncoverage: Toward a Signature Pedagogy for the History Classroom." Though most of you do not teach history, the article is still relevant to those in other social science and humanities disciplines who teach surveys, and it's relatively short. So give it a read if it interests you.

A couple relevant quotes from within:

“Cognitive scientists have proven the problem with defenders of traditional surveys is that they do not care about facts enough to inquire into the nature of how people learn them. Built on wobbly, lay theories of human cognition, coverage-oriented surveys must share the blame for Americans’ deplorable ignorance of [insert discipline here].”

“Can beginning students learn to do [insert discipline here] the way professionals do it? Of course not…I make no attempt to cover the topics thoroughly or to provide a seamless, authoritative narrative or argument. Rather, the problem areas become opportunities for students and the teacher to do [insert discipline here] themselves.”

A Short Video on American Meritocracy and Black History Month

Being that it is Black History Month, I am sure that some of you have heard people say, "Where is White History Month? If everyone is equal, then why do black people get an entire month dedicated to their history?" If you haven't heard a white person ask those questions (snarky as they may be), I assure you that those people are out there, and I have encountered many of them. So I am posting this short, easy-to-understand video, The Definitive Response to Jerks Asking, 'But What About White History Month?,' because it addresses these questions while providing solid commentary on some of the topics we discussed in last night's class. Being in the fields of history and sociology, I have conducted a fair amount of research on the history of race and meritocracy in the United States, and, I must say, this video is worth watching. I actually plan to show it to my students the next time I lecture on race/ethnicity and the struggle for equality. The links mentioned at the end are worth checking out too.

Backward Revision in the History Classroom: Yet Another Attempt

The other day, as I was procrastinating on the internet instead of tending to my more important responsibilities, I was alerted to a January 2013 study by the National Association of Scholars (NAS) titled Recasting History: Are Race, Class, and Gender Dominating American History? For those of you who do not know what the NAS is, its mission statement describes it as "an independent membership association of academics and others working to foster intellectual freedom and to sustain the tradition of reasoned scholarship and civil debate in America's colleges and universities." But if you want to truly understand the flavor of this organization, check out a list of their issues and ideals here (thank goodness they are tackling such pressing intellectual issues as "partying and hook-up culture") and board member Irving Kristol's attack on the "anti-capitalist aspirations" of the Left on college campuses, political correctness, and multiculturalism here.

Needless to say, the NAS leans pretty far to the right. However, its regular reports, and the university faculty and politicians who comprise the organization, do have a significant voice, especially outside Academe (notable NAS board members include Republican superstars like Mr. Kristol, mentioned above; former ambassador and Reagan advisor Jeane Kirkpatrick; education activist Chester Finn; and founder Stephen Balch, who received the National Humanities Medal from President George W. Bush in 2007). So, given the resonance of this organization in the media and in the world of politics (I'm not even completely sure the two can be separated anymore), I did take the Recasting History study seriously, and I offer my response.

In its study of history course listings, syllabi, and reading lists at the University of Texas and Texas A&M University, the NAS concluded that "traditional" fields of historical analysis (military, diplomatic, and economic) are being overshadowed, and thus drowned out, by race-, class-, and gender-based study. But are these categories of analysis so rigid that one covers up the other? Absolutely not; race, gender, and class are important categories of analysis in all historical fields, not just their own. In fact, diplomatic and military histories are enriched by adding race to the analysis. It is nearly impossible to conceive of an American history course on the Old South, Reconstruction, Jim Crow, or the mid-twentieth century without a combination of racial and political sources; the categories simply cannot be separated. As for economic and military histories, the stakes are equally high: how could one truly understand employment and consumption patterns without considering how gender and race affect them? And how could the history of the little-known First Battle of Saltville (an interesting Confederate victory known for the strategic importance of the town's saltworks, its number of fatalities, and its complex troop movements) be complete without considering the racial motivations behind the massacre of numerous wounded black soldiers following the retreat of Federal troops? The answer is that we could not understand those histories, nor would they be complete, without race, gender, and class in the equation. At best, the exclusion of these modes of analysis would create what I call history-buff history (troop movements, names of key actors, and obscure facts about a location or time that hold little significance without context); at worst, it would result in outright whitewashing of the historical record.

The only thing in the study on which the NAS and I agree is the continued importance of history education in the modern university; its call to return to a form of historical inquiry that predates the 1960s is horribly misguided and antithetical to the goals of the profession. Moreover, with Texas lawmakers attempting to revise American history textbooks to "untarnish" the image of America, with Republican lawmakers increasingly willing to gut humanities and social science departments at state universities to save money, and with rote, president-to-president history and civics classes in primary schools designed to meet the No Child Left Behind agenda, this call to drastically revise history curriculums by cutting back race-, gender-, and class-based analyses could not come at a worse time. If anything, too many students arrive at college with inadequate knowledge of history and social studies because of shortsighted measurement systems that reward memorizing a set list of criteria, hampering students' ability to solve problems and to analyze historical scenarios, power structures, and larger social and cultural contexts. And this is precisely why race, gender, class, and, I would argue, sexuality must not be removed, or even cut back, in the classroom: because that is exactly what they foster, problem solving. If the NAS is truly concerned about the teaching of history in college classrooms, then it should set politics aside (which I believe is the study's real concern) and welcome all types of historical inquiry that seek to fill the gaps of the outdated, and unrealistic, paradigms it is championing.

Is Blogging Increasing Democracy? Is This A Good Thing?

Blogging is something that I am not unfamiliar with: I follow blogs, have favorite and least favorite blogs, and understand how blogs have recently changed the academic experience for both professors and students. But until today, I had never blogged. So, in preparation, I scoured the back of my mind in order to distill a topic that somehow seemed “worthy” of posting to a public forum. And that led me to an interesting realization: I do not have anxiety about putting my words, and by extension myself, out there for all to read and see. I do not fear being deemed unworthy. Actually, I discovered that most of my anxiety over blogging comes from putting my words, which may or may not be valuable, into obscurity. Blogging could be a waste of time and energy that could be spent elsewhere.

It seems to me that blogging, as with many facets of the continuing Digital Revolution, has led to an increased democratization of ideas and content. That is, everyone can have a blog, they can blog about anything, and their words are thus open for public consumption. In fact, there seems to be little that can't be found or put on the internet; all that can be imagined can be released to a public forum and then extracted (perhaps a fee may be required for some things, but still, this is only a minor restriction: if people want to consume it, they will consume it). So, my next thought was that this is undoubtedly positive. Hurrah, digitization; a voice has finally been given to the masses! But then another side of this democratization (a term I am still not entirely sure defines what I am talking about here) began to reveal itself. If more and more people are regularly publishing blogs about a seemingly limitless litany of topics, then will my blog (my words) be seen? How could someone possibly find my blog amid the chaos of others seeking the same recognition that I am? Recognition of one's ideas is the point, right? And who is to say that those who are recognized merit being recognized? (At this point, and I wish I had a footnote for this, do note that I am not using "recognized" to mean famous. I am also referring to someone who has garnered a small but significant following of friends and peers, and who produces discourse that resonates with even a small fraction of a population. A class blog could count, because it is still accessible to the larger population of the internet. But I digress; back to the topic at hand.) I am absolutely sure that somewhere, someone has a blog full of insightful ruminations that is rarely seen and continues to go unrecognized.
Perhaps, for whatever reason, that is the person's intention (he or she may want to keep the audience small and inclusive), but I am sure there are plenty of people who desire to be heard, who may have something valuable to contribute to public discourse, but who continue to be drowned out by the endless chatter and sensationalist headlines of the internet. I do not believe such a person is to blame for their un-recognition, either. They may merely want to remain true to their topics; they may refuse to deceive with blatantly false but intriguing titles; or they may simply not have received the circumstantial luck that leads to being "discovered" by a fan base. This, then, is the side of blogging that causes me the most angst. If I am going to blog (and yes, there is a side of me that sees value in blogging and desires to enter the fray), then I do want to be heard. I do not want to exist in a void of obscurity. And perhaps, even if just one or two people read my blog, and it speaks to them in some way or provokes a thought, that will be enough to encourage me to continue blogging. But as more and more people begin to blog, and the internet continues to link us to an incomprehensible pile of information, we expect the most valuable and most talented bloggers, intellectuals, and social commentators to rise above the nonsense, the incorrect, and the not-valuable. This may not be the case in an age when everyone has an equal claim to a realm of discourse that requires few, if any, credentials. The internet (dare I say blogosphere? actually, I think I hate that word for some reason) may be getting too democratic and far too loud.