McLuhan and Determinism: I really have little choice!

Marshall McLuhan’s media work has been criticized as technologically determinist. In other words, he supposedly argues that new media technologies (such as the printing press, radio, and TV) effectively dictate the actions of the people who use them. In a determinist approach, human choice is minimized or disappears altogether.

In the world of the Internet, smart phones, and social media, one might think that the new media dictate how people communicate with others and how they act. As an example, some people contend that Facebook not only made possible the communication that spurred the Arab Spring demonstrations in northern Africa; it actually determined the conditions in which the demonstrations would occur. First step: Facebook; second consequential step: demonstrations. Simple and obvious, right?

Historians of technology (and humanists in general) tend to argue that technological determinism doesn’t really exist. People use technologies—media technologies and others—and make conscious choices about their use. As evidence, some commentators point to the fact that people tried to spur demonstrations using Facebook a year before such efforts proved successful. Obviously, then, other conditions needed to exist before the Arab Spring inciters could achieve their goals.

As a human being (and humanist), I like to think that I have control over how I use technologies. Heck—I even teach this notion in my classes.  Humans are in charge—not machines!

Yet, even as I try to empower students (and disempower technology!), I sometimes wonder whether limits to my control exist. In the media world, for example, how much am I really in control?  Can I really decide to opt out of using certain new media technologies simply because I don’t like them?  Would I be able to retain my job (as a college professor) if I took steps to disengage myself from the technologies?

Sure, I could decide to avoid email, but given that everyone else uses it—even my employer does, by emailing me vital information about taxes, health-insurance plans, etc.—can I really opt out? With students using all sorts of electronic media to communicate with each other, can I really expect them to try to communicate with me without using email? Do I really want them to phone me or stop by my office at odd times, which they would do even less frequently since they’ve become accustomed to emailing their other profs? By imposing requirements on them that make their lives so much more inconvenient, I would be hindering their ability to learn and to be inquisitive at a time when my job is partly to make them more informed and more critical.

In other words, while I may feel that I don’t want to use new media technologies (so, for example, I don’t feel obliged to respond to a student’s query at 10:30 PM, near my bedtime), I do so at the risk of forcing others to alter their own behaviors. And many of us don’t want to impose our values and choices on others. In this way, then, we conform to the standards of the time, which in this case means learning how to use these technologies even if we are not comfortable doing so.

One could argue that this situation does not reflect technological determinism since it’s other people and society in general (and my desire to be part of society) that dictate how I use technology. In other words, maybe it’s social determinism rather than technological determinism that’s at work here. Media technologies may not dictate how I act, but because they have become so widely used, I cannot simply choose not to use them and still remain connected to modern society.

So, while the media may not exactly be the message, the popularity of those media may make it difficult for me to live my life in the absence of the message. We are too interconnected to opt out of what everyone else is doing; if we truly want to avoid using the technologies, we become disengaged from modern life.  We may have a choice, but the consequences of the choice are so extreme that it’s not really a choice after all.

Hence, maybe the existence and widespread use of the media devices make it appear that determinism (technological or social) remains alive and well.

Thanks a lot, Mr. McLuhan!

Computer Lib or Computer Fib?

We have been reading about visionaries who have (correctly) forecast some amazing things that computers can do. Most recently, we read how Ted Nelson described (in the 1970s) how computer images on screens could be resized, how people would use their fingers instead of styli or other pointers, and how these technologies would make it possible to greatly expand existing ways of learning and teaching and to create novel ones.

I’m not sure I buy the hype.

To be sure, I enjoy using my iPad and iPhone, and indeed, they have enabled me to do things that I couldn’t do previously. (My favorite use of the iPad is to read the newspaper while exercising in the morning—even before the printed newspaper arrives on my driveway.) And I truly enjoy being able to use my computer devices to retrieve information easily. (I no longer need to walk to the library and spend hours locating print or microfilm journals every time I want to get an article.)

In other words, I have clearly benefited from some of the more prosaic uses of computers that many of the early pioneers imagined—namely the ability to obtain, store, and manipulate vast amounts of information using these new tools.

But have I used these new tools to enhance my teaching? I certainly have cut down on the amount of paper I distribute to students, as I put my syllabi and teaching and reading materials online. That’s nice. And I communicate with students by email in ways that help me ensure that (at least theoretically) everyone knows about upcoming assignments and other events.

But I don’t think that many of my colleagues (Professor Amy Nelson at VT serving as an obvious exception) have yet taken the leap to use the new technologies in ways that truly exploit their potential. Perhaps the poor record of adoption stems from the fact that the new technologies remain so difficult to use. To develop her incredibly innovative new-media Russian history class, Dr. Nelson obtained a grant that bought out some of her teaching time and provided funds to purchase specialized software. Without these extra resources, would she have been able (or willing) to experiment with the new technologies? I doubt it.

And in many cases, some of us are not trying to get students to be truly creative and innovative thinkers. Rather, we’re trying to get them to develop the elementary skills they need before they can become creative and innovative. By this, I mean that we sometimes have to teach students the basics—not the creative stuff—such as how to write simple and grammatically correct sentences. We may have hoped that such learning had already been accomplished in the elementary and secondary public schools. But too often, students come to college without these skills, and we spend an inordinate amount of time re-teaching them.

Perhaps in this realm—of teaching basic skills—we can envision and profitably use computers, though not in terribly creative ways. Without much assistance from an instructor, computers can perform repetitive (and mind-numbing, but mind-reinforcing) exercises with students so they learn, once and for all, the various forms of the verb “to be,” for example. By doing so, students can avoid using the verb excessively and happily escape writing in the passive voice—a real no-no in the field of history.
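
Just to make the idea concrete, here is a minimal sketch in Python, entirely hypothetical (it stands in for no particular drill software), of the sort of unglamorous exercise a computer could run without an instructor: it simply flags the forms of "to be" in a sentence so a student can see how often the verb, and with it the temptation of the passive voice, creeps into their prose.

```python
# Hypothetical illustration only: a tiny drill that flags forms of "to be"
# so a student can notice overuse of the verb (and, often, the passive voice).
import re

TO_BE_FORMS = {"am", "is", "are", "was", "were", "be", "being", "been"}

def flag_to_be(sentence):
    """Return the forms of 'to be' that appear in the sentence."""
    words = re.findall(r"[a-zA-Z']+", sentence.lower())
    return [word for word in words if word in TO_BE_FORMS]

if __name__ == "__main__":
    sample = "The experiment was conducted and the results were recorded."
    hits = flag_to_be(sample)
    print("Forms of 'to be' found:", hits)
    print("Try recasting the sentence in the active voice." if hits
          else "No forms of 'to be' here; nicely done.")
```

A real drill program would need to do much more, of course (score answers, track progress, supply the mind-reinforcing repetition), but even this crude check captures the spirit of the instructor-free exercise.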

So, maybe there really is value in using computers in education—though not always in ways that expand the horizons of learning. And thank you, Amy, for showing us that you can really do some amazing things with computers in (and out of) the classroom.

But for me, it still seems that computer visionaries—Ted Nelson included—have not yet created a technology that is designed well enough for the ultimate users. As I’ve noted elsewhere in the blogosphere, too often computer technologies are designed with the designers—and not the users—in mind.

To claim that the current use of computers is lib—as in liberating—is therefore a fib, as in a small or trivial lie. The potential for liberation and intellectual creativity is there, but for too many of us who are hampered by poorly designed computer hardware and software, the potential has not yet been achieved.

I ate lunch with Douglas Engelbart!

In 2001, I ate lunch with Doug Engelbart—the guy who invented the mouse. Too bad I didn’t know anything about him at the time.

I was attending the annual meeting of my professional association, the Society for the History of Technology, in Pasadena, California. Turned out that Mr. Engelbart was an invited guest speaker at the conference, and he planned to speak later in the day when I first encountered him. I spied this person whom I didn’t know (my society is small, and I know most of the folks who come to the meetings), and he was seated at a table by himself eating lunch. I felt bad that he didn’t have any company, so I sat across from him and started talking. He was quite friendly and modest, and though I asked him about himself, he didn’t tell me much. Only later did I realize that I had been talking to a legend in new media history.

Had I known more, perhaps I would have asked him whether he believed he had realized his goal of making life less complex through the use of computers. To be sure, he devised technologies and techniques that made it much easier for the average Joe (and even the unaverage Josephine who works professionally with computers) to interact with machines and to create new tools. But even in the world of high tech, have his tools made things less complex?

One can certainly point to examples in which his vision has come true, such as the devices we use to retrieve information and do repeated tasks. And of course, who would give up his or her word processor (with the graphical interface we now take for granted) for a typewriter?

But in some cases, the use of computers has made people think they can do complex things more easily when, in fact, they can’t. Consider technologies such as nuclear power plants, which are inherently complex and which sociologist Charles Perrow counts among several technologies that are essentially unknowable. Worse than that, they cannot be designed to avoid having accidents. In fact, they are destined to have what he calls “normal accidents.”

Normal accidents occur in systems in which “the parts are highly interactive, or ‘tightly coupled,’ and the interaction amplifies the effects in incomprehensive, unpredictable, unanticipated, and unpreventable ways.” (Charles Perrow, “Normal Accident at Three Mile Island,” Society 18, no. 5 (1981): 17-26; also see Charles Perrow, Normal Accidents: Living with High-Risk Technologies [New York: Basic Books, 1984].) He argues that no human or computer can anticipate all the interactions that can possibly occur in such a system, leading to inevitable accidents. Many of these accidents have already occurred, some with tragic consequences: at the Three Mile Island nuclear power plant in 1979, at the Bhopal chemical plant in 1984, in electric power systems (which collapsed in parts of the US in 1965, 1971, and 2003), and so on.

While one can quibble with some of Perrow’s arguments, he suggests persuasively (in my mind, at least) that, no matter how hard we try, humans are unlikely to understand the consequences of every interaction among a large number of components in a system. Even the fanciest computer needs to be programmed by a human being, and that human can’t imagine every way in which a physical system’s components may intensify a mistake or defeat the best efforts of a human operator.

So, Doug, I wish I had known then what I know now about your work, so we could have had a more engaging talk 13 years ago. I take the blame for my ignorance. Sorry about that. Let’s hope, though, that the next time I eat lunch with Charles Perrow, I’ll be able to ask him whether he thinks your work has made technology less complex and less prone to screw up.

Cybernetics, symbiosis, and my messy room

When I was a teenager in the 1960s, I read a book on the novel topic of cybernetics.   Designed for young readers, the book foretold how computer-driven machines would allow us humans to do wonderful things in the near future.  I looked forward to being able to do a lot more creative things in much less time, and I had hoped that the cyber-controlled devices of the future would do all the menial work (such as cleaning my messy room!) for me.

Alas, the future has not yet arrived.

When reading the articles by Wiener and Licklider (written in 1954 and 1960, respectively), I remain amazed that they anticipated how computers would soon hold vast amounts of information and be able to recognize speech. And I appreciate Wiener’s concern that humans should be careful as they use these new machines and not idolize them. (Apple product fans: pay attention to this guy!)

Even so, while I realize that the new cyber-machines allow me to do things that once took me a great amount of time (just think of all those trips to the library to find a single, and ultimately useless, article!), I’m not sure that I’ve become much more creative as a thinker or educator. I can obtain information more easily, but have the machines helped me transform it into knowledge, insight, or wisdom?

Maybe the cyber-machines would be more useful if the institutions in which I work allowed me more time to use the devices creatively. At my university, for example, the machines make it possible for administrators to push down to us faculty members tasks that used to be performed by others. In the olden days, I recall handing off my final grades to a secretary, who passed the data to others, who entered them into the massive mainframe computer. Now I enter the info into a PC myself. And when I write grant proposals with colleagues, I am expected to find detailed information that previously was obtained by a lower-paid staffer.

Likewise, I waste hours of my life trying to make sure my cyber-machines have the latest software and are free of viruses. And how often have I spent hours trying to fix a computer glitch only to wish that I had left everything the way it was, seeing that the “fix” was worse than the original problem?

Maybe one big difference between the average user and folks like Wiener and Licklider is that they more intimately understood the machines with which they worked. Wiener wrote, “If we want to live with the machine, we must understand the machine.” But who understands the cyber-devices that we increasingly depend on? Hardly anyone, it seems. Consequently, when something goes wrong with a cyber-machine today, most of us need to find someone else to help us fix it, thus eating up more time that could have been spent being creative. (On the other hand, look at all the jobs that have been generated to serve us idiots!)

Don’t get me wrong. I’d rather fight than give back my iPad or iPhone. Still, the wonderful symbiotic relationship between humans and machines that Wiener and Licklider envisioned has not yet been realized, in my view. And the failure to realize that utopian symbiosis is not the result of inadequate hardware development. Rather, it derives from the expectations people have created about how we should use the time freed up by not having to spend so many hours plotting graphs by hand.

As is typical of interactions between people and machines throughout history, new technologies emerge and evolve within a social context. Despite the visions of a life free from drudgery (replaced by a life of leisure, contemplation, and creativity) that many of us came to expect when we read about cybernetics four decades ago, we still spend too much time using the new machines to do menial stuff we don’t really want to do.

And worse than that, my room is still a mess!