I never got into computer games. What’s wrong with me?

I remember going with friends to a bar sometime in the 1970s in which the game “Pong” became the centerpiece of activities.  My friends and I competed to see who could get the highest score.  I found the game fun, but I didn’t want to throw away all those quarters just to see if I could do better than my buddies.  Some of those guys wasted a huge amount of money on an addictive activity, I thought.

As Sherry Turkle tells us, however, maybe my friends’ activity did not constitute addiction.  Nor was it simply a form of hypnotic fascination.  Rather, these people allowed themselves—or unconsciously found it attractive—to enter the unusual culture of the computer.  In this culture, they appreciated a world in which physical boundaries (and the laws of nature in the real world) did not exist.  Balls (or bullets, in more sophisticated games that emerged later) did not necessarily follow the rules that Galileo had deduced for mechanical bodies in motion.  Nor did life itself (or its simulation on screen) have much meaning, since the machines gave the gamesters more than one life, should they fail to achieve a level of competency and get killed off.

Perhaps more important, computer gamers often find that the activity gives them a sense of control and security that they could not achieve in the real world.  They have learned to “break” the computer code and understand the rules (liminal and subliminal) established by the games’ coders, and they obtain psychic pleasure from doing so.  (They also obtain physiological pleasure, as their heart rates and blood pressures zoom upward.)  Even when playing new games, they realize they have developed an insight into the way the games work, and they can do better than others in ways that yield an unusually profound sense of satisfaction.  It’s a sense of satisfaction that differs from what traditional, mechanical games, such as pinball, can provide.  And certainly, that control and success contrast markedly with what they experienced in their mundane, real-world lives.

Sometimes, as Turkle points out, the games’ power comes from the process by which the player gets involved in the games themselves.  She or he eventually becomes so immersed in the experience of the game that it takes over the person’s psyche.  The process of immersing oneself in the game—entering a different world and state of mind—becomes much more important than the goal of reaching a specific endpoint (such as a high score).

Can such an immersion occur outside of computer games?  Yes, she says.  In fact, Turkle compares this process of immersion to one in which a type of dieter seeks to lose a little more weight each week, even though she or he may have already reached the medically desirable goal.  At some point, the dieter becomes more fixated on the game of dieting and gaining points (while losing kilograms) than on actually achieving a healthy weight.  Perhaps this immersion helps explain a genre of psychologically based eating disorders.

All this brings me back to my original observation that, unlike so many of my contemporaries (i.e., old people) and our kids, I never became enamored of computer games.  Why not?  What’s wrong with me?  Am I simply so well-adjusted that I didn’t feel a need to obtain control of my world by entering another one?

Maybe I am incredibly well adjusted (yeah, right!).  Or maybe I simply feel comfortable enough in my real world that I don’t need to find other avenues for control, excitement, and satisfaction.  Or maybe I just decided, early in the computer era, that I didn’t want to spend any more time in front of a computer than I already did.

Of course, such a (condescending) explanation makes computer gamers seem like poorly adjusted and undisciplined souls.  I sincerely doubt that conclusion, so maybe there’s something else that I’m missing about the value of computer games—something that maybe even Turkle hadn’t identified in her 1984 book, The Second Self. 

Let’s talk about some of these other psychological rewards derived from computers (which are clearly more than simple tools) on Thursday afternoon.

 

McLuhan and Determinism: I really have little choice!

Marshall McLuhan’s media work has been criticized as being technologically determinist. In other words, he supposedly argues that new media technologies (such as the printing press, radio, and TV) effectively dictate the actions of people who use the technologies.  In a determinist approach, human choice is minimized or disappears altogether.

In the world of the Internet, smart phones, and social media, one might think that the new media dictate how people communicate with others and how they act. As an example, some people contend that Facebook not only made possible the communications interactions that spurred Arab Spring demonstrations in northern Africa; it actually determined the conditions in which the demonstrations would occur.  First step: Facebook; second consequential step: demonstrations.  Simple and obvious, right?

Historians of technology (and humanists in general) tend to argue that technological determinism doesn’t really exist. People use technologies—media technologies and others—and make conscious choices about their use.  As evidence, some commentators point to the fact that people tried to spur demonstrations using Facebook a year before such efforts proved successful.  Obviously, then, other conditions needed to exist before the Arab Spring inciters could achieve their goals.

As a human being (and humanist), I like to think that I have control over how I use technologies. Heck—I even teach this notion in my classes.  Humans are in charge—not machines!

Yet, even as I try to empower students (and disempower technology!), I sometimes wonder whether limits to my control exist. In the media world, for example, how much am I really in control?  Can I really decide to opt out of using certain new media technologies simply because I don’t like them?  Would I be able to retain my job (as a college professor) if I took steps to disengage myself from the technologies?

Sure, I could decide to avoid email, but given that everyone else uses it—even my employer does, by emailing me vital information about taxes, health-insurance plans, etc.—can I really opt out? With students using all sorts of electronic media to communicate with each other, can I really expect them to try to communicate with me without using email?  Do I really want them to phone me or stop by my office at odd times, which they would do even less frequently since they’ve become accustomed to emailing their other profs?  By imposing requirements on them that make their lives so much more inconvenient, I would be hindering their ability to learn and to be inquisitive at a time when my job is partly to make them more informed and more critical.

In other words, while I may feel that I don’t want to use new media technologies (so that, for example, I don’t feel obliged to respond to a student’s query at 10:30 PM, near my bedtime), opting out comes at the peril of forcing others to alter their own behaviors. And many of us don’t want to impose our values and choices on others.  In such a way, then, we conform to the standards of the time, which in this case means learning how to use these technologies even if we are not comfortable doing so.

One could argue that this situation does not reflect technological determinism, since it’s other people and society in general (and my desire to be part of society) that dictate how I use technology. In other words, maybe it’s social determinism rather than technological determinism that’s at work here.  Media technologies may not dictate how I act, but because they have become so widely used, I cannot simply choose not to use them and still remain connected to modern society.

So, while the media may not exactly be the message, the popularity of those media may make it difficult for me to live my life in the absence of the message. We are too interconnected to opt out of what everyone else is doing; if we truly want to avoid using the technologies, we become disengaged from modern life.  We may have a choice, but the consequences of the choice are so extreme that it’s not really a choice after all.

Hence, maybe the existence and widespread use of the media devices make it appear that determinism (technological or social) remains alive and well.

Thanks a lot, Mr. McLuhan!

Computer Lib or Computer Fib?

We have been reading about visionaries who have (correctly) forecast some amazing things that computers can do. Most recently, we read how Ted Nelson described (in the 1970s) how computer images on screens could be resized, how people would use their fingers instead of styli or other pointers, and how these technologies would make it possible to greatly expand teaching and learning and to create novel ways of doing both.

I’m not sure I buy the hype.

To be sure, I enjoy using my iPad and iPhone, and indeed, they have enabled me to do things that I couldn’t do previously. (My favorite use of the iPad is to read the newspaper while exercising in the morning—even before the printed newspaper arrives on my driveway.)  And I truly enjoy being able to use my computing devices to retrieve information easily.  (I no longer need to walk to the library and spend hours locating print or microfilm journals every time I want to get an article.)

In other words, I have clearly benefited from some of the more prosaic uses of computers that many of the early pioneers imagined—namely the ability to obtain, store, and manipulate vast amounts of information using these new tools.

But have I used these new tools to enhance my teaching? I certainly have cut down on the amount of paper I distribute to students, as I put my syllabi and teaching and reading materials online.  That’s nice.  And I communicate with students by email in ways that help me ensure that (at least theoretically) everyone knows about upcoming assignments and other events.

But I don’t think that many of my colleagues (Professor Amy Nelson at VT serving as an obvious exception) have yet taken the leap to use the new technologies in ways that truly exploit their potential. And the poor record of adoption exists, perhaps, because the technologies still remain so difficult to use.  To develop her incredibly innovative new-media Russian history class, Dr. Nelson obtained a grant that bought out some of her teaching time and provided funds to purchase specialized software.  Without these extra resources, would she have been able (or willing) to experiment with the new technologies?  I doubt it.

And in many cases, some of us are not trying to get students to be truly creative and innovative thinkers. Rather, we’re trying to get them to develop the elementary skills they need as a prerequisite to becoming creative and innovative.  By this, I mean that we sometimes have to teach students the basics—not the creative stuff—such as how to write simple and grammatically correct sentences.  We may have hoped that such learning had already been accomplished in the elementary and secondary public schools.  But too often, students come to college without these skills, and we spend an inordinate amount of time re-teaching them.

Perhaps in this realm—of teaching basic skills—we can envision and profitably use computers, though not in terribly creative ways. Without much assistance from an instructor, computers can run repetitive (and mind-numbing, but mind-reinforcing) exercises with students so they learn, once and for all, the various forms of the verb “to be,” for example.  By doing so, students can avoid using the verb excessively and happily escape writing in the passive voice—a real no-no in the field of history.
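To make concrete the kind of drill I have in mind (this is my own rough sketch, not something drawn from any existing courseware or from Dr. Nelson’s class), a program need only scan a student’s sentence for forms of “to be” and nudge the writer toward a revision:

    # A minimal, hypothetical drill: flag forms of "to be" in a sentence
    # so a student can practice rewriting it.  Illustrative sketch only.
    TO_BE_FORMS = {"am", "is", "are", "was", "were", "be", "been", "being"}

    def flag_to_be(sentence):
        """Return the forms of 'to be' that appear in the sentence."""
        words = [w.strip(".,;:!?").lower() for w in sentence.split()]
        return [w for w in words if w in TO_BE_FORMS]

    if __name__ == "__main__":
        example = "The treaty was signed by the delegates."
        hits = flag_to_be(example)
        if hits:
            print("Found " + ", ".join(hits) + " -- try rewriting without it.")
        else:
            print("No forms of 'to be' here.  Nice work!")

A real drill would have to handle contractions and genuine passive constructions, but even something this crude conveys the idea.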

So, maybe there really is value in using computers in education—though not always in ways that expand the horizons of learning. And thank you, Amy, for showing us that you can really do some amazing things with computers in (and out of) the classroom.

But for me, it still seems that computer visionaries—Ted Nelson included—have not yet created a technology that is designed well enough for the ultimate users. As I’ve noted elsewhere in the blogosphere, too often computer technologies are designed with the designers—and not the users—in mind.

To claim that the current use of computers is lib—as in liberating—is therefore a fib, as in a small or trivial lie. The potential for liberation and intellectual creativity is there, but for too many of us who are hampered by poorly designed computer hardware and software, the potential has not yet been achieved.

I ate lunch with Douglas Engelbart!

In 2001, I ate lunch with Doug Engelbart—the guy who invented the mouse. Too bad I didn’t know anything about him at the time.

I was attending the annual meeting of my professional association, the Society for the History of Technology, in Pasadena, California. It turned out that Mr. Engelbart was an invited guest speaker at the conference, and he planned to speak later in the day when I first encountered him.  I spied this person whom I didn’t know (my society is small, and I know most of the folks who come to the meetings), and he was seated at a table by himself eating lunch.  I felt bad that he didn’t have any company, so I sat across from him and started talking.  He was quite friendly and modest, and although I asked him about himself, he didn’t tell me much.  Only later did I realize that I had been talking to a legend in new media history.

Had I known more, perhaps I would have asked him whether he believed he had realized his goal of making life less complex through the use of computers. To be sure, he devised technologies and techniques that made it much easier for the average Joe (and even the unaverage Josephine who works professionally with computers) to interact with machines and to create new tools.  But even in the world of high tech, have his tools made things less complex?

One can certainly point to examples in which his vision has come true, such as the devices we use to retrieve information and do repeated tasks. And of course, who would give up his or her word processor (with the graphical interface we now take for granted) for a typewriter?

But in some cases, the use of computers has made people think they can do complex things more easily when, in fact, they can’t. Consider technologies such as nuclear power plants, which are inherently complex and which sociologist Charles Perrow counts among several technologies that are essentially unknowable.  Worse than that, they cannot be designed to avoid having accidents.  In fact, they are destined to have what he calls “normal accidents.”

Normal accidents occur in systems in which “the parts are highly interactive, or ‘tightly coupled,’ and the interaction amplifies the effects in incomprehensive, unpredictable, unanticipated, and unpreventable ways.” (Charles Perrow, “Normal Accident at Three Mile Island,” Society 18, no. 5 (1981): 17-26; also see Charles Perrow, Normal Accidents: Living with High-Risk Technologies [New York: Basic Books, 1984].)  He argues that no human or computer can anticipate all the interactions that can possibly occur in such a system, leading to inevitable accidents.  Many such accidents have already occurred, some with tragic consequences, such as at the Three Mile Island nuclear power plant in 1979, at the Bhopal pesticide plant in 1984, in electric power systems (which collapsed in parts of the US in 1965, 1971, and 2003), and so on.

While one can quibble with some of Perrow’s arguments, he suggests persuasively (in my mind, at least) that no matter how one may try, it’s not likely that humans can understand the consequences of every interaction of a large number of components in a system. Even the fanciest computer needs to be programmed by a human being, and that human can’t imagine every way in which a physical system’s components may intensify a mistake or defeat the best efforts of a human operator.

So, Doug, I wish I had known then what I know now about your work, so we could have had a more engaging talk 13 years ago. I take the blame for my ignorance.  Sorry about that.  Let’s hope, though, that the next time I eat lunch with Charles Perrow, I’ll be able to ask him whether he thinks your work has made technology less complex and less prone to screw up.

Cybernetics, symbiosis, and my messy room

When I was a teenager in the 1960s, I read a book on the novel topic of cybernetics.   Designed for young readers, the book foretold how computer-driven machines would allow us humans to do wonderful things in the near future.  I looked forward to being able to do a lot more creative things in much less time, and I had hoped that the cyber-controlled devices of the future would do all the menial work (such as cleaning my messy room!) for me.

Alas, the future has not yet arrived.

Reading the articles by Wiener and Licklider (written in 1954 and 1960, respectively), I remain amazed that they anticipated how computers would soon hold vast amounts of information and be able to recognize speech.  And I appreciate Wiener’s concern that humans should be careful as they use these new machines and not idolize them.  (Apple product fans–pay attention to this guy!)

Even so, while I realize that the new cyber-machines allow me to do things that once took me a great deal of time (just think of all those trips to the library to find a single, and ultimately useless, article!), I’m not sure that I’ve become much more creative as a thinker or educator.  I can obtain information more easily, but have the machines helped me transform it into knowledge, insight, or wisdom?

Maybe the cyber-machines would be more useful if the institutions in which I work allowed me more time to use the devices creatively.  At my university, for example, the machines make it possible for administrators to push down to us faculty members tasks that used to be performed by others.  In the olden days, I recall handing off my final grades to a secretary, who passed the data to others, who entered them into the massive mainframe computer.  Now I enter the info into a PC myself.  And when I write grant proposals with colleagues, I am expected to find detailed information that previously was obtained by a lower-paid staffer.

Likewise, I waste hours of my life trying to make sure my cyber-machines have the latest software and do not contain viruses.  And how often have I spent hours trying to fix a computer glitch, only to wish that I had left everything the way it was, seeing that the “fix” was worse than the original problem?

Maybe one big difference between the average user and folks like Wiener and Licklider is that they more intimately understood the machines with which they worked.  Wiener wrote that “If we want to live with the machine, we must understand the machine.”  But who today understands the cyber-devices that we increasingly depend on? Consequently, when something goes wrong in a cyber-machine, most of us need to find someone else to help us fix it, thus eating up more time that could have been spent being creative.  (On the other hand, look at all the jobs that have been generated to serve us idiots!)

Don’t get me wrong.  I’d rather fight than give back my iPad or iPhone.  Still, the wonderful symbiotic relationship between humans and machines that Wiener and Licklider envisioned has not yet been realized, in my view.  And the nonrealization of the utopian symbiosis is not the result of inadequate hardware development.  Rather, it derives from the expectations people have created about how we should use the time freed up by not having to spend so many hours plotting graphs by hand.

As is typical of interactions between people and machines throughout history, new technologies emerge and evolve within a social context.  Despite the images of a life free from drudgery (and filled instead with leisure, contemplation, and creativity) that many of us took away when we read about cybernetics four decades ago, we still spend too much time using the new machines to do menial stuff we don’t really want to do.

And worse than that, my room is still a mess!

V. Bush and the role of science in making new technologies

Vannevar Bush has rightly been credited with having marshaled American resources in science and technology to help create weapons (such as radar and the atomic bomb) that proved vital in World War II.  He translated those insights into his book, Science: The Endless Frontier, and fostered thinking in the policy realm that was simple and direct.  It was also wrong.

His book argues that the government should continue funding research efforts in science, since new knowledge of natural phenomena will naturally lead to the development of new technologies.  He created an assembly-line model of technological innovation: science comes in one door of the factory and leaves through another as a spanking-new technology.  The whole idea just made so much sense that it had to be true.

Several studies after World War II, some done by the National Science Foundation (which Bush helped create), have shown that science is but one input into the creation of new technology.  And in many cases, it isn’t even the most important one.

The development of the steam engine perhaps serves as the best counterexample to his model.  In 1712, Thomas Newcomen created the first commercially successful steam engine, and James Watt improved upon his design in 1769.  Obviously, these guys must have known a lot about thermodynamics, the science of heat and energy, just like they teach us in elementary physics textbooks today.  (I paid my way through grad school as a physics teaching assistant, so I’ve read my share of these textbooks.)  But wait a second: the development of “modern” theories of heat and energy came only with the work of Sadi Carnot in 1824, more than a century after Newcomen had started selling his machines.  Moreover, Carnot wrote that he was motivated in this work because those crafty Brits had developed such a competitive advantage over the French in engine development.  He wanted to find a way to catch up to the British engine designers by using some theory, which he then set about establishing.  In other words, the existence of a technology spurred the development of a field of science–not the other way around.
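For the record (and as a standard textbook statement, in my own notation rather than anything lifted from Carnot’s memoir), the eventual payoff of the theory Carnot began was the familiar efficiency limit for any engine running between a hot reservoir at absolute temperature $T_h$ and a cold one at $T_c$:

\[
\eta_{\max} = 1 - \frac{T_c}{T_h}
\]

Newcomen and Watt built and sold their engines without any such bound to guide them, which is rather the point.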

To be sure, some people have been motivated by science.  Rudolf Diesel actually sought to make a super-efficient internal combustion engine using the principles that Carnot devised.  But he quickly failed in this effort, though he ended up creating the machine that bears his name, which nevertheless proved a huge commercial success.

And from which scientific principle did the microwave oven come?  Which scientist developed it from scratch using sound scientific logic?  No one!  A self-educated engineer, Percy Spencer, was apparently working with a radar unit in 1945 when he noticed that a candy bar in his pocket had melted.  He ultimately traced the cause of the heating to the microwave beam in the radar equipment.  Eureka!  The microwave oven was born (even though the first ones were huge).

In other words, one can point to elements such as economics, markets, and plain luck as important inputs into modern technologies.  If you think of medicine as a form of technology, just consider how many new drugs have been created because someone accidentally noticed a connection between the ingestion of some natural substance and a positive medical outcome.  (For example, some tree bark relieves pain and headaches.  Other bark-derived substances help treat cancers.)  Drug companies today systematically test the pharmacological effects of random substances to discover marketable compounds, since the scientific understanding of many biological pathways simply doesn’t exist.

I’m not trying to detract from Bush’s ability to think creatively about how scientists could remake the modern world after having won World War II (or having nearly won it, when his article “As We May Think” was published in July 1945).  He had great foresight and thought imaginatively about new things people could one day do with hardware that let them store information and retrieve it rapidly.  But he often gave too much credit to the scientists–those logical thinkers who knew how to go from point A to point B.  New technologies may sometimes benefit from such thinking, but good luck and nonscientists’ perceptive behavior often work just as well in spurring them.

 

First experience with blogging and NLI class

I like to think of myself as pretty tech savvy.  However, I am increasingly finding that people who set up websites or who write instructions for doing so expect users to know as much about the hardware and software as they do.  As I created my new (and first-ever) blogging account, I was asked questions about things I knew nothing about (such as levels of privacy).  Such sites need to provide explanations about the different choices and their consequences.  (I was told, for example, to choose my blog name carefully, since I could never change it!  But where does that blog name get used?  Why is it so important that I choose it carefully?)  I’ve never blogged before, but it doesn’t appear that the designers of the site imagined that such a neophyte could exist.

As I’ve encountered several poorly created sites and services on the Web (and not just at VT), I’ve become concerned about the philosophy apparently held by many tech people: that we users need to adapt to technology rather than having technology designed so it suits the needs of users.

I have no idea whether this post will show up where it’s supposed to (on the NLI site).  We shall see.