Stretchtext: Ted Nelson’s Totally Cool Idea

The excerpt for this week from Ted Nelson’s Computer Lib/Dream Machines, first published in 1974, was in many ways a breath of fresh air. While the passages from Bush, Engelbart, Licklider, and Wiener were all interesting and enjoyable in their own right, they lacked the decidedly playful spirit and joyful ebullience of Ted Nelson’s writing. I knew immediately from a quick glance at the cover that this was a totally different time and this was a very different guy. He was clearly not an engineer, scientist, or mathematician, like our previous authors, but someone who was broadly educated and read, had a wonderful sense of humor, and was not just celebrating the potential of the machine but also critiquing it and the larger society in which it was embedded. For example, his gripes with our educational system in Computer Lib seemed spot on, perhaps even more so today, with increasing levels of assessment driving nearly every moment in the K-12 classroom, slowly strangling every ounce of curiosity in our children and treating them like empty pitchers into which knowledge can be poured (and then regurgitated back out, on command). Nelson clearly inhaled deeply the anti-establishment haze that pervaded American society in the rambunctious, rebellious ’60s and ’70s. And given the appearance of the book’s pages, with their drawings and handwritten text, I was not too surprised to learn that he was associated with Stewart Brand and the Whole Earth Catalog, one of the quintessential counter-cultural documents of this period (and incidentally, one I spent quite a bit of time perusing when I was a young teenager, as much for its frank, hippy-dippy discussions of sexuality as for its promotion of creative lifestyles, an environmental ethos, and alternative technologies).

That is not to say, however, that Nelson did not have a mean geek streak as well. While he was not a scientist or engineer by training, Dream Machines is populated with a variety of creative new ideas for using the untethered computer workstations that were just coming on the market at the time. Nelson’s keen sense of the importance of user-friendly hardware and software (though I don’t think he used either of those terms himself) pervades the book. And some of the particular “dreams” he presents are awesome. Take Stretchtext, for example. This is a form of imagined hypertext that would allow the user to access condensed or progressively expanded versions of a given text, using a throttle to make it longer or shorter “on demand.” So, a reader could use a condensed version of the text to get the gist of a given paragraph, page, chapter, or book, and then expand it as needed whenever he or she wanted more detail. How cool is that?
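Just to make the idea concrete: the expand-and-condense behavior Nelson imagined can be sketched in a few lines of Python. This is purely a toy model under my own assumptions, not anything from Nelson’s actual design — each passage holds several versions at increasing levels of detail, and the reader’s “throttle” picks which one is shown.

```python
# A toy model of Stretchtext: each passage stores several versions at
# increasing levels of detail; a "throttle" in [0.0, 1.0] selects one.
# (The class and method names here are illustrative, not Nelson's.)

class StretchPassage:
    def __init__(self, versions):
        # versions[0] is the most condensed; versions[-1] is the fullest
        self.versions = versions

    def render(self, throttle):
        # Map the throttle position onto a detail level
        level = round(throttle * (len(self.versions) - 1))
        return self.versions[level]

passage = StretchPassage([
    "Nelson proposed stretchtext.",
    "Nelson proposed stretchtext, text that expands on demand.",
    "In Dream Machines, Nelson proposed stretchtext: hypertext whose "
    "passages expand or condense as the reader moves a throttle.",
])

print(passage.render(0.0))  # the gist
print(passage.render(1.0))  # full detail
```

A real implementation would of course need the versions to flow into one another smoothly as the throttle moves, which is the genuinely hard (and still mostly unrealized) part of the dream.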

Clenched Fist

Ted Nelson’s Computer Lib / Dream Machines reading is among my favorites in the New Media Seminar. As Claire notes, even for the connoisseur nugget-searcher, this selection, and especially the “Dream Machines” section, abounds in provocative, compelling morsels. I’m going to just note one for now: “I believe computer screens can make people happier, smarter, […]

2013: Are we there yet?

This week, for the New Media Seminar, we read Ted Nelson’s 1965 Complex information processing: a file structure for the complex, the changing and the indeterminate (sorry the latter link is behind a paywall). I don’t have much time to write on it, but the main thing that struck me is how far we’ve come. In this piece, Nelson outlined how to think about structuring files to be useful to people in creative and other pursuits. He built on Bush’s idea and in some ways was more specific about how to accomplish the feat.

We’re certainly not there yet. I still battle a lot of the organizational issues in my own files that Nelson described. And to some extent, having both paper and digital makes it much messier than just one or the other. But we’re closer. Here are some tools I couldn’t help thinking about as I read his piece, with associated quotes:

“As long as people think that [computers are useful only for scientific and corporate tasks], machines will be brutes and not friends, bureaucrats and not helpmates. But since (as I will indicate) computers could do the dirty work of personal file and text handling, and do it with richness and subtlety beyond anything we know, there ought to be a sense of need.”

Downright Jobsian, no?

“If a writer is really to be helped by an automated system, it ought to do more than retype and transpose: it should stand by him during the early periods of muddled confusion, when his ideas are all scraps, fragments, phrases, and contradictory overall designs. And it must help him through to the final draft with every feasible mechanical aid–making the fragments easy to find, and making easier the tentative sequencing and juxtaposing and comparing.”

Which, to me, is clearly Scrivener. I love Scrivener, and have used it every time I have to write a book-length manuscript. It’s not great for collaboration, but it’s exactly what Nelson describes for one’s own work.

“Consequently the system must be able to hold several–in fact, many–different versions of the same sets of materials.”

Both Dropbox and Google Drive offer this functionality. Thank goodness.
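The core of what Nelson asks for here is surprisingly simple to state in code: never overwrite, always append. Here is a minimal in-memory sketch of that idea in Python — my own illustrative toy, not how Dropbox or Google Drive actually implement version history:

```python
# A minimal sketch of Nelson's requirement: the system holds many
# versions of the same materials. Every save keeps the old copy
# instead of overwriting it. (Toy model; names are illustrative.)

from datetime import datetime, timezone

class VersionedFile:
    def __init__(self):
        self._versions = []  # list of (timestamp, content), oldest first

    def save(self, content):
        self._versions.append((datetime.now(timezone.utc), content))

    def current(self):
        return self._versions[-1][1]

    def history(self):
        # Every version of the same set of materials, oldest first
        return [content for _, content in self._versions]

    def restore(self, index):
        # Restoring an old version is itself recorded as a new version,
        # so the history never loses information
        self.save(self._versions[index][1])

doc = VersionedFile()
doc.save("draft 1")
doc.save("draft 2, revised")
doc.restore(0)           # bring back the first draft
print(doc.current())     # "draft 1"
print(len(doc.history()))  # 3 — nothing was ever thrown away
```

Notice that even the act of restoring adds a version rather than rewinding — which is exactly the append-only spirit of Nelson’s “undo deletions” concern later in the article.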

“Remember there is no correct way to use this system.”

Throughout the whole article I kept thinking of Evernote. I am a heavy user of Evernote, and it supports exactly this collecting of snippets and connections. It holds images, audio, text. I keep hoping for video. But the main challenge for most people with Evernote is the lack of clarity about how to use it. It’s so big and open-ended that it’s hard to know how you can best make use of it. The ELF system in this article is similar.

“Note that in such uses it is the man’s job to draw the connections, not the machine’s.”

And this is where it all changed for me. In our systems, the machine does a lot of the work. Just consider Google Now for example. It’s amazing: presenting just the right information at just the right time. But also creepy.

The article was full of statements that resonated with today’s experience: the need to be able to undo deletions, the lack of importance for seeing file structure in some cases, etc. I’m looking forward to the seminar discussion today!

Dashboard knives and Procrustean beds

To create my blog entries my typical MO to date has been to cull phrases I found provoking, intriguing, or delicious from the reading into a post draft and craft a narrative based on the results of my truffle-hunting. That method was telling this week because I wanted to pull vast swaths of the excerpt from Computer Lib/Dream Machines by Ted Nelson into the new post window. And that was the case because I overwhelmingly agreed with him and found his writing style delightful. Many of his points were consistent with my previous assertions here — especially that we in the laity need to practice mindfulness, advocacy, and forward-thinking. While I collected a plethora of gems from his writing, I’ll share only two that have particularly stayed with me since I began composing this post. First:

To me this seems like a beautiful example of what happens when you let insulated technical people design the system for you: a “kill” button on the keyboard is about as intelligent as installing knives on the dashboard of a car, pointing at the passenger.

What an image. The point here being that systems should be designed for the people who use them rather than the convenience of the developers who program them (see, also, his wonderful discussion of the Procrustean bed). And then:

I think that when the real media of the future arrive, the smallest child will know it right away (and perhaps first)…when you can’t tear a teeny kid away from the computer screen, we’ll have gotten there.

So: kids and iPads or other gaming systems. Does that mean we’ve “gotten there”? I think I’d argue that while I found Nelson prescient and spot-on about most things I’m not totally sure about this point (see my last post, for example). Just because something is immersive doesn’t exactly mean we’ve gotten what we “want”, does it? Or am I misinterpreting his point and we ARE there, but we’re not using these media effectively yet? And, assuming we are “there” in some sense, is the actuality really desirable in the sense Nelson predicted? Hmmm.

Snail to Cat?

Yesterday in the seminar we kicked off our discussion of Douglas Engelbart’s work with a tribute video featuring interviews with Engelbart and footage from a conference commemorating the fortieth anniversary of the “Mother of All Demos” in 2008. Hearing Engelbart, who just passed away in July, talk about his life’s work and hopes for the […]

Intellectual limits

After reading Engelbart’s framework, I started wondering whether augmentation through new media/technology actually negates the need for people to pursue greater natural intelligence. A little Googling led me to a 2011 article in Scientific American, which claims that our brains have all but reached their evolutionary potential. What was particularly interesting was the ways that the author, Douglas Fox, determined we may continue to develop intellectually–if not biologically:

The human mind, however, may have better ways of expanding without the need for further biological evolution. After all, honeybees and other social insects do it: acting in concert with their hive sisters, they form a collective entity that is smarter than the sum of its parts. Through social interaction we, too, have learned to pool our intelligence with others.

And then there is technology. For millennia written language has enabled us to store information outside our body, beyond the capacity of our brain to memorize. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence beyond our body. In a sense, it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts.

This focus on “collective human intelligence” is particularly interesting in light of Engelbart’s focus on the individual.

Organizing Information: Arranging our World

Reading from Engelbart’s Augmenting Human Intellect for this week’s seminar, I’m reflecting on my own research group’s current thinking. Consider the following summary of a grant we were just awarded (before the shutdown!) from NSF:

“In a short period of time, computerization has moved from providing a counterpoint to life, with the potential to highlight and shade experience, to constituting a constant force, almost defining our experience of life. A core part of human intelligence lies in how we arrange our world. If computer systems are central in our interactions with other people and institutions, those systems must: (1) allow us to arrange them so that we are more likely to act as the selves we wish we were; (2) help us understand whether people and institutions are treating us as we ought to be treated; and (3) create and encourage reflective opportunity about these matters. Our larger goal is to pursue the reflective opportunity design space through creating designs that prioritize seams in interaction and allow people to nudge one another and themselves in particular directions.”

  • Deborah Tatar

While I think it was Engelbart’s intention to express only some examples of possibility, it seems that many have taken his examples as prescription for what should come. While certainly a system such as he suggests in the reading can be helpful, part of its utility was the way it was introduced to the user: with sharp contrast to previous ability emphasized by the juxtaposition of the user’s attempt at a task and his demonstration of the same task with the tool. If, however, such systems were to be ubiquitous, and an “augmented native” were to be observed, would they appreciate the power of the new system?

I don’t think it’s the case that new generations suffer through the toils of all their predecessors, but part of the power, and liberation, of this system was just that it is not the previous system. So I take issue with directions of research that might uncritically adopt these examples as specifications. Instead I am with my advisor (Deborah) in thinking that, contrary to the buzzwords and trends in our field of Human-Computer Interaction, we must design our applications specifically to have some seams. Seamful, rather than seamless, interactions “create and encourage reflective opportunity about” issues of ethics in our interactions with other people and institutions.

Bridging the gap between “training” aspirations and outcomes

Douglas Engelbart began his essay “Augmenting Human Intellect: A Conceptual Framework” (1962) with an explanation of what he meant by the first phrase:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. 

Engelbart’s operationalization of this phrase is interesting because the de facto assumption is that we all arrive on the planet capable of augmentation, and the degree of our subsequent augmentation is largely based upon training. This is made clear in Engelbart’s discussion of the H-LAM/T system, which stands for “Human using Language, Artifacts, Methodology, in which he is Trained”. The specifics of this system are less important for my purposes than Engelbart’s fundamental expectation that training is the mechanism by which we are able to bring all of our innate, evolved, or created augmentations into effective and efficient practice. Engelbart asserted in his report that, for example, one would have no expectation that a person who had never encountered the concept of a car would be able to effectively drive a vehicle and engage in the complicated negotiation of traffic, driving regulations, and so forth, but that it was perfectly possible to incrementally train this person to do so over time. Similarly, in the latter section of his report, Engelbart provides a hands-on visualization exercise from the viewpoint of the reader wherein the reader sits down with a researcher and is trained to think about the process of documentation and referencing differently in order to make the most effective use of a hypothetical computer.

This concept of training being the key to augmentation (regardless of medium) is a crucial point when we think about the incorporation of new media into education. We’re all fundamentally capable–we begin as beings wired for augmentation and grow that ability over time–yet this leads inevitably (for me, anyway) to the conclusion that certain approaches to training are going to be far more effective than others at unlocking our ability to augment ourselves. That is why I bring new media in the form of educational technologies into the mix. Unlike traditional human language, which we’re fairly well-wired to absorb like sponges at this point, the use of modern technological artifacts isn’t, per se, something that we evolved to be good at. Death by PowerPoint is a common complaint amongst students and working professionals alike. We have tools that we often use in an attempt to train, but we don’t always use them in a way that actually augments the intellect of others. If we are going to use educational technologies as training mechanisms to augment the intellect of others, it’s important that we do so effectively. We want our audience to walk away with comprehensive understanding rather than the superficial grasp of the documentation concept achieved by Engelbart’s reader before the researcher pushed the reader to really apply critical thinking to the task at hand (rather than going off of the standard operating model of the time period).

A great example of this conundrum was covered by The Atlantic in a piece by Phil Nichols called “Go Ahead, Mess With Texas Instruments: Why educational technologies should be more like graphing calculators and less like iPads. An Object Lesson”. I was intrigued by the assertion in the title that an antiquated educational technology could be better in the classroom than an iPad, of all things. After all, iPads can access and beautifully display an almost infinite trove of knowledge via applications. How could a TI-83 possibly compete with that?

The answer lies in the calculator’s ability to be programmed. The author’s assertion is that students are able to learn actively, incrementally, and independently on these calculators (thus augmenting their intelligence) because they can code their own programs. One might ask “what about coding applications?”, which we discussed in class last week as an example of how the average user can become involved in the evolution of technologies. But that’s not truly an egalitarian or accessible option, especially for the average classroom, because, as Nichols observes:

Where Texas Instruments graphing calculators include a programming framework accessible even to amateurs, writing code for an iPad is restricted to those who purchase an Apple developer account, create programs that align with Apple standards, and submit their finished products for Apple’s approval prior to distribution. As such, for the average student, imaginative activities on an iPad are always mediated by pre-existing apps and therefore, are limited to virtual worlds created by others, not by students themselves.

I think this is a fascinating point. While Nichols is primarily focused on the K-12 sector in this discussion, it clearly applies to higher education as well. iPads are fantastic for some training purposes, but I buy Nichols’ argument that they have pitfalls when it comes to developing engaged learners. I seem to keep harping on this in my entries, but I think this is just a twist on the idea of mindful engagement. We have found a host of ways in which to augment the intellect through training using educational technologies, but the question of whether we are really accomplishing our aims is one I will clearly be revisiting in different iterations throughout this course.

Understanding the machine

Last week, VCU’s New Media Faculty-Staff Development Seminar took up two related but also quite distinct essays: Norbert Wiener’s “Men, Machines, and the World About” and J.C.R. Licklider’s “Man-Computer Symbiosis.” Aside from the regrettable (but understandable) androcentric language, both essays are forward-looking, yet in different ways. Each of them understands that human history moves in the direction of greater complexity, especially in the accelerating streams of technological innovation and invention. (Wiener wrote a whole book on the subject of invention, one well worth reading, though it was not published until years after his death.) Both writers write about machines, systems, and human-machine interaction. Both writers emphasize that the computer is a new kind of machine. Wiener writes of a “logical machine” with feedback loops, and Licklider emphasizes the “routinizable, clerical” capabilities of the computer. Although neither one uses the magical phrase “universal machine” that Alan Turing uses, they both seem to understand that a difference in degree (speed, memory) can mean a difference in kind. Wiener also writes of “the machine whose taping [i.e., programming] is continually being modified by experience” and concludes that this kind of a machine “can, in some sense, learn.” Such machine learning, and research into its possibilities, is going on all around us today, and that pace too is accelerating. (Google Translate is but one example. Notice that it keeps getting better?)

Part of the experience computers learn from, of course, is our experience–that is, computers can be made and programmed so that they adapt to (learn from) our uses of them. It was hard to see this happening in the pre-Internet era. We could customize various things in DOS, and on the Macintosh, and on Windows (yes, even on Windows), but we didn’t have the feeling of the computer adapting to our uses. For that phenomenon to become truly visible, we needed the World Wide Web and cloud computing. (If you see an unidiomatic translation in Google Translate, click on the word, and Google Translate gives you the opportunity to teach it something.) The computer that learns from us most visibly is the computer formed of the decentralized, open, ubiquitous Internet, as that medium is harnessed by various entities. The most powerful application ever deployed on the Internet, the platform that enabled the macro-computer of the Internet to become visible and self-stimulating, is the World Wide Web.

Which leads me to my point, one already made more elegantly by Michael Wesch (see “The Machine is Us/ing Us“), Kevin Kelly, and Jon Udell, among many others. As we publish to the Web, purposefully and variously and creatively, we also make the Web. This is also true on the micro scale of personal computing, deeply considered, but we see the effects most powerfully at the macro scale of networked, interactive, personal computing enabled by the World Wide Web. The Web, freely given to the world by Tim Berners-Lee, is a metaplatform with the peculiar recursive phenomenon of unrolling before your eyes as you walk forward upon it. It is a world that appears in the very making–assuming, of course, that you are indeed a web maker and not simply a web user.

Wiener writes, “If we want to live with the machine, we must understand the machine, we must not worship the machine…. It is going to be a difficult time. If we can live through it and keep our heads, and if we are not annihilated by war itself and our other problems, there is a great chance of turning the machine to human advantage, but the machine itself has no particular favor for humanity.” If the machine is us, however, as Michael Wesch argues (and in the case of the machine of networked, interactive, personal computing on the World Wide Web, I agree), then Wiener’s statement reads like this:

If we want to live with ourselves, we must understand ourselves, we must not worship ourselves…. It is going to be a difficult time. If we can live through it and keep our heads, and if we are not annihilated by war itself and our other problems, there is a great chance of turning ourselves to human advantage, but we ourselves have no particular favor for humanity.

The idea of enlarging human capabilities should make us nervous, I suppose, but it’s a step forward to understand that that is what we’re thinking about, and that is what’s uniquely empowered and enlarged by interactive, networked, personal computing. From art to medicine to engineering to business and beyond, one capability we have and share, to an alarming and exhilarating extent, is a capability for enlarging our capabilities. Computers are an interesting manifestation of that capability, and a powerful means of using (exploiting, unleashing) that capability. As is education. (Schooling? Depends on the day and the school and the teacher.)

Once we understand that, deeply, we may come to Poincaré’s observation, quoted by Licklider: “The question is not, ‘What is the answer?’ The question is, ‘What is the question?’”

Licklider dreamed of using computers to help humans “through an intuitively guided trial-and-error procedure” to formulate better questions. I am hopeful that awakening our digital imaginations will lead us to formulate better questions about our species’ inquiring nature and our very quest for understanding itself.

design for adaptation

Do we design the new media or does it design us?  Perhaps regardless of the answer, how do we take advantage of this evolution to better suit our goals while we spend time on this planet?