“Plight” is an interesting word. We are in a plight, meaning we’re in a tangle, a mess, a terrible fix, with “fix” itself an ironic noun in this context. Yet we also plight our troth, meaning “pledge our truth.” Plight-as-peril and plight-as-pledge both come from an earlier word meaning “care” or “responsibility” or (my favorite from the Oxford English Dictionary) “to be in the habit of doing.” Along a different etymological path, we arrive at the word meaning to braid or weave together. The word “plait” is a variant that makes this meaning more explicit. It’s not too far into poet’s corner before weaving, promising, and care-as-a-plight become entangled, at least in my mind, and perhaps usefully so.
The first McLuhan reading in the New Media Faculty-Staff Development Seminar is from The Gutenberg Galaxy, specifically the chapter called “The Galaxy Reconfigured or the Plight of Mass Man in an Individualist Society.” I don’t know if McLuhan is punning here, but it’s not implausible that the man who coined the term “the global village” and paid special attention to the role of mediation in human affairs–mediation considered as extensions of humanity–might think not only about the plight we find ourselves in but also the plighting of troth we might explore or co-create or braid.
The trick (and McLuhan is nothing if not a trickster, as others have noted) is that the plighting cannot be straightforward or “lineal,” lest it not be a genuine pledge or an authentic weaving. His very writing is obviously a plight for many readers, but it’s also a brave (and sometimes wacky) attempt to do a plighting of the plaiting kind as a sort of pledge of responsibility. He writes these stirring words for our consideration:
For myth is the mode of simultaneous awareness of a complex group of causes and effects. In an age of fragmented lineal awareness, such as produced and was in turn greatly exaggerated by Gutenberg technology, mythological vision remains quite opaque. The Romantic poets fell far short of Blake’s mythical or simultaneous vision. They were faithful to Newton’s single vision and perfected the picturesque outer landscape as a means of isolating single states of the inner life.
From which I draw these conclusions regarding McLuhan’s argument (or plighting):
1. “Lineal” does not mean “synthesized” or “unified.” The straight path or bounded area leads only to fragmentation and reduction. It is not a weaving and cannot be. The lineal and the fragmented are perilously broken promises.
2. Mythological vision is a technology for enlarging awareness of complexity. Mythological vision is both plighted-woven and a means for plighting-weaving.
3. Fragmented, lineal awareness invents technologies of self-propagation that reinforce more lineality, more fragmentation, while giving the illusion of doing quite the opposite. Single-point perspective is not the same as a unifying vision or a simultaneous awareness of a complex group of causes and effects. It is, instead, reductive while pretending to be unified.
4. Even self-consciously or self-proclaimed liberatory movements such as Romantic poetry (or any number of other such apparently radical departures) may quail before the complexity and simply reinscribe a slightly shifted set of boundaries, thus perpetuating a reduction of complexity and a lack of awareness that dooms our technologies to reproducing our failures.
Last week in our New Media Faculty-Staff Development Seminar, Nathan Hall (University Libraries) and Janine Hiller (College of Business) teamed up to take us through the Alan Kay / Adele Goldberg essay “Personal Dynamic Media.” Janine and Nathan took an inspired approach to their task. Nathan’s a digital librarian, and he brought his training and interest in information science to bear on Kay and Goldberg’s ideas. Janine’s work is in business law, so intellectual property would have been a logical follow-on for discussion. But wily Nathan segued into wily Janine’s swerve in a direction that in retrospect makes perfect sense but at the time came with the force of a deep and pleasant surprise: the information science of metaphor.
As I look back on the session, I have to admire the very canny way in which the info science/metaphor combination acted out the very nature of metaphor itself: the comparison of two unlike objects. Having made the comparison, of course, one begins to see very interesting disjunctions and conjunctions. The mind begins to buzz. Wholly novel ideas emerge, such as the metamedium of the computer being like a pizza. Seriously.
Janine shared with us a lovely TED video on metaphor …
… and challenged us in small groups to come up with our own metaphors for computing as a metamedium (think of them as seminarian family-isms). We very quickly got to pizza in our group, courtesy of the talented Joycelyn Wilson. (Amy Nelson riffs on that metaphor in her own blog post.) Another group found itself circling back, recursively but sans recursing (dagnabbit), to the powerful and complex metaphor of the “dream machine.” (Go ahead and revive that metaphor by thinking about it again. And again. Stranger than one might suppose, eh?) (Oh, and to get another link in, I believe it was 21st-century studies lamplighter Bob Siegle who led us there.) In our closing moments, we began thinking about metaphor as a metaphor for computing, and computing as a metaphor for metaphor. I do believe Alan and Adele would have enjoyed the conversation.
At the end, Nathan sketched out a continuum between the procedural and the conceptual/metaphorical that he had found in “Personal Dynamic Media.” At one end was the filing cabinet (cf. Memex, cf. info science). At the other end was the flute (a metaphor that Janine beautifully led us to unpack in our discussion). And then, a few minutes after the seminar was over and I was walking to the car, a connection appeared for me.
There is indeed an apparent dichotomy between filing cabinets and flutes, between quotidian documents and art, between the minutiae of our task-filled lives and the glorious expressive possibilities of musical performance, especially with an instrument like the flute (I am a mediocre but enthusiastic flautist) that one plays in such intimate connection with one’s body and breath. It’s simple, direct, a column of air that resonates within the instrument as well as within the hollow, air-filled spaces within one’s own face and chest.
What could be more pedestrian, ugly, and (depending on the tasks) repellent than a filing cabinet? What could be more liberating and beautiful than a well-played flute?
How is a raven like a writing-desk? Alice asks in Alice In Wonderland. The question is never answered. (Brian Lamb once answered it–“Poe wrote on both”–but alas his ingenuity came many decades too late for poor Alice.)
How is a flute like a filing-cabinet? The question makes even less sense. At least, at first.
But considered within the world of Alan Kay’s aphorism that “the computer is an instrument whose music is ideas,” I find myself inspired to think that one may indeed make a flute of a filing-cabinet, awakening and ennobling the detritus of our dreary records and messy operational details with the quicksilver music and responsiveness of a well-played flute.
What if we could bring that vision into our lives? Our learning? Our schools? What if our filing cabinets were less like the warehouse in which the Ark of the Covenant is boxed and lost, and more like thought-vectors in concept space sounding something like the music of the spheres?
It may not be as hard as we think–unless we actually prefer meaninglessness and stasis to delight and melody.
As Hoagy Carmichael once wrote, “Sometimes I wonder.”
You’ve seen the ad copy. I have too. The hard sell for the soft, gentle learning curve promised for a new device is that the device is “intuitive.” That is, the device is easy to use because you can make the device do what you want because the interface design helpfully indicates how to operate the device. You want to save a file? Click on the icon. Of course, in MS Word (and MS Office generally) the icon is a floppy disk. One used to save files on floppy disks. They used to look like that, too–the 3.5 inch not-floppy diskette. Yes, this is getting complicated already. Let’s stop the cascade by admitting that “intuitive” means “familiar,” and that “familiar” itself is more of a moving target than we’d like to think. And there’s a Gordian knot for another time. (Recommended reading: “The Paradox of the Active User,” a major addition to my intellectual armamentarium courtesy of Ben Hanrahan, a wonderful student in last year’s “Cognition, Learning, and the Internet” course.)
So let’s move on. “Intuition” (home of the intuitive) can mean something much deeper than “I bet that’s how I can do that.” It can mean “I bet this device ought to be able to do that.” In “Personal Dynamic Media,” Alan Kay and Adele Goldberg tell the story of one such intuitionist:
One young girl, who had never programmed before, decided that a pointing device ought to let her draw on the [computer] screen.
This kind of intuition is a creative intuition that isn’t about “ease of use” or “I bet I already know how to do that.” It’s an educated guess, a contextual surmise, and a leap of faith. Note the fascinating language in this description. She decided (a moment of agency and commitment) that a pointing device ought to let her. This kind of intuition is something like the belief in “Mathgod” that Douglas Hofstadter describes so winsomely in Fluid Concepts and Creative Analogies. It’s also (no coincidence) what Jon Udell keeps talking about when he talks about how people “don’t have intuitions” about the World Wide Web. To connect Kay-Goldberg with Udell, to have intuitions about the Web would be to decide that the Web (and the Internet that supports it) ought to let one do this or that–meaning, “given what this system is and what it supports, this thing I imagine or invent should be possible.” Note that you have to know something about what kind of a thing, or network, or web you’re working with. Indeed. But note also that the paranoia, hebephrenia, or catatonia induced by the many double-binds that formal schooling presents to learners are responses that pretty much guarantee that such intuitions will simply not develop. Try to imagine an entering class vigorously discussing among themselves, “Given the mission statement of our university, this thing I imagine or would like to invent with regard to my own learning ought to be possible.” Feel your brain cramping in both hemispheres? Do students read mission statements? If they did, would they seek to shape their learning in terms of it? Do the structures we build to support what we say we intend, we value, we desire, actually stimulate any such activity? Exactly. Learners in formal schooling are not very likely, most of the time, to decide that school ought to let one do this or that related to learning. And if they try to make such a decision, based on such an intuition, they are often hammered back into line.
Not always, but often. And any such repression is too much.
But here’s the third level, and it comes next in “Personal Dynamic Media”:
She then built a sketching tool without ever seeing ours…. She constantly embellished it with new features including a menu for brushes selected by pointing. She later wrote a program for building tangram designs.
This level of intuition is the invitationist level. This intuition is an intuition not so much about the device per se but about the learning context, an ecosystem of device, peers, teachers, etc. Kay and Goldberg praise the young girl for building her own sketching tool “without ever seeing ours.” Another teacher might have said, “Did you do your homework? Did you consult the manual? Did you follow directions?” These are often important questions, but they miss the most powerful intuition engine of all: the invitation.
In “The Loss of the Creature,” an essay that articulates the paradox of the active learner with haunting precision, Walker Percy writes about the recovery of being, by which he means the recovery of the person as well as the recovery of the person’s experience. He believes both person and experience to be lost to “packages” which we simply “consume” with an ever-increasing anxiety that our consumption be certified as genuine by others. Worse yet, we become increasingly numb to our consumption, unaware that our souls are rotting from the inside out. As Kierkegaard observes and Percy reminds us, the worst despair is not even to know one is living without hope. No surface receiving our “cognition prints.” No mark of our learning or inquiry or existence left behind. We do not even think to ask.
Toward the end of the essay, Percy tells a story about two modes of experience, a story of music and being:
One remembers the scene in The Heart is a Lonely Hunter where the girl hides in the bushes to hear the Capehart in the big house play Beethoven. Perhaps she was the lucky one after all. Think of the unhappy souls inside, who see the record, worry about scratches, and most of all worry about whether they are getting it, whether they are bona fide music lovers. What is the best way to hear Beethoven: sitting in a proper silence around the Capehart or eavesdropping from an azalea bush?
However it may come about, we notice two traits of the second situation: (1) an openness of the thing before one–instead of being an exercise to be learned according to an approved mode, it is a garden of delight which beckons to one; (2) a sovereignty of the knower–instead of being a consumer of a prepared experience, I am a sovereign wayfarer, a wanderer in the neighborhood of being who stumbles into the garden.
A big house with a Capehart that looks like a casket ready for an embalmed Beethoven and his embalmed listeners. Or: a sovereign wayfarer in the neighborhood of being, and a garden of delight which beckons to one.
We need to work on our beckoning. Beckoning is what Bakhtin calls addressivity: the quality of turning to someone. From design to cohort to community and everywhere in between, especially in the schools that face our present times and equip us to invent our futures: how can we work on our invitations?
I have been following John Naughton ever since I found his book A Brief History of the Future in a secondhand bookstore in South Philadelphia in the fall of 2011. (My thanks to Kathy Propert for taking me there.) Naughton is Emeritus Professor of the Public Understanding of Technology at the Open University in the UK. He blogs at the aptly named Memex 1.1, serves as Vice-President of Wolfson College, Cambridge, and is an adjunct professor at University College Cork; his latest book is the extraordinary From Gutenberg to Zuckerberg: What You Really Need to Know About the Internet, and he’s a crackerjack journalist for The Observer. This morning, Naughton’s blog linked to his latest Observer column, “Kicking Away the Ladder,” which concerns, among many other things, the persistent, pernicious error of confusing the Internet with the World Wide Web. Naughton explores why that error matters, in fact why it may be a fatal error, one that could mean the end of the “open, permissive” infrastructure that has allowed these extraordinary telecommunications innovations we’ve witnessed over the last few decades to grow and flourish.
The essay is essential and sobering reading. Please go read it now (it’ll open in a new tab). I’ll be here when you get back.
Is Naughton overreacting? Not at all. The danger is clear and present. And he knows his history, so Naughton understands well what we have gained from the Internet and the World Wide Web. He knows how they were made, and what principles animated and informed their design. And he knows what we stand to lose in the face of strategies deployed by those who understand elementary facts about Internet and computing infrastructure, history, and design, facts that far too many people are too incurious even to inquire after. These are elementary facts. They are not difficult to understand. Their implications take a little more work to get your head around, yes, but it’s nothing that a basic program in digital citizenship couldn’t address successfully–assuming that program was about how to make open, permissive use of the open, permissive platform. That is, assuming digital citizenship is about the arts of freedom and not simply the duties and dull “vocations” of compliance and consumption.
I read parts of the essay aloud to my dearest friend and companion, the Roving Librarian, and she asked me a great question: “So, if you had to explain the difference between the Internet and the Web, how would you do it?” And as so often happens in the presence of a great and greatly foundational question asked in the spirit of mutual inquiry and respect and love, a cascade of thoughts was triggered. (Not a bad learning outcome, that.)
Here’s what I have so far. It’s coming out quickly and will need much development, but I need to write it down now. I welcome your comments and questions and elaborations and collegial friendly amendments. (No blame should attach to the Roving Librarian, by the way, for any mistakes I make. Lots of credit goes to her, though, for anything that’s worthwhile.)
The Internet is about data transmission. It’s a network that enables any node to transmit any kind of data to any other node, and any group of nodes (any network) to transmit any kind of data to any other group of nodes. It’s a network and a network-of-networks. It thus engages, stimulates, and empowers data exchange that’s one-to-one, one-to-many, many-to-many, and many-to-one. As Naughton points out in another essential essay, this structure permits unique and disruptive emergent phenomena, some of which will be disturbing and harmful, some of which will simply be puzzling or appear irrelevant (or be denounced as such), and some of which will be enormously beneficial. Naughton is not alone in his explorations. Clay Shirky indefatigably points out the enormous good that we can derive from the Internet. He points out the dangers, too, but when people call him names, they call him a “techno-utopian,” which as far as I can tell means he remains hopeful about our species’ powers of invention. Joi Ito, director of MIT’s Media Lab, emphasizes over and over again that the Internet is not so much a technology as the technological manifestation of a system of values and beliefs; not a technology, but a philosophy.
To summarize, then: the Internet permits open data transmission one-to-one, one-to-many, many-to-many, and many-to-one. Seems clear enough. And that Clay Shirky talk on social media and revolutions I linked to above makes my point very vividly and clearly. (In fact, I learned to explain things this way from Shirky: from his blog, from his two books Here Comes Everybody and Cognitive Surplus, and from other venues as well.)
So, then, how is the Web different from the Internet? Naughton says that it’s an application that runs on the Internet. The innovation Tim Berners-Lee brought into the world about a decade prior to the turn of the century could not have been imagined or built without the open, permissive foundation that the Internet was designed to be.
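Naughton’s layering point can be made concrete in a few lines. In this minimal sketch (a local socket pair stands in for the Internet, and the host and page names are invented), the transport simply moves opaque bytes; only the application layer, the “Web” part, decides that those bytes constitute an HTTP request:

```python
import socket

# A local socket pair stands in for the Internet's transport layer.
# The transport moves bytes; it neither knows nor cares that these
# particular bytes happen to be shaped like an HTTP request.
a, b = socket.socketpair()

request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
a.sendall(request)

raw = b.recv(4096)  # delivered as opaque bytes, nothing more

# Only the application layer assigns meaning: "this is a request
# for a page." That interpretive act is where the Web begins.
method, path, version = raw.split(b"\r\n")[0].split(b" ")

a.close()
b.close()
```

The design point is the one Naughton makes: because the lower layer is agnostic about payloads, an application like the Web (or email, or anything not yet invented) can be built on top of it without asking anyone’s permission.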
But then comes the logical next question: how then is the Web significantly different from the Internet, aside from providing a layer of eye candy that makes the Internet more appealing and the metaphor of a “page” that makes the Internet seem more familiar? Gregory Bateson says that a unit of information may be defined as a difference that makes a difference. So what difference does the difference of Web make?
If I can’t answer that question, then no explanation of the difference matters, because it fails the “so what?” test.
The link allows us (and once we’ve seen it happen, it invites and entices us) to construct a thought network out of (upon, within, on top of, emerging from) a data network. That’s all, and that’s everything. It is the essential move that turns sensation–a matter of data transmission along nerve fibers–into what, given enough interconnections and enough ideas about interconnections, becomes cognition, a level-crossing connectome out of which abstractions, concepts, and conceptual frameworks will emerge.
The Internet passes data agnostically (video, text, audio, whatnot) and the Web allows us to create conceptual structures out of data by means of simple, direct, open, thoughtful, permissive linking. The linking is idiosyncratic, like cognition, but like cognition, it is not merely idiosyncratic. The linking is never random–human beings can’t be random–though it may be surprising or the relation may be obscure (at first). Some sets of links are more powerful than others, but none is as powerful as the very idea of linking, just as the most powerful concept we have is the notion of concept, something I delight in exploring with students and colleagues when we get to Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.”
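The thought-network-upon-data-network idea can be sketched in a few lines of Python. The page names below are invented for illustration; the point is only that following links outward from any one page yields a neighborhood of connection that no single page declared on its own:

```python
# A toy "web": pages are the data; links are the idiosyncratic,
# human-made connections laid on top of that data.
links = {
    "mcluhan.html":   ["gutenberg.html", "myth.html"],
    "gutenberg.html": ["print.html"],
    "myth.html":      ["mcluhan.html"],  # links may loop back, like thought
    "print.html":     [],
}

def reachable(start):
    """Follow links outward from one page. The set that emerges is a
    conceptual neighborhood that no single page declared on its own."""
    seen, frontier = set(), [start]
    while frontier:
        page = frontier.pop()
        if page not in seen:
            seen.add(page)
            frontier.extend(links.get(page, []))
    return seen
```

Starting from "mcluhan.html", the traversal reaches all four pages; starting from "print.html", which links to nothing, it reaches only itself. The structure, the conceptual framework, lives in the links, not in any single document.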
The Internet transmits information. The Web enables (stimulates, encourages) a set of connections that, from the first link to the enormous set of links we now experience, symbolize ideas about relationship.
The Internet permits the pre-existing connectomes within each mind and among many minds working together to pass their nerve impulses freely along a meta-set of data connections, a network of networks, an Internetwork. The Internet is a protocol and a foundation for the data transmission that enables communication considered as information transmission.
But this is only the beginning, an open, permissive, and thus powerful light-speed beginning. The next advance occurs when information transmission can be made into a foundation for sharing not just perception but experience, for sharing not just neural connections but the experience of cognition that emerges within each mind. And that level of sharing means not just sharing information but empowering and stimulating new ways of creating and sharing meaningful structures of information. (More Engelbart here, obviously.) The link is not merely a link, but a concept that enacts itself–as concepts do when we build them, and build on them.
To sum up:
The Internet is like sensation or at most perception.
(A crucial first step, and we could have gotten a network that allowed us to look at only a few things in a few ways, a walled garden a la Facebook. Instead we got something open and permissive, like a neural network of small pieces loosely joined whose emergent power emerged from the possibility of connection, not from strict specialization or over-particular design. More like cells and atoms, in other words.)
The World Wide Web is like perception leading to thinking.
(It’s like making concepts. Here Vannevar Bush missed an opportunity that we’d need a Doug Engelbart to explore. What Bush described as “associative trails” are not a mere search history. They are links, yes, but links that reveal conceptual frameworks, that symbolize conceptual frameworks, that stimulate conceptual frameworks. They are not merely a scaffolding–though to be fair, Bush does describe the scaffolding in rich ways that probably do rise to the level of what I’m talking about here. The links are fundamentally social both in the intracranial sense–the connectome in my head–and the intercranial sense–built out of the social experiment we call civilization, and returning to it as another layer of invention and potential.)
The foundational commitment in both the Internet and the World Wide Web is the same: both are built as “open, permissive” structures (to use Naughton’s words). These structures are not unlike the distributed (neuroplastic) design of the brain itself, one that, as it happens, permits all the higher orders of cognition to emerge, higher orders built of “adjacent possibles” and “liquid networks” that in turn enable even higher orders of cognition to emerge. From this open, permissive, distributed structure emerges our distinctiveness as a species. And our links within the World Wide Web enact this emergence, represent this emergence, and thus stimulate further emergent phenomena as we create and share even more powerfully demonstrated ideas about shared cognition.
The Internet is like sensation. The World Wide Web is like thinking.
The Internet transmits data of all kinds: text, images, sounds, moving pictures, etc. The World Wide Web is a newly powerful word (or medium of symbolic representation, or language) that allows us to imagine and create newly powerful n-dimensional representations of the n-dimensional possibilities of “coining words” (making and realizing representations) together.
A foundational commitment to an open, permissive architecture of creation and sharing enables the next layer (species, experience) of complexity and wonder and curiosity to emerge. This open, permissive architecture enables both the cognoplasticity of individual minds and the shared thinking and building that enables the macro-cognoplasticity of civilization.
There’s a fractal self-similarity involved that makes it difficult to tell Internet from Web, just as it’s sometimes hard to tell where I end and where you, or my history, or my friends, or my reading, begin. (Bakhtin’s “Speech Genres” maps these complexities most wonderfully–definitely worth extending your cognoplasticity in that direction, dear reader, with Professor Martin Irvine’s fine guide as a beginning.) But the difference is there, and it is vital. I suspect the problem is that the difference is not well conceptualized because the conceptual framework rarely rises beyond, or in a different direction from, the technical distinctions. But then technical distinctions are rarely explored in ways that reveal the conceptual frameworks they represent and stimulate–hence Naughton’s frustration as well as the importance of his observations.
Now let’s connect these ideas to Bruner and his ideas in Toward a Theory of Instruction, ideas that influenced Alan Kay and other learning researchers who helped to envision and build the personal, interactive, networked computing environment we now live within with varying degrees of openness and permissiveness.
In Toward a Theory of Instruction, Bruner distinguishes three levels of communicating (and thus three paths to learning):
1. The enactive: we communicate by doing something physically representational in view of others. If we want water, we mime the action of drinking or lapping up water, and do so in the presence of others whom we believe might relieve our thirst.
2. The iconic: we communicate by pointing to something that materially represents at one remove, while still being physically connected to, the thing we mean or seek to draw attention to. Instead of miming the act of drinking, we might point to a cup or a water fountain, perhaps making a noise of some kind to indicate the degree of urgency we feel. This level is considerably more advanced than the first level because it entails a more sophisticated “theory of other minds,” a belief (supported by learning in a social context) that we can communicate shared experience directly through a shared locus of attention that does not directly connect to our physical bodies. To point to a cup indicates an experience of shared experience. To mime drinking almost gets there, but one might do this in one’s sleep as one dreams of drinking water. The enactive doesn’t necessarily indicate a theory of other minds–though miming drinking in the presence of someone whom one believes to be paying attention may approach the iconic and cross over to it, as when someone mimes drinking from a cup.
3. The conceptual (Bruner calls this “the symbolic,” but since it’s easy to confuse “symbol” and “icon,” I’ll use “conceptual” most of the time): we communicate by means of a set of shared concepts or abstractions. Here we don’t mime drinking, and we don’t point to a cup. We speak or write, “I am thirsty.” This is a wild and crazy thing, no? A set of squeaks and grunts. A set of ink marks (or pixel shadings). Words. Every one of those three words “I am thirsty” enacts, represents, creates, and communicates a state of enormous cognitive complexity that’s hidden from us because of our mastery. The familiarity cloaks the miracle. You can’t drink the word “water,” but behold, the word may bring you what you desire, or cause you to help another human being. (Obviously I’m thinking of Hofstadter here as well, and I can recommend Fluid Concepts and Creative Analogies (with profound thanks to Jon Udell), I Am A Strange Loop, and for a rapid overview, his talk at Stanford on “Analogy as the Core of Cognition.” It’s all “metaphors we live by.”)
I think most education in our schools pretends to get to the conceptual but in fact stops at the iconic or perhaps even the enactive level. Pointing pointing pointing. Proctoring proctoring proctoring, the student always in the instructor’s presence. See-do, see-do, with “critical thinking” at a level of “see-do in the sophisticated complex way I your teacher have already imagined for you, and pointed to for you, as my expertise permits me exhaustively to define excellence for your seeing and doing.” A closed and impermissive architecture mediated through language, but not really conceptual and sometimes hardly even iconic–because it doesn’t support or represent emergent phenomena, what Bruner calls problem-finding:
Children, of course, will try to solve problems if they recognize them as such. But they are not often either predisposed to or skillful in problem finding, in recognizing the hidden conjectural feature in tasks set them…. Children, like adults, need reassurance that it is all right to entertain and express highly subjective ideas, to treat a task as a problem where you invent an answer rather than finding one out there in the book or on the blackboard. (157-158)
Like Facebook, our schools and the classrooms and curricula they provide form a walled garden full of “finding” by merely clicking on icons (including the face of the teacher, which when clicked upon may yield “what the teacher wants”), partly for administrative convenience, partly for administered intellectuality that hides our own conjectures (lest emergent conceptual frameworks undermine the power, authority, and wealth of the old architects), partly because it’s a good business model. Ah, the business model. Tim Berners-Lee put the Web in the public domain, and what kind of a business model is that? Unless one considers it an investment made to benefit the species–a mission we say we follow in higher ed, of course.
Did I say that remaining at the enactive or at best the iconic while feigning the conceptual is a “good” business model? I meant a great business model, especially if one enjoys exploiting others without leaving visible marks, since it’s education that gives us the constrictive framework of pointing that enables, encourages, and stimulates the narrow ways we are able to imagine thinking about business models. Or even, at the level of curriculum, to imagine thinking about thinking about business models. I’ll drop the sarcasm and say that’s really bad news. If education fails us because its “great business model” and massively convenient administrative structures cannot or will not allow its participants to work at a truly conceptual level, a truly problem-finding level where the lowest and highest arenas of problem-finding are centrally concerned with learning itself, then we are trapped. There will be no portals. (The cake is a lie.)
So back to the Internet, now mapped along Bruner’s levels. The Internet permits enactive communication. Data transfer in an open and permissive network-of-networks, like sensation in complexly open and permissive internal neural networks, permits a kind of data-telepresence that supports all sorts of miming-based communication.
The Web appears to be a graphical user interface for the Internet, but this is a dangerous misperception. Clicking on images (or even links, for that matter) is really no more than Bruner’s enactive level of communication. The Web is an environment for linking, which means it openly and permissively enables (encourages, stimulates), with each and every act and experience of linking and linked, an iconic level of communication that contains within it the potential of a powerful experience of the abstract and conceptual, an appeal (implicitly or explicitly) to shared experience at a symbolic level that depends on a more complex idea of other minds than the merely enactive or iconic levels of communication do.
People conflate school and education the way people conflate the Internet and the World Wide Web. Education appears to be synonymous with school, which is designed to be an environment for the focused and controlled delivery of content. This is a dangerous misperception that’s similar to the dangerous misperception that says the Internet and the World Wide Web are the same thing. I think the two misperceptions are related. One may cause the other. They may cause each other in a vicious circle. Hard to say. But the danger is the same. And Facebook is like Facebook because that’s the way we like to make a world, or have a world made for us, and school is school because we need to convince ourselves that any other way is stupid, wrong, or crazy.
At any given moment, however, there are people who, like Puddleglum in The Silver Chair, insist that there’s a better and different and more open and freer world above and outside the walls of the cave. And at some lucky moments, those people get to build something that reflects that belief. Something we can build on, too, and not simply react within.
Yes, this is like moving from Flatland into a three dimensional space. We face the same difficulty, too: how to imagine a dimension that we cannot explain in terms of the data of immediate (two-dimensional) perception? Thankfully, the two-dimensional world of Flatland has a word for “dimension,” which some Flatland Folk might become curious about. And once that curiosity is awakened, you never know, some of those folk may ask themselves whether the abstraction of “dimension” might be a portal into something real that they simply cannot experience except through that portal of abstraction.
Isn’t that something like how language works? If you think about it, doesn’t language itself seem to open up n-dimensional possibilities that lead us to co-create new realities out of nothing but thought itself? Like the poet, lunatic, and lover “of imagination all compact,” as Shakespeare has a typically dense administrator pronounce, the result is that we “give to airy nothing / A local habitation and a name” (A Midsummer Night’s Dream V.i.16-17). The dense administrator, the mighty King Theseus himself, imagines this ability to be a bug, not a feature. Poor King Theseus! Luckily he married up when he found Hippolyta, who responds to her husband’s pontification with practical visionary good sense:
But all the story of the night told over,
And all their minds transfigured so together,
More witnesseth than fancy’s [imagination's] images
And grows to something of great constancy;
But howsoever, strange and admirable. (23-27)
Minds “transfigured so together.” Too many linkings to be anything less than constant, strange, and admirable. A problem-finding education.
In Computer Lib/Dream Machines, Ted Nelson writes,
What few people realize is that big pictures can be conveyed in more powerful ways than they know. The reason they don’t know it is that they see the content in the media, and not how the content is being gotten across to them–that in fact they have been given very big pictures indeed, but don’t know it. (I take this point to be the Nickel-Iron Core of McLuhanism.)
Brilliant, but there’s more at the core: the big-picture-conveyance is not just delivery but itself a new symbol, a symbol of a specific instance (and a generalizable example) of the possibility of big-picture-conveyance. There is information about information itself, and the possibilities of conveying and sharing experience, being conveyed and shared in that big-picture-instance. Nelson’s word “conveyed” is still too close to “delivered.” McLuhan’s insight is still deeper, that what is “delivered” is always a metastatement about the conditions and means of conveyance considered largely. To put it another way, symbols do not only contain and transmit meaning. Symbols also generate meaning, the way “link” is both a noun and a verb. A medium not only of figuration, but a figure and medium of transfiguration. Our “minds transfigured so together.”
As a species, among our many failings, we also have the wonderful endowment of brains that are bigger on the inside than they are on the outside. “Further up, and further in!” Truth, in-deed. A blogging initiative like the one going at Virginia Tech right now at the Honors Residential College is an attempt to enable, stimulate, model, and encourage intra- and intercranial cognoplasticity, the experience of “bigger on the inside than on the outside,” thus extending the inside (of a small group selected for academic ability) to the outside (which must exist in fruitfully reciprocal relationship lest the experience be merely elitism, defensive, or mutually destructive “othering”). But there’s no way to do this in our newly mediated environment without asking people to narrate, curate, and share on the open web. Until one speaks a language, a word is only a sound (an enactment). Until one reads a language, a word is only a picture (an icon). Until one writes in a language (or medium), one cannot imagine or experience or help build the portal to the thinking-together, the macro-cognoplasticity, the networked transcontextualism, the planetary double-take, that represents the next dimension we need (and desire and dread, too). Our goal is to become first-class peers for each other. Conceptacular colleagues, not just rowers in someone else’s galley.
And here I conclude for now. We have, if we choose, the ability to maintain the open, permissive architecture of the Internet and the open, permissive architecture of the Web that resides within and emerges from the Internet. If we choose to preserve the open, permissive architecture we have been lucky enough to build and lucky enough not to wreck quite yet, we may move to the third level of communication Bruner notes: the conceptual, abstract, symbolic level. For the Web is a network of links, but to call it that is to approach the realization of the next level of understanding, the mode of conceptual communication and enactment (yes there is recursion here) Bruner terms the symbolic. The World Wide Web is not simply a collection of links but an enactment of, an icon of, and an idea about (a symbol of) the complexly open and permissive activity we call linking, out of which we build together the linked and linking and open-to-linking realities of our next stage of cognoplasticity as a species.
This must also be the figure an education makes. Education is the technology that amplifies and augments the natural process of learning. Education brings Flatlanders to consider “dimension” not just as an experience one enacts or points to (a line can go this way, or that; see; now let’s test you to see if you remember that) but as a symbol that can be abstracted from experience and thus (paradoxically) lead to greater, more complex, more possibility-filled and possibility-fueled experience. To use Hofstadter’s language, education must partake of, and stimulate, and empower, the experience and emergence and creation of strange “level-crossing loops.”
It is no accident that computers and cognition and communication and education have been so intertwingled in the history of our digital age. From Charles Babbage, Ada Lovelace, and Alan Turing onward, all along the watchtower where the resonant frequencies are transmitted and received, a “wild surmise” about learning within and among the amphitheatres and launch-pads of shared cognition has accompanied each development in the unfolding n-dimensional narrative of unfolding n-dimensional possibilities and awakenings. It’s exhilarating in that tower, and exhausting as one strains to see distant shifting shapes. It’s cold, especially in the darkest moments. Or so I imagine.
It’s Ted Nelson day in VTNMFS-S13, the Virginia Tech New Media Faculty-Staff Development Seminar. I can see from the motherblog that Ted is already stirring up plenty of response, from delight to alarm, sometimes in the same post. As is his wont, of course. That’s part of why Ted Nelson is on the syllabus. That, plus his brilliance and zany rhetorical adventurousness. And what appears to be his utter fearlessness–fearlessness, not recklessness–in speaking truth to power.
Yes, Ted Nelson: a handful. Imagine him in your class, fellow teachers. A handful. Yet say what you will about Ted, he’s clearly someone who trusts the learner. Ted Nelson trusts the learner. Once more with greater emphasis, perhaps ruining the fantics, but here goes: TED NELSON TRUSTS THE LEARNER.
Most schooling does not.
Yes, we do not trust eight-year-olds to drive cars, vote, get married, drink, watch re-runs of Love, American Style and so forth. These activities are not developmentally appropriate. And for the boys in particular, there’s a real lag time in forebrain development, which means their judgment sometimes isn’t what it needs to be until, oh, later. (Sometimes much later.)
But do we trust students to learn?
Most schooling does not.
Again, there are some sound developmental reasons, but just between us (ok?), they can start to sound kinda funny after a while, as they become unquestioned assumptions that we shellac with layers of cracked and tobacco-colored varnish (read: curricula). For example, we obviously can’t trust learners to love and pursue the learning of math (or whatever subject strikes you as hard and frustrating) unless we require it. Right? So the answer is to require learners to learn math. Right? They wouldn’t do it otherwise. They’d just skip it, the way kids skip their spinach and go straight for the soft-serve processed sugary confections. Here’s what’s funny about that reasoning. It assumes there’s no good or practical way, short of a mandatory “forced march across a flattened plain,” to awaken enough intrinsic interest or curiosity to prime the pump for a learner to learn math because he or she wants to. No way to tap into the learner’s own ability (or, if they’re young enough, their propensity) to be intrigued. “Priming the pump” takes too long, is too complex, requires too much teacherly skill, won’t scale, and so forth. So we take a shortcut. We require students to take math. It’s good for them. And they won’t take it otherwise.
That’s a funny set of assumptions, and I don’t mean “ha ha funny.”
The result that I have observed over several decades of teaching is that we believe that a required course somehow obviates the need for awakening internal motivation, interest, curiosity etc. Those internal states are dodgy, difficult, complex, messy, hard-to-assess things. Why engage with that complexity when we can just say “hey you, take this or else” and be done with it? Why take a chance on love when in the end we don’t believe love matters as much as dutiful compliance? Duty is important, to be sure, and strong, well-aimed habits can carry one through the rough spots–but those same habits, as we shall see, can and often do lead to the tragedy Nelson limns in one of the bleakest parts of his essay:
A general human motivation is god-given at the beginning and warped or destroyed by the educational process as we know it: thus we internalize at last that most fundamental of grownup goals: just to get through another day. (my emphasis)
I ask you, hand on heart, do we really think we will raise a nation of quantitatively literate and engaged citizens because of required math courses? Sure, sometimes a required course is able to awaken an interest that would have lain dormant otherwise. Even a blind pig finds an acorn now and again. But is that really the best we can do? Would we put our faith in a pain reliever that seems to increase pain for 80-85% of the population while somehow miraculously relieving pain for 15-20%? (These are liberal estimates of benefit and conservative estimates of pain.) Hand on heart, looking into each other’s eyes: do we think required courses do much more than salve our conscience and give our students and us a way to play “let’s pretend” much of the time? The evidence I’ve seen, personally and professionally and across many different contexts, does not support the idea that a “forced march across a flattened plain” does more good than harm. Rather the opposite.
Ted Nelson trusts the learner. Most schooling does not. Blogging in my classes is many things for me, but at the top of the list is blogging as an exercise in trusting the learner–or as Carl Rogers puts it, “freedom to learn.” I assure you that this is not a safe thing to do. That’s part of the point. One can fail, and the failing matters. It’s not so much that school gives us a place where one can fail without that mattering. If failure doesn’t matter, in that sense, how is it really failing? Rather, school gives us a chance to experience a different truth about failure by uniting us in a community of delighted strivers and yearners: death alone excepted, failure isn’t final.
But, what if in the course of life, something that seemed personal at 20, and blogged about freely, is something you would prefer to have in the private column of life at 25, 30, 40? I’m quite sure, many of us greying adults are very happy there are no digital footprints of our personal thoughts easily found on Google from our days in college.
I hear this observation a lot as I travel around talking to folks trying to make sense of schooling in a digital age. It’s an observation that emerges from a hard and worthy question. “I’m quite sure, many of us greying adults” indicates a steadfast conviction that those of us with the hoarfrost (or worse) turned a corner sometime in our early development and are very glad that no one can see what preceded that turn. What if we entered college as staunch (insert political party here) and then got smarter and wiser and became staunch (insert political party here)? Isn’t it great we don’t have any images of our younger selves at Young Republican meetings that can be found on Google? Isn’t it great there are no pictures out there of our youthful dalliance with Students for a Democratic Society? Aren’t we glad that we didn’t share photos on Facebook or on our blogs of those times we went to churches we thought we believed in when we just weren’t sophisticated enough to know that we simply believed in what our parents taught us to believe? (Thought in passing: one rarely encounters the idea that increasing intellectual sophistication can lead to religious belief as well as away from it, though I know that it can and has.)
Aren’t we glad we don’t have to run for President when YouTube shows us apparently drunk? (I’ll leave that as an exercise for the astute reader.)
Aren’t we relieved that there are no pictures like this on the Internet to embarrass our wiser, older selves?
Kids these days. I googled “spring break photos” and this one seemed safest to post here.
I mean, if this is what kids are voluntarily uploading to the Internet, or are having to endure being uploaded to the Internet, how in the world can we trust them as learners, give them a global printing press, and say “please put more stuff up on the Internet”? Won’t we just set them up for more future embarrassment? Aren’t we simply contributing to the grand and ultimate erosion of the right to be silly, the right to be debauched, the right to be young, be foolish, be happy (well, happily intoxicated anyway) without those lapses following them around like a blinding hangover for the rest of their lives?
Perhaps. Perhaps the young men and women in this photo will be forever barred from positions of responsibility and leadership because their employers will see this photo (or worse yet, rely on a sophisticated algorithm to find the applicant’s face in every photo on the Internet) and say “I could never hire such an irresponsible and reckless person, especially if they’re also too stupid or careless to prevent the publication of their stupid carelessness to the World Wide Cesspool.” People enjoy judging other people’s lapses. Perhaps such judges, much better at hiding their own lapses (lapses that often mutate and persist into adulthood), will see such photos and blackball everyone in them. Seth Godin tells such a story:
I almost hired someone a few years ago–until I googled her and discovered that the first two matches were pictures of her drinking beer from a funnel, and her listed hobby was, “binge drinking.”
Note, however, that the woman was doing a good thing in a bad way. She was crafting an identity and publishing it to the Internet. As Jon Udell argues, One Will Be Googled, and that should lead us not to resist publishing to the Internet, but to publish thoughtfully and well to the Internet so that the googling reveals things about us we want to be known. In the language of part one of this post, the googling should reveal us as persons while not tossing aside our privacy. The young woman in Godin’s story has of course done exactly the opposite. I have no idea who she is as a complete person. And it may well be that she advertises herself as a binge drinker because she hasn’t had much help–certainly not much help from schooling–in thinking about herself as a complete person and how to demonstrate that thinking in a generous, connected way on the Internet not as a result of a prescribed and proctored curriculum, but as an ongoing commitment.
I suspect her schooling hasn’t helped her, not because she was shown too much trust as a learner, but because she was shown too little. Perhaps she knew and did her duty in school: comply! prepare! speak when you’re spoken to, until we say you’ve learned enough to earn the right to speak in the modes of sophistication we have prescribed, and in a way that will shield you from the embarrassment of admitting you’ve failed or ever been mistaken, an admission that can be evaded unless the mistake was public, which it would have been if we had encouraged you to narrate your learning and publish that story to the Internet!
Of course I cannot say for sure that the recklessness of publishing your private enthusiasm for binge drinking to the Internet is in any way correlated with a failure of education–a failure linked to our desire to protect our learners from themselves, a desire that may emerge from a lack of trust. But I think it is reasonably clear that if we truly desire to protect our learners from themselves, we are failing. They are publishing to the Internet no matter what we say. Human beings typically want to connect with other human beings. Those energies will find an outlet. And my argument here is that we should not be protecting our learners from themselves. We should be trusting them, and aiding them in discovering and using (and teaching us, too) the arts of freedom.
Those arts are not simply the arts of abstinence. Milton writes, “I cannot praise a fugitive and cloistered virtue.” Me neither, though the result is sometimes a commitment to enduring conspicuous failure as one builds what Godin calls “a backlist.” John Dewey observes that “education … is a process of living and not a preparation for future living.” The extent to which we share with each other our growing processes of living, in all their complexities and inconsistencies and false starts and unexpected delighted discoveries and embarrassing stumbles, is the extent to which we are committed to the idea that we could build a better world together.
Managing a learning environment such as this poses its own unique challenges, but there is one simple technique, which makes everything else fall into place: love and respect your students and they will love and respect you back. With the underlying feeling of trust and respect this provides, students quickly realize the importance of their role as co-creators of the learning environment and they begin to take responsibility for their own education.
Love doesn’t mean keeping your students safe by teaching them abstinence. Love means keeping them safe by teaching them the arts of freedom. Abstinence is one of those arts, of course, but only one, and in my view one of the lesser of them, perhaps the least of all. Abstinence in this sense would mean abstaining from learning itself, a devotion to the idea that “ignorance is bliss.” Education is devoted to the idea that any such bliss is not a bliss worth having, and certainly not a building material for a better world. At least, that’s what we profess, implicitly, by taking money for teaching, research, and service.
What is the alternative to the kind of risk I’m urging us to take? Or to put it another way, what is the risk of not taking these risks of narrating one’s person (not one’s privacy), as that personhood emerges and develops, in a community that conspicuously supports the goal of knowing even as we are known? What if we do not engage with the kind of love that trusts the learner with the intensity and urgency of a Ted Nelson?
In The Four Loves, C. S. Lewis describes the potential horrors (one might say, a “learning outcome”) of a refusal to engage with this higher love:
There is no safe investment. To love at all is to be vulnerable. Love anything, and your heart will certainly be wrung and possibly be broken. If you want to make sure of keeping it intact, you must give your heart to no one, not even to an animal. Wrap it carefully round with hobbies and little luxuries; avoid all entanglements; lock it up safe in the casket or coffin of your selfishness. But in that casket–safe, dark, motionless, airless–it will change. It will not be broken; it will become unbreakable, impenetrable, irredeemable.
For there is another terrible risk in allowing our younger selves to persist alongside our greying ones: those younger selves may judge us, and not too kindly. Emerson warns us of the risk of not learning the habit of trust:
A man should learn to detect and watch that gleam of light which flashes across his mind from within, more than the lustre of the firmament of bards and sages. Yet he dismisses without notice his thought, because it is his. In every work of genius we recognize our own rejected thoughts: they come back to us with a certain alienated majesty.
I am sorry for the androcentric nouns, but I am not sorry for the sentiment. Notice that Emerson does not say we should “detect and watch that gleam of light … from within” instead of “the firmament of bards and sages.” He says “more than,” and everywhere in “Self-Reliance” argues that it is that inward watching, that very habit of welcoming and not dismissing or rejecting one’s thoughts, that is the greatest lesson we may learn from the bards and sages we encounter in our education. We recognize our kinship, our responsibilities, through the inward watching and welcoming we learn from those who have done that before us–and those who naturally do it now, before schooling has diminished that trust:
What pretty oracles nature yields us on this text, in the face and behaviour of children, babes, and even brutes! That divided and rebel mind, that distrust of a sentiment because our arithmetic has computed the strength and means opposed to our purpose, these have not. Their mind being whole, their eye is as yet unconquered, and when we look in their faces, we are disconcerted…. Do not think the youth has no force, because he cannot speak to you and me. Hark! in the next room his voice is sufficiently clear and emphatic. It seems he knows how to speak to his contemporaries. Bashful or bold, then, he will know how to make us seniors very unnecessary.
What if we encountered our younger selves speaking clearly and emphatically in the next room of our “backlist“? What would they say to us? Would they commend us for the compromises we have made, some of them necessary, some of them merely pretended necessities for the sake of our supposed dignity, our need merely to make it to the end of another day? Would our younger selves embarrass us with their energy, their hopefulness, their strong and happy rejection of our rebel and divided minds? Would those younger selves haunt us with their alienated majesty?
What might we see at eighteen, or even at eight, that might forever elude our maturer sight? Could we lay in stores of our youthful visions to feed us, encourage us, and occasionally chide us as we encounter the wearying discouraging complexities of our adult lives? Can we trust our younger selves, and the younger selves around us, and the younger selves for whom we invent the future, a great cloud of witnesses around us, past and present and future, not to point a reproachful finger at us, disdaining our cautions, indicting our impenetrable unbreakable hearts?
Late one night when sleep wouldn’t visit, I stumbled across a stirring and revelatory documentary called Joni Mitchell: Woman of Heart and Mind. I’ve long loved Joni’s music and the sensibility behind it. I once gave a talk in which “Amelia,” my favorite of her songs, played a central role. I know few artists who are as consistently witty, poignant, and searching as Joni Mitchell. Funny, too. She has been an essential companion, even or especially when I had to strain to bridge what seemed the distance between her older, more sophisticated and artful life and the life I was trying to shape as an adolescent growing up in southwest Virginia, not so very far from where I’m typing these words.
Though she is most commonly typed as a “singer-songwriter,” in truth Joni Mitchell is as far from that folk-derived genre as I can imagine, largely because the structure of her songs is so unusual and exploratory–while also very often being as catchy and propulsive as a good pop song. How she can combine those apparently incompatible excellences is a good question. Perhaps it has something to do with her habitual use of open and unusual tunings for her guitar. When her version of “Urge for Going” was released several years back, commentators noted it was one of the very few Joni Mitchell songs to use the standard E-A-D-G-B-E tuning. The other songs, well, not so much.
Which brings me back to the documentary, and perilously near my point. At one moment early in the film, the topic of Joni’s tunings comes up, and Joni herself speaks to her renowned oddity in that department. What she says has haunted me ever since. She says that she thinks of her unusual chords as “chords of inquiry,” and presents them as if there’s a question mark after each one.
“Chords of inquiry.” A harmony that proposes exploration and curiosity. Notes resonating together but not reaching a conclusion or advancing an argument.
The phrase itself sounds such a music: “chords of inquiry.”
This is the music I yearn for and try to encourage in our Awakening the Digital Imagination seminar each time we convene–in fact, each time the seminar has convened from its very beginning (as a faculty-staff development seminar) back in 2009. It’s not an easy music to sound, especially with a pride of highly trained academics all ranging the veldt of the seminar meetings (online and in real space), all ready to engage with (necessary, certainly) critical thinking, subtle distinctions, spirited polemic, all the academics’ discursive tooth and claw. That which does not kill us, etc. And besides, this is what we (and I do mean we) were taught to do in graduate school. To inculcate a kind of ruthlessness, a kind of skepticism and scrutiny before which all wooly thinking would simply wither.
And yet, what of these chords of inquiry? I do think a provisional acceptance of the essential frameworks of each essay we read, a kind of readerly version of Keats’ “negative capability,” can animate a renaissance of wonder and is indeed a good spiritual discipline in itself. I think of the distinction I was taught at Baylor University by my late colleague Susan Colon, a distinction between “implicative criticism” and “argumentative criticism” she worked through in her review of Andrew Miller’s The Burdens of Perfection: On Ethics and Reading in Nineteenth-Century British Literature:
Implicative criticism, according to Andrew Miller, is writing in which the writer’s thinking is unfolded and made visible to the reader so as to generate a multiplicity of responses, all of them transformative. Its foil, argumentative criticism, seeks closure rather than disclosure; it elicits agreement or disagreement but not transformation.
Argumentative criticism is the coin of the realm in academia. We are rewarded for it, and give up our claims to depth of knowledge and sophisticated methodologies when we do not practice it. Yet implicative criticism is every bit as important, as any sympathetic reader understands. It may be even more important, ultimately, if we do indeed seek transformation. Implicative criticism does unavoidably put the self at risk, it’s true. And some things do need protection, and a vigorous argumentation to pursue that need.
Yet among the many heard and unheard melodies that play through my mind, the chords of inquiry bring the deepest haunting and the most powerful insights. The writers we read in this seminar sound to my ears many deep chords of inquiry, as they imagine Doug Engelbart’s “thought vectors in concept space,” as they strive toward Alan Kay’s beautiful aphorism that “the computer is an instrument whose music is ideas.” Each chord followed by a question mark, like Vannevar Bush’s provocative little “presumably” as he ends “As We May Think.”
Unresolved, yet yearning, and musical for all that.
What do we know, but that we face
One another in this place?
W. B. Yeats, “The Man and the Echo”
I spend a lot of time talking to academics about social media. I field many frequently asked questions and try to speak to many frequently voiced objections. Sometimes the effort is exhausting or even exasperating, particularly when the questions are really objections in disguise. Answers aren’t much use in that case. Other times, however, useful distinctions may emerge–useful to me, at least, and perhaps to others as well.
One of the typical questions has to do with how “personal” social media are, and how troubling that can be for academics. First, I have to unpack “social media” a bit, and begin to distinguish between blogs, Twitter, Facebook, and the rest. These are all “social media,” yes, but they are very different in practice, with different challenges and opportunities. After these distinctions, though, I’m still faced with the core question: what’s valuable about the personal element in these media? Why should I care? And why should I make myself vulnerable by sharing my personal life with the world?
There are many implications and assumptions hidden in the questions. Those who want to cleanse discourse of the personal seem to assume that “personal” means “irrelevant to anyone else,” or “ephemeral,” or “trivial.” The classic example is “what I had for breakfast.” (I’m on the wrong networks, obviously, as I myself don’t see breakfast tweets or blog posts or Facebook status updates.) Yet there’s also a thread of fear in these dismissals and objections, a fear or even a defiance that I acknowledge and take seriously. In this sense, “personal” also means “none of your business,” and “too dangerous to share.”
So I’ve begun to distinguish “personal” from “private.” The idea is that “private” means “don’t share on social media.” “Private” belongs to you, and you should always be vigilant about protecting your privacy. Without privacy, our agency is diminished, perhaps eliminated. Without privacy, we cannot generate or sustain the most intimate bonds of trust. Without privacy, our personhood is at risk.
But what of the personal, as opposed to the private? I believe the words are not synonyms. Instead, I believe private is a subset of personal.
I think those aspects of the person that are not private not only can be shared but ought to be shared. This is what we mean when we tell writers they should find their own voices. This is what we mean when we say we seek to “know as we are known,” as Parker Palmer insists. This is what we mean when we talk about “integration of self,” when we speak of our concern for “the whole person.” It is only when we bring the personal (not the private) to our discourse that we understand the rich complexity of individual being out of which civilization is built–or out of which it ought to be built. The personal keeps our organizations from becoming mere machines. The personal preserves dignity and community. The personal brings life to even the most mundane and repetitive operational tasks. We neglect or conceal the personal (not the private) at our peril.
I tell my students that I have only two rules for us in our work together: “passion encouraged; civility required.” The passion is always personal, as is the civility. The forbearance we show each other within our civility is a personal respect for the other, which also means a respect for the complexities of their privacy, complexities hinted at, though not made visible, primarily through the extent to which we share our personhood.
The Oxford English Dictionary entry for “person” offers many fascinating definitions, but the salient one for what I’m exploring here is definition 3a:
The self, being, or individual personality of a man or woman, esp. as distinct from his or her occupation, works, etc.
The personal is who we are “as distinct from [our] occupation, works, etc.” Our occupation and works are the result of effort, luck, ability, connections, a whole host of purposeful and chance occurrences. But we are not defined by our works and occupation. We are defined by something larger and more elusive, and more dynamic too. Sharing that larger, more elusive, and more dynamic aspect of selfhood is valuable, reminding ourselves and those around us that all of us are more than we appear to be in any particular transaction or encounter. Such reminders encourage humility. They also encourage a kind of exhilarating anticipation, as one never knows which humble or exalted personage may be one’s unmet friend, an angel to entertain unawares.
Sharing the personal, as distinguished from oversharing the private, means engaging with personhood in all its messy and glorious complexity, and all its potential, too. If, as Jon Udell reminds us, “context is a service we provide for each other,” the context is not merely informational, nor is it about matters that should remain private.
What might a Huge-LQG and a motherblog have in common? The first challenges our basic assumptions about what we think the nature of the universe is, and the second enables us to challenge our basic assumptions about what we think the nature of education is.
You already know that a ‘motherblog’ is, in this context anyway, the veritable mothership that pulls in and collects the many voices of each individual blogger; in this case, the blog of each individual Padawan in this semester’s group of GEDI Knights-in-training. The GEDI motherblog aggregates and lets us share the musings, epiphanies, insights, and so forth, of each unique GEDI blogger.
But you may be scratching your head about the LQG acronym, so let me explain…. Last week I was reading about the “biggest thing in the universe,” a Huge-LQG, or Huge Large Quasar Group, that challenges our very assumptions about the nature of the universe. Yep. British astronomers have identified an object so large that it turns on its head the so-called cosmological principle, which essentially argues that if you’ve observed one segment of the universe, you’ve observed ‘em all–that the known is similar to the unknown. Well, apparently not so much. That principle has just been grandly disrupted by the Huge-LQG identified by those astronomers in the UK.
Wow. So, here’s the thing: blogging can be a bit like that, metaphorically speaking. The newly discovered Huge-LQG is made up of 73 quasars, each at the center of its own galaxy. Our own GEDI motherblog is a cluster of 50 individual blogs, each at the center of its own galaxy, so to speak, and each potentially quasar-like in its vision and insight. And should I, or any other reader, assume for a moment that all blogging will produce the same results, we will have our assumptions grandly challenged. The blogging platform empowers a self-reflective voice and exploration, extending an ‘invitation’ to the open and honest engagement that is too often missing from most corners of academe. There is a self-reflective engagement that can occur when one jumps fully in and embraces what contributing to the blogosphere invites. While journal publications have an important place in scholarly endeavors, so, too, perhaps does blogging. More than we might think. The blogger who wanders up, down, over, and through a topic or idea and takes us along for the ride; the blog post that does not resemble a formal writing assignment and does not rely on specialized, disciplinary language (or, in worst-case scenarios, academese)–because formal writing assignments have rules and regs so specified that they directly or indirectly control and contain the intellectual exploration and journey the assignment was no doubt meant to engender–that is what I get excited about with the blogging initiative in GEDI. Each semester I look forward to what the mothership brings as it gathers all of these remarkable quarks, or in this case quirks, of unique intellectual and affective engagement.
And, oh my, my first ventures into the remarkableness of the spring 2013 GEDI motherblog have not disappointed. There are several fabulous forays into interesting questions, but in this post I want to comment on three bloggers who have left me astounded, gobsmacked, as Dr. C is fond of saying, with their open and amazing blogging and their willingness to dive right in and be bold and curious. Brandon started us off right out of the gate with an articulation of his hesitation about blogging, but then his post evolves into a delightfully open exploration about what his resistance has been about, and he shares his epiphany that perhaps communication about science in some of the informal ways that blogging provides is a way to get science back into mainstream society. He has created a professional web presence and decided that the Kool-Aid is something he will try, rather than shun without taking a sip. His rallying cry, “Let’s do this!” inspires us all to see if we may be parched and thirsty for the network of connection via blogging.
In her post entitled The Reluctant Blogger, Laurie explores what we all fear about putting our thoughts out into the open blogosphere, primarily that we run the risk of “being poorly received” and that we may be “disagreed with, laughed at, scowled at,” and that we risk being judged. Laurie decides, however, that part of what we do when we make progress in any academic area and part of what we should be doing as public intellectuals is to recognize that it is “really essential to have disagreements.” We all fear being laughed/scowled at or harshly judged, but that fear extends to other intellectual endeavors as well–conference papers, grant proposals, journal articles, oh, and yes, teaching that is fully present and actively engaged. Fear of disagreement or the fear of putting our ideas out there is not, I think, limited to blogging. (I actually think that blogging may make our other academic work more interesting and invigorating and help us discover our ‘voice’ in different kinds of writing genres.) To Laurie’s insights I would add some inspiration and wisdom from Seth Godin, who reminds us that not everyone will like everything we write and that we just need to shrug our shoulders, realize that it’s not for them, and keep moving forward. Learning to teach shares this sentiment, too. What is most important is diving in and defining how to be the public contributor we believe is part of our role and responsibility as 21st-century academics. Brava!
But one among us has set the bar high, at least for me, in her story of how crossing over the line from silent observer to blogger is a Rubicon of sorts. Her take on the blogging ‘requirement’ is this: “If not for these courses, I would not be taking this step. While the requirement is forcing me out the door, the journey is still ultimately mine. So, stick around. I might just have a few interesting things to say.” Sho’nuf and bam, her next post explores a recent APLU initiative and the trouble with NCLB’d undergraduates, with her passionate engagement on the topic delightfully present. Nice! But her third foray into blogging caught my attention in the first two lines: “I noticed something this week. I am interested.” Yes, indeed, interested and interesting. Give me more, I think, and she does:
“. . . I think I’m starting to shake off the shackles and the dust from massive burn-out at my former job where I had become bored, uninterested, and dispassionate. It’s the closest I have ever come to feeling ambivalent. It actually frightened me. I worried that I might not ever be interested in anything again. What if I’d crossed the event horizon on a massive black hole of disinterest and boredom? Apparently, I had not crossed it. I just spent a little too much time in the ergosphere. Perhaps listening to people like Dr. Gardner Campbell, Sir Ken Robinson, and others is having an impact. The idea of exploration is appealing again. But now, I’m wondering how many students today are perpetually stuck in the ergosphere of an antiquated and inadequate education system that doesn’t provide the necessary tools? How many get lost, and how many have crossed the event horizon, possibly never to return?”
I am so sticking around. You inspire, lgm, and the metaphor of the ergosphere and the event horizon you use is beautifully apt. There is quasar-like energy in the analysis you provide of what may happen if we don’t explore better ways to educate all learners. Did I say I’m sticking around? I want to follow along as you explore what MOOCs can and cannot do in their current incarnation, or what they should be doing and what they may tell us about learning in our f2f courses as well. I’m interested. Btw, if you haven’t run across it yet, you won’t want to miss Clay Shirky’s latest post on MOOCs. A morsel for you: “MOOCs are a lightning strike on a rotten tree. Most stories have focused on the lightning, on MOOCs as the flashy new thing. I want to talk about the tree.” His links within the post will connect you to the debate he’s been a part of. (Those amazing links, again, and no waiting for the debate to occur over several issues of a journal.) Shirky tries to turn our attention toward a clear and honest examination of what we’re doing in higher ed (or not doing well), rather than just presume higher ed is doing a grand job, as is, and foolishly hope that MOOC mania will soon cease and desist and all will return to ‘normal’ once everyone gets over the fuss. You might like the Shirky post, too, Brandon; since you are exploring the attack of the MOOC and the future prof, you might find it interesting reading as well.
I’ve just highlighted three GEDI bloggers here, but the motherblog is full of interested and interesting colleagues. Sarah Hanks is very much leading from the inside out and is challenging herself to find both the courage and the time to be her best self for her students so that they will step up because “isn’t that what teaching is about? To be vulnerable enough to teach differently”? Yes, indeed, yes. Jack talks about his desire to find “[l]earning systems that promote knowledge generation, encourage practitioner / student participation in learning, and enable social change.” Young Rhodes Walker will no doubt be letting us know how blogging might impact STEM classes. Juan prompts us to think about the evolution and potential power of teaching with technologies, and our own smithing god has shared a video and his musings on how difficult the work of changing our pedagogy will be. And, huntingmaddness, we’re looking forward to what you tell us about your vinyl collection in a future post. Ivy ponders what makes for a learning environment that will generate engagement and critical thinking. We’ll be reading and watching to see what SansSucre thinks of some of Jane McGonigal’s games for social change once there’s been an opportunity to explore some of them. To the Macroworld of Microbes…good question about how/if blogging can work in a microbio lab–you might want to check the blogging wonderfulness of Dr. Jill Sible and what she models for her students, with whom she blogs. Taulby asks some interesting questions about blogging and democracy, and Sascha is pondering leadership issues and the political economy of blogging. I have missed mentioning some of the GEDI bloggers here, but I will keep reading. I am unlikely to do a Homeric ‘catalogue of ships’ in each of my posts, but we should all connect to and reference each other whenever appropriate. Links are a wonderful thing.
This post concludes the series I started publishing last Monday. I hope to write an epilogue soon in which I reflect at somewhat greater length on what I wrote two years ago. (Contrary to widespread belief, reflection often disrupts more than it consolidates–see for example Hamlet.) For now, reading over these words, I find my sense of urgency has grown, not diminished. More than ever, I believe we must empower those whom Seymour Papert calls Yearners.
More than ever, I believe that the first college or university that finds its way to a deep understanding of Alan Kay’s beautiful aphorism, “the computer is an instrument whose music is ideas,” and can nurture the playfully serious courage to let that understanding pervade its communal life, will have accomplished something of extreme importance for the public good.
As for my confidence that higher education can rise to these challenges–well, it depends on the day you ask me.
The task is the same now as it ever has been, familiar, thrilling, unavoidable: we work with all our myriad talents to expand our media of expression to the full measure of our humanity. –Janet Murray, “Inventing the Medium”
A pattern emerges within all this discussion, a fractal pattern of similar principles and conceptual frameworks. We can identify this pattern with the help of Steven Berlin Johnson’s Where Good Ideas Come From, which traces significant innovation and invention over the long sweep of human history. His conclusion is that combination and recombination of what he calls “the adjacent possible” fuels growth and innovation. The principle is the same as for emergence, and as difficult to imagine: network effects that appear in a macrostate are not yet visible in a single instance or microstate. One cannot have a flock of one bird. Yet knowing that, we can begin to understand the possibilities of self-stimulating, self-organizing structures, and can begin to build platforms in which the range and number of adjacent possibles are increased and best positioned for success.
Committees are often called “graveyards for good ideas.” At their best, however, committees are excellent platforms for emergence. The most exciting and productive instance of the adjacent possible is two trusting and inventive colleagues in conversation with each other. If the extraordinary success of the Internet and the Web has taught us anything, it’s that conversations within networked, interactive computing environments can scale and generate an emergent “wealth of networks” far beyond our expectations. Going forward, we can design such an environment by awakening the digital imagination, empowering faculty, staff, and students as digital citizens, and creating “hubs” or “nodes” of conversation that are linked internally and externally in a network of innovation. Whether we call this network a “skunk works” or a flotilla of pirate ships, we must empower this network not only to invent but to reinvent. If we are to create and innovate within the extraordinary disruption of the digital age, we must not insulate ourselves from disruption, for that would be to reject the global conversation itself. We must build curricula, learning environments, learning opportunities, and organizational structures that foster the capacity for collaboration and self-surprise within a framework of shared values and goals.
As it happens, interactive computing was invented for that very purpose: to symbolize and share the richness of cognition. Douglas Engelbart, the father of interactive computing, wrote these stirring words in the essay that would eventually launch the Internet itself, “Augmenting Human Intellect: A Conceptual Framework”:
We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.
No self-respecting institution of higher learning would neglect these principles, as they are the foundation of educating our citizens for maximum agency and contribution to a democracy, a form of government that is itself a model for reinvention of the kind we are discussing here.
MIT’s Seymour Papert devoted his career to the idea that interactive computing offered a new mode of experiential learning. In 1993, he published a book titled The Children’s Machine: Rethinking School In The Age Of The Computer. In this magisterial and also deeply personal work, Papert distinguishes “Schoolers” from “Yearners.” “Schoolers” are surprised and even indignant about the need for “megachange.” By contrast, Papert writes, Yearners “do not say, ‘I can’t imagine what you could possibly be looking for,’ because they have themselves felt the yearning for something different.” If Virginia Tech is to invent the future, it must empower its Yearners. It must help to awaken their digital imaginations, give them the tools, responsibilities, and freedoms of digital citizens, and help them build platforms to support and foster emergence despite the risks and failures along the way. Only some of the obstacles to inventing the future are technological. Most are cultural. Here too Papert’s insights are instructive:
My overarching message to anyone who wishes to influence, or simply understand, the development of educational computing is that it is not about one damn product after another (to paraphrase a saying about how school teaches history). Its essence is the growth of a culture, and it can be influenced constructively only through understanding and fostering trends in this culture.
Thus a task force on instructional technology inevitably becomes a task force on institutional mission and culture. The difference, of course, is the difference computers make. Surveying the landscape and visible horizons of a digital world as digital citizens with a fully awakened digital imagination, we may plausibly conclude that computers, properly understood, can make all the difference indeed.
What made all these things [the emerging technologies of interactive computing] work so well is that they were empty inside. Almost skeletal. Hard to believe there isn’t more to it. I asked one of my mentors how this could be and he said it has to be that way. If it’s complex it can’t work until it’s empty. These days we have another way to describe this, my friend and former colleague David Weinberger called it Small Pieces, Loosely Joined. I’ve never heard a better description of the architecture of the Internet.
–Dave Winer, Let’s Build A New Internet In Academia
Can we build a Meta University within universities as well as among them? Any university that wants to be a leader in the digital world must do so. The most effective contributions to this Meta University will come from those institutions that walk the walk within their own structures. That is, the organizational structures that will most effectively invent the future and lead education into a new millennium will be those in which the organizational structures are themselves “accessible, empowering, dynamic,” those that are “communally constructed framework[s] of open materials and platforms.”
We know we need robust infrastructure: high-capacity, high-bandwidth connections, both wired and wireless, and ubiquitous throughout the campus’s physical spaces; flexible, reconfigurable learning environments; support for faculty, staff, and students; easily accessible and navigable digital repositories, and so on. We can identify these needs fairly readily, even if we do not yet know how we will design or support the resources that meet them. Once again, however, the real challenge is cultural. In addition to specific goals like the ones enumerated above, the organizational subcommittee consistently uses words like “flexibility,” “collaboration,” “sharing,” “integrating,” and most challenging of all, “nurture and develop.” These are words that point to attitudes and values. These are cultural words. How can we inculcate such a culture at a large research university with over 3,000 faculty and over 30,000 students, plus staff and administration?
Once again, we should look to the Internet itself, in particular the World Wide Web, for a guiding principle. In “Small Pieces Loosely Joined” (www.smallpieces.com), his classic work on the design and organizing function of the Web, David Weinberger writes, “the Web gets its value not from the smoothness of its overall operation but from its abundance of small nuggets that point to more small nuggets.” The challenge for an organization, then, is to identify those nuggets, teams, and services that provide real value and organize them not into a tight structure but into a set of flexible, networked links: small pieces loosely joined.
Large organizations function in almost the opposite way: huge pieces tightly joined, or perhaps even worse, huge pieces completely disconnected from each other. The challenge is one of communication within a structure that empowers each person to create links among the small pieces loosely joined. Again we must ask, where are these conversations possible (answer: everywhere), and how can we foster them? Ironically, task forces and special committees often mark the first time that people from clearly interdependent areas come together to voice their perspectives and articulate common goals. Here leaders in the Registrar’s Office share their hopes and frustrations with leaders from the College of Architecture and Urban Studies, or with leaders from IDDL or CIDER. Here the conductor of a laptop orchestra brainstorms with an education researcher, the dean of undergraduate studies, and the chief information officer. We must instantiate these conversations more regularly and widely. Such conversations not only generate solutions and ideas, but also identify and begin to link those small pieces loosely joined. Again, leadership is key. A task force clearly signals the priority and urgency the institution has given to the conversation. To stimulate more of these conversations, we will need more such assignments, more such signals from our leadership.
We have already seen how Google sends these signals to its employees. It’s instructive also to recall Apple’s beginnings. When it came time to design the Macintosh, a group physically relocated to another building on the Apple campus and literally flew a pirate’s flag from the rooftop. When the Macintosh was finished, the first ones included reproductions of the signatures from the entire project team inside the case of the machine.
Metaphorically speaking, our approach to organizational structures for 21st-century digital leadership must be one in which talented, committed workers have the chance to be pirates (i.e., innovate dramatically, even radically) as well as the chance to sign their work, even if only they will know the signatures are there. Instead of silos, we must build platforms for invention and reinvention. The “wealth of networks” described by Yochai Benkler can emerge among us if those platforms are fundamentally platforms for conversation, and if that conversation is encouraged to imagine and embrace risk for the sake of renewal and invention.