Hydroponic Studio

We have understood content delivery in two ways—literally, as in conveying information aggregated, framed, interpreted, or otherwise packaged by an expert, or processually, as a process of inquiry modeled in a lecture that poses and answers a question, or as discussion (ranging in degree of control from Socratic to freeform). In either case, you have Teacher at the head of a class, and that’s “class” as in “class system.” I don’t imagine that the respect due a working scholar is going to vaporize, but I do hope it will realign itself as one of the range of resources students have available to them, and I would hope faculty could realize what it is that we really want students to be able to do: to practice their own versions of research, of our questioning of the issues and the arguments offered our generation, and of our efforts to rearticulate for our generation how to understand something carefully, insightfully, profoundly.

Needless to say, our methods have changed somewhat. We like our electronic databases, our citation software, our word processors, and our outsourced memory for factual data. It does beat card catalogs, 3x5s, the typewriter, and Corrasable bond. But we still do the same analog-modeled research we did forty years ago, and we teach pretty much the way we were taught—though we like our laptop media players plugged into projectors, our powerpoints ensuring a base level of adequacy in student note-taking, and the course management systems that relieve us from hand-carrying carbon-paper triplicate gradesheets, carrying books for short-term checkout to the reserve desk, and ferrying sheaves of precious articles to Kinko’s course packet desk. We’ve used digital to make the same old things easier and quicker and cheaper, but not different. Not much. And certainly not fundamentally.

Meanwhile, in analog land, people are enamored of “flipping,” so that encountering “texts” and some version of professorial commentary (recorded? powerpointed? tourguided?) happens outside of class and class time can be used for discussion or group work. But it’s still an isolated sliver of the day, it’s still a flow of curated contacts with material and curated responses to same, and class time is still like the old ideal of a stone-built campus’s special, remote, and distinct place apart from the furor, craziness, and mongrelized attention that degrades, we like to think, ordinary daily life.

Wrong, wrong, wrong: these extraordinary young people are continuously immersed in multiple flows wherever they go. They navigate among them and mix them and learn from their “accidental” resonances as surely as John Cage was exhilarated by the inflow of all sounds in 4’33” and his other (relatively) more familiar-sounding works, or as street thespians or cable channel surfers or participants in 60s “happenings” or attendees at massive public events are all thrilled and thoughtful about the intensity of multifarious experience set free from artificial constraints of genre or occasion or protocol or what have you. If we really believe in lifetime learning, as opposed to the occasional vacation at Club Mod (chic university without walls) or the TED Talks’ random infusion method of feeling up to date, we should practice life-in-time learning right now in our classrooms.

Because: the digital world’s continuousness and ubiquitousness mean student skills in surfing and tagging and posting and mashing-up are all available right now, trainable right now, amenable to becoming their scholarly peripatetic philosopher selves right now. So consider these examples of reconceiving “class” time as something more akin to Studio than Lecture Hall:

• my students read John Richardson’s explanation of a “system dynamics” model for understanding how Sri Lanka went from poster child for development’s infinite potential to 35 years of violent revolutions. And come to class, and hear me talk about it, finally understand it, then forget it. Next year, they will read about it, see my engagingly witty blog about its sexiness, and then come in and work it like a woodworking shop full of tools: all the pieces identified in his detailed model become “heuristics” they must answer in terms of another of our case histories with which they’re more familiar, the U.S. (a toy sketch of what such a model looks like follows these examples). By the time they’ve worked through the model’s application, stage by stage over several hours, they “get it” viscerally, not abstractly. It has become part of the “society media” for recognizing the implications of this distortion, that change, or simply the total absence of Richardson’s key factors, feedback loops, and interacting factors. That’s very 21st century, as is the public sharing of what they produce—and far less insipid than what we hear from empty talking heads and soundbite bloggers rehashing hashtags.

• my students read Ashis Nandy’s The Intimate Enemy to learn how the British and Indian psyches coped, one way or another, with colonialism and its aftermath. He gives them many good thinking machines we can cite and exemplify in class. But what if, instead, they’d already read my blog to “get the main point,” and then came to class and worked together culling the thinking machines and using them on other cases, other problems, other phenomena? And shared publicly, and…

• my students read the marvelously iconoclastic work of James C. Scott, an anthropologist turned guerrilla warrior against conventional formulaic received standard versions of all things. “That’s interesting, I didn’t know that.” I don’t mind that response, but how does that change the way students think, work, live? On the other hand, when Scott shows them why development officers, trained for “seeing like a state,” therefore make a mess out of their aid projects, what if in Studio (where you make things instead of being classic or classified) they used his model on other Experts (Dis)Solving Problems? Case histories of how some can do no other than get everything wrong? Antidotes to acquiescence in the culture of failure and cynicism? Prescriptions for how to change absolutely everything? Like the nomadic cultures he studies in The Art of Not Being Governed? How would non-statist living translate to 21st century Americans? How would NOT being interpellated as “the kind of subjectivity linked to the state,” in Foucault’s rousing phrase, change the kind of being you thought you had and the kind of life you therefore lived?
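
For the curious, here is roughly what “working” such a model looks like in code: a toy sketch in Python, emphatically not Richardson’s actual model (the stocks, loops, and coefficients below are mine, invented for illustration). Two stocks are coupled by a reinforcing loop and a balancing loop, stepped through time so students can watch how changing one coefficient reshapes the whole history.

    # Toy system dynamics sketch (hypothetical stocks and coefficients,
    # NOT Richardson's actual model): two coupled stocks stepped through time.
    def simulate(years=35):
        grievance, prosperity = 0.1, 1.0   # stocks: political grievance, well-being
        history = []
        for year in range(years):
            conflict = 0.4 * grievance     # grievance feeds conflict...
            # ...and conflict erodes prosperity: a reinforcing loop.
            prosperity = max(prosperity + 0.05 * prosperity
                             - 0.3 * conflict * prosperity, 0.0)
            # Hardship breeds grievance; prosperity relieves it: a balancing loop.
            grievance = max(grievance + 0.3 * max(1 - prosperity, 0)
                            - 0.25 * prosperity * grievance, 0.0)
            history.append((year, round(grievance, 3), round(prosperity, 3)))
        return history

    for year, g, p in simulate()[::7]:
        print(f"year {year:2d}  grievance {g:.3f}  prosperity {p:.3f}")

Twiddle a coefficient, rerun, and you have the Studio version of “getting it viscerally.”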

You get the idea. Studio repurposes f2f time the way email and Course FAQs save it for something better than mere information. If we’re to teach people how to find and weave and blend and critique information, and rethink how to pose the questions around which data buzzes like clouds of airborne nanobots replacing the analog world of moths around a candle—then it’s time to have students immerse themselves 24/7 with their everywhere device in Blacksburg, Rabat, Istanbul, and the barely cooled down battlefields of Sri Lanka, let the talents of their social computing cross-fertilize their talents in thinking and learning, and forge from that conjunction a truly 21st century form of education.

McCloudy Day Thoughts

As I read Scott McCloud’s chapter, a welcome return to a book spirited away from me by a student who “forgot” to return it (alas), I found myself thinking of John Cayley’s work. He started public life as a dealer in antiquities, a translator of ancient Buddhist texts, and a participant in avant-garde poetic practice with literal art, and then became, whew!, a practitioner of “programmable” art. He has interesting essays about the difference between Code (addressed to a processor) and Text (addressed to a reader who’s implicitly asked to accept it as “natural language”), among many other topics.
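
A crude way to see the Code/Text distinction on one screen (my toy illustration, not Cayley’s): in the snippet below, some lines are addressed to the processor, which executes them, while the comments are addressed to you, the reader, as “natural language,” and the program treats its own text as material to be re-presented in time.

    # Text: this comment is addressed to a reader, who takes it as natural language.
    # Code: the lines below are addressed to a processor, which executes them.
    import time

    line = "the signifier, flickering, time-based"
    for i in range(1, len(line) + 1):
        print(line[:i])     # the string is re-presented letter by letter,
        time.sleep(0.05)    # a (very) crude gesture toward the programmed signifier

Nothing profound, but it marks the seam Cayley keeps probing: the same file carries both registers at once.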

Like McCloud, Cayley is interested in the experiential dimension of digital literature (not his term)—its presentation to a reader, its responses to a reader’s actions, and its temporal dimension in presentation all call for a more complex Rhetoric than classic poems on the page. Here’s a sample of a bulleted point from one of his ruminations (you’ll need to remember that “signifier” is the physical pointer to the “signified” that is its culturally determined content):

The emergent materiality of the signifier – flickering, time-based – creates a new relationship between media and content. Programming the signifier itself, as it were, brings transactive mediation to the scene of writing at the very moment of meaning creation. Mediation can no longer be characterised as subsidiary or peripheral; it becomes text rather than paratext.

Early hypertext writers were drunk on the way that hyperlinks made visible the call and response between the manifest text and its latent allusions to all kinds of cultural content, not to mention the rest of the work in hand, all affected by the attention span and proactive quotient in the reader’s participation. In going beyond such early enthusiasms, Cayley thinks about how, in the digital environment, mediation both connects and conditions contacts that fan out in many dimensions, in many registers of meaning, in many experiential dynamics as someone encounters a carefully wrought Code/Text.

What McCloud shows you by widening the x-axis of a comix frame, and what he dramatizes by the variations in lettering style and motion lines and (in other chapters) permutations among frame shapes on a page—well, it rhymes in a way with what people like Cayley think about when they program the Cave at Brown (a 3D immersive virtual environment: we have one!).

We’ve classically contrasted the experiential dimension of print literature and performances (theatre and music) and visual art and film/video: the Programmable Arts seem to be on the verge of an exponentiation from mashing up all of these at once. Instead of contrasting, as if different arts appealed to different sectors of the brain or to different sensibilities or to different faculties in an individual, programmability equips an artist to deploy all of these in ways that exploit their materiality for designed effects. Harder to repress the materiality of the canvas when you’re in it; harder to repress the physicality of language when you have to face it and work it; harder to valorize the conceptual over the aural or the haptic if feedback mechanisms engage all five of your senses. It all has a lovely potential to dash to bits the less imaginative aesthetic theories still treading the halls like ghosts of Artworlds Past.

Musing on a rainy day.

Professor Straight & the Computational

What I like about Turkle, when she’s not having anxiety attacks that we’re only Alone Together, is that she asks, at some deeper than usual level that reaches the zone of psychological agency, how people use computational capacities. I suspect that even with the material in that latest book of hers, one could show an alternative response to what has made her move closer to the pole of “Professor Straight” than of, say, “DJ Spooky.”

But back here in the material from The Second Self (ah, but only an embryonic “Professor Straight” would be counting, right? otherwise, always already at least two)—we have a lot of interesting ideas as she works on the kinds of (stupid, really) things the Professor Straights of the world are saying about kids and computers.

Which I organize as follows to make a little clearer the implications of what she’s finding when she asks: what do the users think they’re doing? It seems to me they are using technology on themselves or (at least virtually) on the world, and that the way of using she observes bifurcates between escapism and remixing, depending upon whether a person is feeling crushed by the contradictions in the “real virtuality” (Manuel Castells) of our simworld (what Professor Straight would call, simply and perhaps fatuously, “the real world”), or, instead, that person is empowered by the possibility of becoming one with the Technium (Kevin Kelly) and therefore using it in a natively digital way, as opposed to the kind of digital tourism we see in the crushed ones.

What am I talking about? That’s your question. Think of the compensatory use of digital gaming, that way of getting a sense of power or agency or potency from playing a game (you get to do online what you can’t do physically), a way of turning on your inner inadequacies and making up for them; Professor Straight worries that losing oneself in game world will be a crime against personal development. The externalization of this is what Professor Straight worries about concerning violence: that players will imitate in real life the violence they engage in online. Both anxieties may have some truth to them: measuring up to performance standards in our world can crush someone internally, just as finding some way to fend off the tyranny of inequality and social immobility can flip one out into violently redirected anger.

But Turkle, in this piece at least, resists Professor Straight on both counts, preferring instead to foreground those who take up technology in search of a way to remix the self or the world. Remixing programs and other species of IP (intellectual property, corporatists want us to call it) contests the bland closure and stupidly simplistic content of corporate IP-as-anesthesia and remakes cultural material and the life of the mind along far more creative lines than, well, inserting disk and pressing play. And our lawyer who uses absorption in gameplay to develop not the calmness of TM samadhi, but rather the profound intensity of focus that meditators call “Concentration”—here’s a case of remixing the self so that it is not “lost” in the scattered attention and exhausted creativity of moving pieces according to the rules of legal machinery. How very Bill Viola of him.

There you go. When you think of TV as simple spectacle imposing itself on couch potatoes, you miss out on the answers you get when you ask people what use they are making of their TV watching. Professor Straight made tenure writing about how we’re all in a stupor from watching The Dick Van Dyke Show; more likely, we’re in a stupor from the deadening routines of work-in-America (if you can still get it); now Professor Straight is making full Professor writing about how kids are losing their agency to escapist fantasy and the world of human interaction to violent exterminations of their world (and themselves). But, as Turkle notes, you get different answers when you ask people what use they’re making of digital technology.

Time for Professor Straight to retire on a buy-out of the deadwood.

Maybe “Recursive” isn’t a bad word after all

So I nearly wrote an entry last week wondering whether “recursive” worked relative to anything at all other than tricks on the order of animated GIFs. Because, really, does anything ever really coincide with itself like that? Whenever you get back to where you started, neither you nor the start is the same. A bit like Heraclitus (you can’t step in the same river twice, he said, supposedly).
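
In the programmer’s ordinary sense, recursion just means a function that calls itself, and even there the Heraclitus point sneaks in: each call returns to “the same” function carrying different state. A minimal sketch, mine and purely illustrative:

    def river(step, state=()):
        # Recurse back into "the same" function; the arguments never are the same.
        if step == 0:
            return state
        return river(step - 1, state + (step,))

    print(river(5))   # (5, 4, 3, 2, 1): same name, never the same river

You return to where you started only in name; the state has already moved on.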

But then, when we were asked to come up with a Metaphor of our Own (you’ve read that Forster novel, right?), mine was Dream Machine. It was an immedia machine, which means that there would be no media mediating between brainwave and Dream Machine (DM) simulation of the brilliant poem, scherzo, or watercolor in my head. So a species of immediatism, just in case you’re a fan of early Hakim Bey. Unlike the dynabook and the iPad, where you’re always putzing around with some plastic and glass and software algorithms to translate your conception into their midwifed birthing of a not-quite-your-conception.

Which means I recursed upon our earlier reading by Ted Nelson by coinciding with (ok, well, stealing) his book title. Though I was dreaming something different from his Xanadu of parallel textfaces. So while not recursion exactly (or is that properly?), perhaps it was a recursing of the perennial human tendency to project our frustrations onto a machine that would relieve them by fulfilling their thwarted desires.

Which would suggest that our fascination with what’s normally meant by "recursion" may be in fact an attempt to coincide with ourselves rather than drift forever in Heraclitean, Derridean self-difference. File under fool’s errands.

Truth is, I don’t want my kind of Dream Machine. It would be too much like a hypercompetitive parent or older brother who did all your ideas better than you could. Who needs that kind of grief, especially at my age?

Give me a pretty interface and a program that doesn’t crash and does maybe 60% of what I want it to. That’s about right for a midweek evening.

The Case of the Provincial Nostalgic

Nostalgia is a peculiar filter designed for emotional gratification. It works by reinventing the past as a dreamworld unconsciously designed to fend off the present or to generate some sense of security in one’s own sensorium. The nineteenth century, particularly in its later decades, was notorious for its invention of culture worlds that were awesomely wonderful alternatives to the ruthless industrialization that fretted the age’s gentler thinkers.

Provincial is another filter, also gratifying, that represses the Other, variously conceived in terms of region, era, ethnicity, or sensibility. It allows someone to reorient the world and all of humanity around the self, as the self…

When you put the two together, it gets either scary or annoying, depending upon what’s at stake. All of which is an elaborate sigh of exasperation with our editors’ commentary upon a piece by Kay & Goldberg presenting their dynabook. Their breathless page is the case of the Provincial Nostalgic in my title: they admire these two for all the wrong reasons in a way that is offensive to others at the time and demeaning, really, of Kay & Goldberg.

Having whacked the hornet’s nest with my baseball bat, let me explain the offense(s) taken:

  1. O, please: does anyone really think that in 1977, date of original publication, no one had ever thought of a handheld device that would do everything and be connected to the mother ship of data? That is the least commendable achievement of the piece. Scifi is littered with versions of the iPad going back for decades. Star Trek was on television by 1967. The next year, any American with a tomorrowland kind of pulse watched Dave and Frank use their 2001 Kubrick edition of the iPad. Stanislaw Lem, Isaac Asimov, and many others did the imagineering for which the editors so awkwardly laud Kay & Goldberg. Patents on tablet machines for pen input go back to 1888. So, really, to praise them for thinking of all the things a handheld computer could do is nonsensical. If they really did “conceive the computer from a radically different perspective,” the question might be: different from whom? Certainly not from an enormous host of people who could imagine a computer doing such things. Which insight takes us to point number two, now that a vast swath of the past has been restored to visibility.
  2. Different from… What we ought to laud Kay & Goldberg for is figuring out how to make a functioning prototype. That, really, is harder than just imagining the device itself. How do you make parts small enough, capable enough, and fast enough? There wasn’t much on the shelf that you could use, though the idea of miniaturization had been around a while: transistor radios were demonstrated in 1954 and sold in the billions in the 1960s and 70s. The noteworthy factor here is that Kay and Goldberg presided over a team that figured out the software/hardware designs and the marriage of the two. That takes some doing. Their designer-selves drove part of the process in terms of ease and speed of use, their engineer-selves drove the concern with capacities and procedures, their entrepreneur-selves drove imagining it from the point of view of mass users wanting a “multimodal” device and children being able to use it.

It takes engineers, designers, and entrepreneurs to make real things that real people can use, Apple Computer being a case in point. All three do imagineering, each contributing a key piece of the vision, and if you lack any one of the three, you don’t get there (see android tablet culture and Surface un-design for useful warnings). What’s remarkable is that the Xerox team had a synergy going among the three legs of the technology stool.

Not that they conceived the idea of a handheld, not after all the reruns of tricorders and communicators and universal translators. But that they made a functioning device out of stuff that could barely do it, and that they won the battle that Microsoft’s Courier team lost to the spreadsheet and floppy drive guys. No doubt the editors want to imagine Kay and Goldberg as versions of their artistic digitalizing creative selves. But, really, the Xerox team was knee-deep in wires, busted circuit boards, and usability testers going wild with Smalltalk.

Living History

1974. The Altair computer (named, they say, for a destination in that week’s Star Trek episode) appeared on the cover of Popular Electronics, and Paul Allen told his pal Bill Gates that microcomputers were on the way. Check.

Also, same year, Ted Nelson publishes his thoughts on user interface and, more generally, on how computers should work. In 2010, what he was looking for showed up on the market with a “ten-minute system,” a “prefabricated environment carefully tuned for easy use,” and an interface in which you touched stuff on the screen to move things along (though without Nelson’s beloved lightpen). And critics didn’t like it any better than they’d liked the iPhone or the iPod when they first came out. Check.

Interesting to see Nelson as a culture warrior against the Geeksquad’s ethos—the intuitive versus the infinitely tinkerable, the touchscreen versus the command line, the creative non-technical versus the programmer technophile… And the closed garden for creative play versus the bloatware infinity of buttons and palettes and options.

He has a cardboard mockup, we have OS X and iOS. He has Thinkertoys (p. 332), we have Scrivener, which offers all seven of his key traits for presenting “‘views’ of the complexities in many different forms” so that we can use the computer as a “decision/creativity system.” He has Parallel Textface™ and we have Wikis and Snapshot versioning comparisons. He has the Edit Rose™ and we have the Toolbar. We both have Undo and History. He thinks “the mechanisms at the computer level must be hidden to make [user clear-mindedness] work”; iOS hides everything but the doing (of painting, movie-editing, mind-mapping, writing, and various other forms of, as he calls it, “collateration”).

We both also have the issue of dreaming something worth dreaming with our liberating computers. Which is why I’m looking forward to our getting into those who are doing the dreaming in this seminar on “awakening the digital imagination.”

Two ways to miss the point….

It’s easy to lump together Vannevar Bush & Douglas Engelbart because they’re both, shall we say, on the geeky side imagining how to plug this into that and physically manage transfers and copies and recordings and the like. And they were writing essays at about the same time. But that agglomeration misses the point of the difference between them, and between both of them and the most interesting innovators around right now.

I could fall back on the last post and say that on a continuum that matters, Bush has a much lower ratio of digital to analog thinking than Engelbart shows us (almost entirely, the memex is a thing that a human being uses to do certain practical tasks more efficiently, a thoroughly analog storyline, whereas Engelbart borders upon delirium as he loses himself in the network of associations that explodes outward from his first simple notchings of notecards).

Or I could rely on mushy terms like system or network to explain the difference—except that both words are in themselves acritical, failing to distinguish clearly for all readers (and definitely for some)—all because of two limiters from conventional Western thought. That is, conventional thought is comfortable with both “system” and “network” as long as it can define them in its own way.

So here is an effort to distill something useful from long teaching of postmodern thought, new art in several mediums, and digital literary culture: what are the two easiest (or is that laziest?) ways to miss the point, the point at which one might think “like a native” of the digital rather than as an analog interloper taking a quick look around and inadvertently reconstructing the familiar rather than really seeing what’s emergent?

Pay no attention to the man behind the curtain (there’s no one there, and no curtain, and no not-curtain, and…)

You can think of network as lines connecting existing nodes. Which is not really a network at all, but more like a bunch of classically conceived entities communicating, more or less. It’s so common-sensical it’s almost hard to see why I’d bother talking about it. But, classically, we have thought of things as if they were separate, even autonomous, maybe even transcendental entities with a definable essence and just enough ineffability remaining to make them each seem unique. So, in other words, you have nodes that pre-exist the so-called network that links them, like Bush’s human being who sits down at his [sic] Memex. If, on the other hand, you’d crossed the great paradigm divide (by tapping into Dadaism rather than surrealism or, worse, canonical modernism, or by following the Oulipo, or by following postmodern art, or post-structural theory, or actually reading Nietzsche or Heraclitus or any one of a number of insurgent thinkers, or…), you might conceive of nodes as epiphenomenal effects of interconnections. That is, “nodes” are effects of connections, nanosecond by nanosecond products of the sum total of relations intersecting at any given point. If rates of change are slow enough, we mentally construct an entity out of a series, the way we take rapidly flashed stills as a smooth moving picture, as we used to call film.
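
If it helps to see that inversion concretely, here is a toy sketch (my own, purely illustrative): instead of storing state on nodes, keep only the connections, and let each “node” be recomputed, tick by tick, as nothing but the sum of whatever its current relations deliver.

    # Toy inversion of the usual graph: no stored node state, only connections.
    # A "node" is whatever its incoming relations add up to at this tick.
    from collections import defaultdict

    edges = {("a", "b"): 0.5, ("b", "c"): 1.5, ("c", "a"): -0.25}

    def nodes_at_tick(edges):
        node = defaultdict(float)
        for (src, dst), weight in edges.items():
            node[dst] += weight   # the "entity" is an effect of its connections
        return dict(node)

    for tick in range(3):
        print(tick, nodes_at_tick(edges))
        # The relations themselves are in flux; rewire them each tick.
        edges = {(dst, src): w * 0.9 for (src, dst), w in edges.items()}

Rewire the relations and the “entities” are different entities; nothing pre-exists the network.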

For ontologists on the far side of that great paradigm divide, “humans” are wetware whose flash memory operating systems are continuously updated by experiences within culture, “within” because there’s no “without.” And since those lines of interconnection are themselves in continuous flux, pulsing and expiring, rerouting and transforming, none of the stabilities inherent in the conventionally conceived network pertain. Be careful: you may be changed by a randomly accessed input—which is not a new phenomenon (books change people), but just accelerated in terms of the number and diversity of exposures to which we now have access.

So no doubt it gets worse, right?

Ok, so a truly digital network is not a library reading room in which you can walk in from the outside and sit down at a table to discover another object that changes you. You are, instead, always already connected and undergoing digital reconfigurations of body and consciousness. “You” can’t step in the same river twice (Heraclitus) because that so-called “you” that sticks that foot in the “second” time is not entirely the same.

But all this talk about “you” signals the second way to miss the point: if you try to begin with (whatever kind of) “you” and then add to it a network, you’re still using a logic of independent (autonomous? transcendental?) boxes to limit the scope of what you’re talking about, the way we classically scissor out of the world one shape to talk about as if it were a meaningful “it” once we did so: its meaning is more like a function of the systems of concepts and symbols and processes valorized by our little moment in cultural history. Or, to get concrete about it, the story isn’t about a man [sic] sitting down at Bush’s memex, it’s about a human-memex aggregate (system) working within larger aggregates—networks of networks, systems of systems.

The more we are aware of how these systems cycle through each other, the less provincial we are in thinking about a human who augments itself by outsourcing some memory or some correlation functions. When Engelbart is talking about “associative linking” and “a complex symbol structure that would grow as the work progressed,” he’s walking across the paradigm divide from Bush’s world to the one in which you cannot understand where this is going unless you no longer see singular units (human, memex, data card) but a field of interrelations in which linkages and minds and societies are all being produced as the work progresses bit by bit. It’s dazzling, yes, because it’s so vast and complicated, each little (not)thing a function of all the other (not)things; it’s also discomfiting, because of what happens to that old oddly comforting notion of the human as an entity with an inner nature that is its own unique and enduring identity: it’s gone, poof. In its place is whatever we are being as we surf the network and play the system and jiggle around the possibilities of rewiring and reconfiguring.

The more we do such things, the less we resemble human beings who functioned in a different network of networks (which, perhaps, we nostalgically reify as we want it to be rather than as it now appears to “us” as we look back upon eras in which all this seems to us easier to finesse “back then”). But, really, consciousness changes.

We see noteworthy differences between generations now that lead us to wonder if videogames are destroying our children. Analog logic, analog ontology: if you sense the issue at all, then, yes, those children are already (partly) destroyed—or reconfigured, though videogames are only one set of forms within which that’s taking place. Read Engelbart: “process structuring limiting symbol structuring, symbol structuring limiting concept structuring, and concept structuring limiting mental structuring…” It’s a 1962 usage to say “limiting”—something like configuring is more like it. But he’s recognizing that to enter intimately into the technologically amplified digital world of interrelations is to see that structuring—understanding that word as a cumulation of all interrelations pulsing away at a given moment—effects the changing nature of everything amidst those interrelations in a moment to moment way.

Which is why I like the work of “programmatological” literary figure John Cayley more than I like the work of those who do familiar things (writing analog things in analog logic) on a computer and call it “new.” Or Talan Memmott, whose “Lexia to Perplexia” is all about the network effecting interdependent reconfiguring of the network and its nodes-of-the-moment (or, as he calls them, the “cell….f”). They give us the rare gift of work from the other side of the great paradigm shift at a historical moment when we, like the digital pioneers we’re reading just now, are an awkward and perhaps ungainly mix of residual conventional culture and emergent “digital” ontology.

Zoom zoom.

Digital?

What strikes me repeatedly in these readings is a certain stumbling at the threshold of analog and digital, two very different Ways. It’s about more than how the numbers display on your clock. How to say the difference succinctly? I’m tempted to be quasi-poetic: the analog is what you’re used to, the digital is what amazes. We are, still, a culture of analog thinkers and we work in analog ways that are simple extensions of how our bodies do things.

Digital is something else altogether and, not coincidentally, it’s like Capitalism: everything and anything is everywhere and anywhere, instantaneously and continuously, the same way capital moves from short-term investment to short-term investment, relentlessly probing the system for quick profits that make up an astonishingly increasing percentage of “income.” Traders who are still working analog soon drive cabs. Goldman Sachs rules the world because its computers autoshift capital from investment to investment with nanosecond triggers that maximize their profit at the expense of both their rivals, who miss out, and the entire analog industry—that world where actual people go to work and make stuff. If you think in analog, you think of time as duration, hands of a clock sweeping around the world measured out in hashmarks. If you think in digital, there is no time, only the Now of systems cranking away within systems, instantaneously reconstituting your world continuously: Heraclitus gorged on amphetamine.
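
To caricature the point in code (a toy, obviously, nothing remotely like an actual trading system): the analog trader deliberates in duration; the algorithm is a bare trigger wired to a threshold, firing as fast as the ticks arrive, with no duration in it at all.

    # Caricature of threshold-triggered trading (illustrative only).
    import random

    position = 0.0
    for tick in range(10):
        move = random.gauss(0, 1)    # the market's next instant
        if abs(move) > 0.5:          # trigger: no deliberation, no duration
            position -= move         # lean against the move, instantly
        print(f"tick {tick}: move {move:+.2f}, position {position:+.2f}")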

Licklider keeps thinking in analog ways about his anticipated digital environment. He titillates himself thinking about speech control as cutting edge (“Siri, define ‘analog’”). What would be digital? Not interface, certainly: mouse, touchscreen, it’s all analog. See your digits’ smears on your iPad screen? That’s the grease of analog bumbling around on analog. Do you think Goldman Sachs is happy with the analog interfaces in its trading system, or that it’s zeroing them out as much as possible? Instead of interface (the “inter” between two faces), think flows between neurons and wireless. Not an analog body-self and an analog computer, but a high signal-to-noise environment of think/do flows. All the talk about screens and updating workstations is analog box-think. Move along folks, nothing happening here.

Another way of saying this is not to worry overly about Comcast imperiling the open internet: they’re an analog distribution system about to be leapfrogged by 5G and 6G wireless, the way Ma Bell style telephone systems never happened in the hinterlands beyond Globalization’s most venerable nodes. Another way of saying this is that you won’t catch up to that eleven-year-old with a controller in hand by learning which button does what in each game. S/he’s half-digital: s/he has no idea about it… it just flows.

The digital isn’t just a new technology added to our world to which we’ll adjust, like trading in the buggy for a Model T. The Graduate had the right word but the wrong definition of it: the future is plastic, but not the analog stuff we’re recycling. Think instead of the word’s art world sense, kinda like play-doh: as in, the digital malleability of the now and the here, the everything at once every way at once, the network that continuously reconstitutes its nodes as the system cranks away faster than light, ours anyway. Hello, node: what are you now? And now what? And…

You see the analog also in the editor’s impoverished view of the anthology’s opening Borges story: it’s read as a precursor of hypertext. Um, no. More like the holographic universe of realities that form as we perform them. The way light is particles if we look for it that way, but wave or field if we look at it those ways; or the way particles separated by distance appear, to the analog mind, to “know” what each other is up to.

Ultimately, that is, the digital requires field-theory thinking rather than analog’s particle-thinking. But that’s another tome. Wiener and Licklider didn’t quite make it to the promised land.