I left a long comment over at Lauren’s blog last week on some excellent points she made re: Vannevar Bush. As she observes, his vision in many ways surpasses where we are today…though at least some pockets of people are actively trying to overcome obstacles of technology (inability to find/remember the information we need, for example) and society (copyright, ownership, etc.) to get there. As part of that comment I noted that I had recently:
stumbled on a piece from an author asserting that technology is making us smarter and more adept at leveraging information rather than duller and incapable of remembering things (http://bits.blogs.nytimes.com/2013/09/24/one-on-one-clive-thompson-smarter-than-you-think/?ref=technology) because it’s an extension of the evolutionary tactic of humans to process and store information socially. I’m not sure I had ever considered my use of Google in that way – as an extension of what would be a natural tendency to scaffold my memory from others and from manual devices of my own creation (like my monitor covered in post-it notes)…
I’m pleased that there appears to have been some foreshadowing in that comment because this week we encounter two very different, one might say radically divergent, perspectives on the symbiosis of men and machines. I ended up finding it quite Janusian, which I’ll discuss more at the end of the post, but, to preview what I mean for those who haven’t encountered the term before: we entered the zone of competing, but valid, perspectives on the future of men and machines.
Wiener, in a passionate but conversational way, implores us to realize that while we CAN provide for almost anything with machines, that does not mean that we therefore SHOULD. This certainly connects with our discussion in class last week about relativity and power and control. Wiener felt that the common person would inevitably be sacrificed to the vagaries of capitalism if attention were not paid to WHY a technology was created and what we truly want from it. He provides what might on the surface appear to be a trivial case:
I know an engineer who never thinks further than the construction of the gadget and never thinks of the question of the integration between the gadget and human beings in society
Really, one might ask why that is problematic. If I’m designing a standard upgrade to a system, for example, why would I pause to consider whether that upgrade fits into a larger narrative about machines and humanity? Our conversation about the iPad in class is a nice practical example–several people observed that the question of whether it’s really “better” to have access to these technologies is difficult to answer in general and impossible if you take the question outside of a developed world context. It would be meaningless to ask someone with no access to electricity and internet (let alone water, food, healthcare, etc.) whether the iPad is a helpful advancement for mankind.
The question, therefore, of WHO decides was repeatedly raised. Well, Steve Jobs, apparently – though from a practical perspective it’s unlikely that no consideration was given to the integration between the gadget and human beings: when corporations develop these products, it’s because they see a bottom line. I don’t mean that in a completely cynical way, but the fact that advertising and marketing consultants evaluate the potential for market penetration does not mean there has been a meaningful demonstration that a particular technological advancement is “Good” for humanity in the “what we’re getting is what we really want” sense Wiener mentioned. That’s a long way of saying that I think the responsibility rests with the collective to be critical consumers. This, in turn, is a challenge because, as Wiener eerily observes:
We shall have to do this unhampered by the creeping paralysis of secrecy which is engulfing our government, because secrecy simply means that we are unable to face situations as they really exist
Indeed. If the recent NSA scandals have shown us anything, it is that we probably cannot trust the government to be a neutral arbiter of these questions. Not because there’s deliberate harmful intent, but because the focus of our security agencies is, by their nature, to both protect us from and exploit advances in technology. As we have seen, that leads to ethically, legally, and constitutionally questionable behavior in some cases. Wiener’s rebellion against the unthinking technological endeavors of the military-industrial complex is easy to understand – in taking the logical next step it is easy to push aside or completely forget the moral and practical consequences of that step.
So we come to Licklider. Now, Wiener’s narrative does not at all resonate with Licklider, who sees progression towards total symbiosis as inherently and almost unquestionably desirable. He freely admits that in the future machines will probably surpass the collective abilities of men, but sees this as an exciting and worthwhile progression that will lead to achievements the likes of which we can hardly imagine. Many of the examples he gave of future symbiotic relationships with machines have, in some form, come to pass. His optimism was, I admit, rather refreshing after the dark future imagined by Wiener. Wiener’s cry of “there is no Santa Claus!” was echoing in my head when I stepped into Licklider’s hopeful and analytic approach to the inevitability of symbiosis. But in spite of my desire to completely get on board with him, the thing I can’t shake is Licklider’s beginning analogy of the fig tree and the worm. He opened his article by describing this truly symbiotic relationship:
The fig tree is pollinated only by the insect Blastophaga grossorun. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree…
This is haunting me. I think it might be fair to argue that Licklider wasn’t necessarily asserting that he hoped one day man and machine would get to the point that man would literally die without the machine (and vice versa, though that is a standard state of being for machines), but this was the analogy he chose. He chose true symbiosis. And I find that concept alarming, regardless of the relentless optimism of his piece. I cannot countenance the idea that I might someday die if detached from a device – these questions of who decides, and of how we know that the technology decisions we’re making are good for us, are so crucial and so far from satisfactorily answered that it is difficult for me to divorce Licklider’s perspective from them. Yet, I tend to be an optimist when it comes to technology. Like Licklider (and as you can see detailed in my last post), I see great potential in the pace of technological development we see today. My major hangup is that I want progress to be not only deliberate but thoughtful in a philosophical sense.
So, what does it mean to see both? I would assert that, in the end, this is really just a Janusian opportunity – I clearly see the advantages and share Licklider’s optimism that we are heading towards the most exciting and important era of mankind to date, yet I am very much afraid of how this might go if improperly managed and can see that dark, apocalyptic future of which Wiener warned. I’ve said it before elsewhere, but I do see this as an era where we NEED to look both ways at once – understanding only one dimension or facing only one future will not suffice. This might just be a more sophisticated way of saying that I’m a fence-sitter, but I’d prefer to call it Janusian thinking. This unquestionably puts my comment about Google’s search algorithm as a substitute for human scaffolding in a new light – it might be an exciting way to look at it… but I certainly can’t pretend that doesn’t have larger implications.