26 Feb 2013
Interesting reading and reflecting on Man-Computer Symbiosis… I named this blog “symbiosis” because of how deeply entwined my life is with technology. Day in and day out, moment to moment, I am in and out of technology – communicating, referencing, looking things up, sharing, interacting – and I feel one with it. Clearly my life is enhanced by it. The question is: “Is technology enhanced by my use of it?”
In some ways, the answer is clearly “yes.” Whatever algorithms and tracking mechanisms Google and Amazon employ, their interfaces learn from my patterns and create easier paths for my future interactions. In turn, they benefit all the more from my usage.
In other ways, the answer is as murky as a swamp. Licklider’s article made me contemplate the design of computers in a way that parallels human interactive behavior. We have evolved technology beyond his imagination and addressed the major areas of capacity concern: we’re past significant storage and organization challenges, even indexing and retrieval concerns. What we haven’t evolved past is the design – programming machines to be ultimately helpful to us. What’s crystal clear to me is that this area suffers from our inability to be clear in our own thinking – and thus our relationships and communication aren’t as effective as they could be, even without technology. We’re simply not yet ready or able to empower technology with these capabilities.
True man-computer symbiosis requires providing machines with a framework and guidance for critical thought – as the article states, referencing relevant past data and case information, weighing assumptions, and making decisions accordingly. We can’t just program for every imaginable scenario – life provides unimaginable scenarios, to which the human mind reacts and responds in the moment. Often we can’t adequately articulate the information we considered, how we weighted it, our decision criteria, or our specific desired outcomes. These problems run rampant in supervision, leadership, and organizational effectiveness. We can’t do it for each other, let alone teach a machine how to do it.
I also found it interesting that Licklider referenced the difference between human and technological instruction-making: as humans we think about and give instruction in terms of goals and outcomes, but technology requires a specified process or course of action. In my experience (again, primarily in organizational settings and with reference to leadership responsibilities), we so often fail to articulate desired outcomes or goals clearly enough for those who must determine a course of action to translate and understand them. So on one hand we may articulate goals (but not well), and on the other we may specify a course of action (with no clear sense of where it will get us, or whether that is the desired end point).
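Licklider’s goal-vs.-process distinction has a loose parallel in programming itself: declarative code states the desired outcome, while imperative code spells out every step. A toy Python sketch (the example and names are mine, not Licklider’s):

```python
# Goal-oriented ("what"): declare the desired outcome and let the
# machine work out the course of action.
names = ["Licklider", "Ada", "Turing"]
by_length = sorted(names, key=len)  # outcome: names ordered by length

# Process-oriented ("how"): specify every step of the course of action
# (here, a simple bubble sort) without ever naming the goal.
by_length_manual = list(names)
for i in range(len(by_length_manual)):
    for j in range(len(by_length_manual) - 1 - i):
        if len(by_length_manual[j]) > len(by_length_manual[j + 1]):
            by_length_manual[j], by_length_manual[j + 1] = (
                by_length_manual[j + 1],
                by_length_manual[j],
            )

# Same end point, two very different styles of instruction.
assert by_length == by_length_manual
```

The first version reads like a human stating a goal; the second is what the machine ultimately needs – and, as with leadership, the trouble starts when we hand over steps without ever stating the goal.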
My head is spinning. At least we haven’t yet progressed at a rate sufficient to throw Sarah Connor into fits and nightmares. We still have a lot to learn about how our own thinking works. To design and enable technology to best serve us – and then to benefit through further learning and evolution – we need to design it in a way that suits natural human behavior, not in a way that forces humans to adjust to fit.