How will we build a Third System of education?

I have recently been reading about, as Mike Gancarz puts it in Linux and the Unix Philosophy, “The Three Systems of Man”. This is, to my understanding, a fairly well documented and often-observed concept in software design, possibly first referenced by Frederick Brooks in The Mythical Man-Month when he coined “the second system effect”. Gancarz seems to take the concept further, generalizing it to any system built by humans.

Man has the capacity to build only three systems. No matter how hard he may try, no matter how many hours, months, or years for which he may struggle, he eventually realizes that he is incapable of anything more. He simply cannot build a fourth. To believe otherwise is self-delusion.

The First System

Fueled by need, constricted by deadlines, a First System is born out of a creative spark. It’s quick, often dirty, but gets the job done well. Importantly, it inspires others with the possibilities it opens up. The “what if”s elicited by a First System lead to…

The Second System

Encouraged and inspired by the success of the First System, more people want to get on board, offer their own contributions, and add features they deem necessary. Committees are formed to organize and delegate. Everyone offers their expertise, and everyone believes they have expertise, even when they don’t. The Second System has a marketing team devoted to selling its many features to eagerly waiting customers, and to appeal to the widest possible customer base nearly any feature that is thought up is added. In reality, most users end up using only a small fraction of the available features of The Second System; the rest just get in the way. Despite enjoying commercial success, The Second System is usually the worst of the three. By trying to appease everyone (and, more often than not, by not really understanding anyone), the committees in charge have created a mediocre experience. The unnecessary features add so much complexity that bugs are many and fixes take a considerable amount of effort. After some time, some users (and developers) start to recognize The Second System for what it is: bloatware.

The Third System

The Third System is built by people who have been burned by the Second System

Eventually enough people grow frustrated by the inefficiencies and bloat of The Second System that they rebel against it. They set out to create a new system that contains the essential features and lessons learned from the First and Second Systems, but leaves out the cruft that accumulated in the Second System. The construction of a Third System comes about either as a result of observed need, or as an act of rebellion against the Second System. Third Systems challenge the status quo set by Second Systems, and as such there is a natural tendency for those invested in The Second System to criticize, distrust, and fear The Third System and those who advocate for it.

The Interesting History of Unix

Progression from First to Second to Third system always happens in that order, but sometimes a Third System can reset back to First, as is the case with Unix. While Gancarz argues that current commercial Unix is a Second System, the original Unix created by a handful of people at Bell Labs was a Third System. It grew out of the Multics project which was the Second System solution spun from the excitement of the Compatible Time-Sharing System (CTSS), arguably the first timesharing system ever deployed. Multics suffered so much from second-system syndrome that it collapsed under its own weight.

Linux is both a Third and a Second System: while it shares many properties of commercial Unix that are Second System-like, it is under active development by people who came aboard as rebels against Unix and who put every effort into eliminating the Second System cruft associated with its commercial cousin.

Is our current Educational Complex a Second System?

I see many signs of second-system effect in our current educational system. It is designed and controlled by committee, constructed to meet the needs of a large audience while failing to meet the individual needs of many (most?). Solutions to visible problems are also determined by committee, and patches to those solutions serve to cover up symptoms. Addressing the underlying causes would require asking some very difficult questions about the nature of the system itself, something that those invested in it are not rushing to do.

Building a Third System

What would a Linux-esque approach to education look like? What are the bits that we would like to keep? What are the ugliest pieces that should be discarded first? And how will we weave it all together into a functional, useful system?

Digital amplifier: the tweet heard ’round the world

Sometimes, in the fast-paced modern world we live in, it feels like we’re living in the future. But all it takes is the watchful eye of the Internet, and specifically its uncanny, sometimes disruptive tendency to amplify lurking social ills, to remind us we are still very much in the past.

Last week, PyCon nearly ended quietly, without causing much of a ruckus, as all good annual gatherings of open source software developers strive to do. The organizers of PyCon understand the importance of diversity in technology, a field currently dominated by white men. They have worked hard to create an environment that is open and welcoming to everyone, and in case there’s any confusion, they have a published code of conduct.

So when Adria Richards grew frustrated with two men making lewd jokes behind her at a closing talk, she snapped their picture and tweeted

Moments later, PyCon staff saw her tweet, responded, and escorted the two men into the hallway. The situation was resolved with minimal disruption. It would have all been finished, and we wouldn’t still be talking about it now, if it hadn’t been for the first inappropriate response to what had been, up until this point, a fairly minor ordeal.

The company the two men were working for, and representing at PyCon, made the decision to fire one of them. The company cited multiple contributing factors, not just the joke, but the timing was extremely poor on their part if they really didn’t want to connect the termination to the joke incident.

And then the Internet exploded.

Adria Richards got a man fired. A man who had three children to feed. The Internet was not pleased. And to show its displeasure, it sent Adria death threats, rape threats, and racial epithets, and suggested that she consider suicide. A group of hackers, some claiming to be Anonymous, initiated a series of DDoS attacks on her employer’s servers, demanding that they fire her in retribution.

And because SendGrid, the company employing Adria, had no spine, they gave in to the mob and publicly fired her. It was the easy thing to do, after all.

Justice served?

Bloggers the tech world over chimed in with their support or critique, many asking whether she should have posted the photo of the two men and how she should have handled the incident differently, in a more lady-like fashion. Many jumped on a post by Amanda Blum arguing that Richards had “acted out” like this on more than one occasion. Though Blum mentioned that she does not like Adria personally, and criticized her actions at PyCon, she did bring up the point that

Within 24 hours, Adria was being attacked with the vile words people use only when attacking women.

And this, I think, is the real issue. The bashful excuses from members of the tech community (both men and women) that “this is just how tech conferences are” and that “she should have a thicker skin”, along with the voices suggesting she shouldn’t have responded because the lewd comments were likely not directed at her, seem to miss the point completely.

But at least we’re talking about it. Soon after the event the organizers of PyCon put the Code of Conduct up on GitHub, a popular open source hosting service, and invited members of the community to collaborate on changes in light of recent events. The community responded by adding language to the policy that prohibits public shaming. This is not unreasonable, probably desirable, and consistent with an “innocent until proven guilty” mentality. But unless a clear, easy, and private channel for reporting incidents as quickly and efficiently as Twitter is provided, this could also be seen as a measure to silence others who may feel the need to speak out about poor conduct, but who for whatever reason (and there are many) do not feel comfortable addressing the individuals directly.

The issue is not limited to sex or race; it is a larger one. Folks who are empowered by the status quo, whether they’re conscious of their privilege or not, do not like the status quo challenged. Christie Koehler blogged about the incident from that perspective:

It’s not easy because the tactics available to those who oppose institutional oppression are limited and judged by the very institution that is oppressive.

Those who benefit from the status quo, whether they realize it or not, have a vested interest in maintaining that status quo. That means working to ensure that any threat to it is rendered ineffectual. The best way to do that is to discredit the person who generated the threat. If the threat is the reporting of a transgressive act that the dominant social class enjoys with impunity, then the reaction is to attack the person who reported it.

And when it comes down to it, the vast majority of the negative backlash against Richards and her company (and none that I’ve heard of towards PlayHaven, the company that actually fired the male developer and started the whole fiasco) comes down to defending the status quo with a passion. People will fight for their place of privilege. They will fight hard and they will fight dirty.

And the very sordid nature of their fight will continue to prove unequivocally why we need to keep challenging the status quo until we create a world that is welcoming to all.

More reading:
Why Asking What Adria Richards Could Have Done Differently Is the Wrong Question
Adria Richards Did Everything Exactly Right

We are the medium? (director’s cut)

A couple of weeks ago, while reading Laurel’s “The Six Elements and the Causal Relations Among Them”, I had a bit of a moment at

…the orthodox view of Aristotle’s definitions of spectacle and melody leaves out too much material. As scholars are wont to do, I will blame the vagaries of translation, figurative language, and mutations introduced by centuries of interpretation for this apparent lapse and proceed to advocate my own view.

As I went through that passage I thought about Aristotle starting the discussion over 2000 years prior, but rather than focusing on Aristotle as a conscious, active agent, my brain took an interesting twist. For a brief moment the message itself, in this case a conversation regarding the nature of drama, was a full-fledged organism. Living, moving, evolving. It depended on Aristotle for survival; without a medium, a message is… I will leave that as an exercise for the reader. Regardless, to the message, Aristotle was no different than the cells that make up our body are to us. Sure, I know that without the cells that make up my body I (whatever “I” is) wouldn’t exist, but I don’t particularly care WHICH cells are part of the structure that my “I” sits on; likewise, this message about drama was indifferent to the fact that Aristotle was Aristotle. In fact it probably didn’t even bat an ‘i’ when it hopped off of Aristotle onto its next host, morphing a bit in the process, and onward until at some point in its ongoing life Laurel happened to pick it up and it experienced yet another one of countless moments of evolution.

To the lifespan of an idea, at least the potential lifespan, a human life lasts a mere instant. Whether or not a message experiences a lifespan that long depends entirely on its survivability in the environment at the time. Yes, this sounds a lot like Richard Dawkins’ concept of a meme, and no doubt the thoughts about memes that I had bouncing around in my head played some part in triggering this shift in perception from the human carrier as the source/center/agent to the message itself as the agent.

And it was a humbling experience. To an immortal message, human lives blink in and out of existence continually. And just as we shed cells that are replaced by new ones, so too do new brains fill in the gaps to hold the message aloft as old ones die off. If we view the message as a collection of juggling balls, it is alive and well while the balls are in the air. Someone has to be there doing the juggling, but it doesn’t particularly matter to the message who that is, just as long as when the current entertainer reaches the end of his/her short time someone else is around to catch the balls before they fall. Viewed from above, then, the planet appears to be covered in a morphing, swelling sea of color that, upon closer inspection, is made up of countless individual juggling balls seeming to float around and interact with each other of their own accord.

And what if that’s all any one of us, as an individual is, just a medium for the message?

I suppose depending on your frame of mind that could feel like a depressing thought. To me it wasn’t and isn’t. In fact, I would go so far as to say I draw upon that metaphor to stay motivated and find meaning in my own life. Because if it is true that I am a medium, and the message is what matters, then I’m part of something bigger than I could ever grasp on my own. I have the potential to contribute to something that will have a lasting effect on the world, and while my individual life may be relatively short, it will not be without purpose.

P.S. This turned out to be way more meaning-of-lifey than I had intended.

P.P.S. 42

About Time: Idioms About Time

TL;DR:

In the comments below please post, in your native language, or a non-English language in which you are fluent:

  1. how you would ask someone what time it is, and the literal word-for-word translation into English
  2. how you would ask someone where you are and the literal word-for-word translation into English

I wonder if I should stop being surprised when topics I’ve discussed separately with separate people all start to relate. On Monday I talked about idioms in ECE2524 and made some comparisons between idioms in programming languages and idioms in spoken languages. As I thought about examples of idioms, I noticed there were quite a lot about time:

  • on time
  • about time
  • in time
  • next time

just to name a few (I’ve somewhat intentionally left out more complex examples like “a watched pot never boils”, “better late than never”, etc.). Today in vtclis13 we discussed McCloud’s “Time Frames”, a comic that explores the various ways time and motion are represented in comics. Inevitably we talked about the different ways of talking about and perceiving time, from the relativistic physical properties of the dimension, to our own personal perception of the passage of time, and how in both cases the rate of time can change based on the environment. Time is such a funny thing. We often talk about it as if we know what we’re talking about, and we take various metrics for granted: in the U.S., what is it about taking 16 trips around the sun that makes someone ready to drive a car? Two more orbits and we’re deemed ready to vote, and after a total of 21 orbits, after the Earth has carried us through about 19,740,000,000 kilometers relative to the sun, we are legally able to purchase alcohol.
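
For the curious, that 19,740,000,000 km figure matches a quick back-of-the-envelope calculation. Here is a sketch of it in Python, assuming a simple circular orbit of one astronomical unit, so treat it as a rough check rather than anything precise:

import math

# Rough check of the orbital-distance figure above, assuming a circular
# orbit of radius 1 AU (about 149.6 million km).
AU_KM = 149.6e6
orbit_km = 2 * math.pi * AU_KM   # roughly 940 million km per trip around the sun

for orbits in (16, 18, 21):
    print(f"{orbits} orbits: about {orbits * orbit_km:,.0f} km traveled relative to the sun")

Sixteen orbits to drive, eighteen to vote, and twenty-one, roughly 19.74 billion kilometers later, to buy a beer.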

But if Einstein’s forays into relativity have taught us anything, it is that nothing about time is as absolute as our intuition suggests. And so I became curious about the idioms we use to talk about time and how they differ from culture to culture, language to language. Dr. C put my thought into a question: “Are idioms about time especially diverse?” And so, through this little survey, I would like to explore that question by gathering some time idioms in the comments section; please refer back to the first paragraph for specific instructions!

About Time

Reading the “Time Frames” comic about depicting time in comics made me think about Brian Greene’s The Fabric of the Cosmos, in which he uses several metaphors and visualizations to help explain the nature of this thing we call time. We tend to think we know what “now” is, and think of it as a snapshot of the current state of the world. That model suffices in our day-to-day lives quite nicely, but it isn’t a very good model of the concept of time on a universe-sized scale. Einstein’s famous theory of relativity states that time and space are closely related, and that perception of both time and space is relative to the observer. The concept of “now” is also relative. Greene uses the metaphor of a loaf of bread, the long axis representing time, and a “slice” of the loaf representing an instant in time across a 2D universe. The angle at which the bread is sliced depends on an observer’s relative motion, with a maximum angle of 45 degrees corresponding to a maximum velocity of the speed of light. Two observers, Bob and Alice, at different relative velocities would have slices at different angles, and so their “now” slices would intersect at some line in space. In Bob’s “now” some events in Alice’s “now” haven’t happened yet (they are in Bob’s “future”), while others are in Bob’s “past”. Time is an elusive concept; just when we think we know what we’re talking about, we get hit with something like “my ‘now’ isn’t the same as your ‘now’”. It’s no wonder there are so many ways to depict its passage in the comic medium!
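
To make the loaf-of-bread picture a little more concrete, here is a toy calculation (my own sketch, using the standard Lorentz transformation rather than anything taken from Greene's book) showing that two events on Alice's "now" slice need not be on Bob's:

import math

C = 299_792_458.0   # speed of light, m/s

def dt_for_bob(dt_alice, dx_alice, v):
    """Time separation Bob measures between two events, given Alice's numbers
    and Bob's velocity v relative to Alice."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (dt_alice - v * dx_alice / C ** 2)

# Two events Alice considers simultaneous, one light-second apart in space,
# with Bob cruising past at half the speed of light:
print(dt_for_bob(dt_alice=0.0, dx_alice=C * 1.0, v=0.5 * C))   # about -0.58 seconds

For Bob, one of the two events sits in the past of the other, even though both are part of Alice's "now".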

On a slightly related tangent, the medium used can have some interesting effects on our perception of time and motion. In this video, recorded at 25 frames per second, a stream of falling water appears to freeze in time, or even flow backwards, when it interacts with sound waves at or near 25Hz. It’s not really an optical illusion, more of a media illusion.
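
If you want to convince yourself of how the 25 fps / 25 Hz trick works, a tiny sketch like this (the numbers are just illustrative) shows that the camera catches the stream at nearly the same phase of its oscillation in every frame:

frame_rate = 25.0   # frames per second
water_freq = 24.9   # Hz; set to exactly 25.0 to see the "frozen" case

for frame in range(6):
    t = frame / frame_rate
    phase = (water_freq * t) % 1.0   # fraction of one oscillation completed
    print(f"frame {frame}: phase = {phase:.3f}")

At exactly 25 Hz the phase is identical in every frame, so the water looks motionless; slightly off 25 Hz the phase drifts a little each frame and the stream appears to crawl forward or backward in slow motion.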

Humans in the loop

Today’s hot article in the local twitterverse is a New York Times piece called Algorithms Get a Human Hand in Steering Web. I discovered it via a tweet by @GardnerCampell, and also a beautiful retweet by @mzphyz:

Above all: Algorithms are human constructs, embodiments of our thought & will.

Which really sums up this entire post, so for the TL;DR crowd, you can stop reading right now!

The article mentions a number of examples of human-in-the-loop algorithms currently being employed on the internet, notably in Twitter’s search results and Google’s direct information blurbs (not sure what they call them, those little in-line sub-pages that show up for certain search terms, like a(ny) specific U.S. president, for example).

What I found interesting was that the tone of the article seemed to suggest that the tasks humans were doing as part of the human-algorithm hybrid system were somehow fundamentally unique to our own abilities, something that computers just could not do. I’m not sure if this was the intended tone, but either way, I found myself disagreeing.

Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them.

True, but I would argue that our own brains are “literal-minded” as well; there are just layers and layers of algorithms running on our network of neurons that give the impression of something else (this ties in nicely to a post by castlebravo discussing what, fundamentally, computing is). I think the underlying reasons we have humans in the loop are closely linked to the next sentence:

Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.

Not only is spoken language ambiguous, but we lack a solid understanding of reasoning, or how our brains work. And we, after all, are the ones programming the algorithms.

In the case of the twitter search example, it struck me that all the human operator was doing was something like this:

if search_term == 'Big Bird' and near(current_time, election_season):
    # 'near' and 'election_season' stand in for however the curator decides a
    # spike is tied to a current event
    context = 'politics'
else:
    context = 'Sesame Street'

which looks rather algorithmic when written out as one. Granted, this would be after applying our uniquely qualified abilities to interpret search spikes, right?

if significantly_greater_than(instantaneous_average_occurrence_of('Big Bird'),
                              all_time_average('Big Bird')):
    context = find_local_context('Big Bird')
else:
    context = 'Sesame Street'

Of course the find_local_context is a bit of a black box right now, and significantly_greater_than may seem a bit fuzzy, but in both cases you could imagine defining a detailed algorithm for each of those tasks… if you have a good understanding of the thought process a human would go through to solve the problem.
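
For what it's worth, here is one crude way you might start filling in the significantly_greater_than box. This is purely my own sketch, not how Twitter actually detects spikes:

def significantly_greater_than(recent_rate, all_time_rate, factor=10.0, floor=50):
    """A made-up spike test: the recent rate must beat the long-run average by a
    large multiple and also clear a minimum absolute count, so that a jump from
    one to twelve mentions per minute doesn't register as a trend."""
    return recent_rate > factor * all_time_rate and recent_rate > floor

# 'Big Bird' normally gets ~10 mentions per minute; during the debate, ~2500:
print(significantly_greater_than(2500, 10))   # True
print(significantly_greater_than(15, 10))     # False: probably just noise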

Ultimately, humans are only “good” at deducing context and nuance because of our years of accrued experience. We build a huge database of linked information and store it in the neural fabric of our minds. There isn’t really anything preventing us from giving current digital computers a similar ability, at least at a fundamental level. Theoretically, as advances in hardware approach the capabilities of an “ideal computer” (one that can simulate all other machines), and as our understanding of human psychology and neurology deepens, we could simulate a process very similar to the one that goes on in our brains when deducing context and nuance.

The current trend of adding humans into the loop to increase the user friendliness of online algorithms has more to do with our lack of understanding of human thought than with any technical limitations posed by computers.

Are we all ice skaters?

Gliding through life, blissfully unaware that there is an entirely different world just below the surface.

When we do break through unexpectedly, the shock is so great that it often kills us.

And so we learn to fear thin ice. It is dangerous. It leads to death (some would say the ultimate price).

When we do decide to tap into the world on the other side we carefully control our access using tools to drill a hole through the boundary.

We remain on the surface, in our own element. Comfortable.

We lower more tools through the chasm, to fish out the pieces of that underworld we are interested in, because we understand that some can help sustain our own life, on the surface.

And when we have extracted what we think we need, we leave the opening to seal up.

A distortion and blemish on our surface that skaters learn to avoid, because it can trip them.

And we even grow to resent those who broke the boundary, as it now creates a more complicated environment for us to navigate smoothly.

Are we sacrificing creativity for content?

I decided to become an engineer, before even knowing what “engineering” was, because of a comment my 4th grade art teacher made regarding an art project. I’m pretty sure she meant it as a compliment.

The concept of “<insert form of creative expression here> is <insert sensory-related word here> math” is nothing new. From the mathematics of music, to the use of perspective in visual art, there is no escaping the mathematical nature of the universe. All art, no matter the medium, can be thought of as offering a different view of our underlying reality. A different way of looking at the equations, a way of looking at math without even realizing it’s math.

Then why in the engineering curriculum is the emphasis all on the math? Sure, it’s important. Knowing the math can mean the difference between a bridge that collapses1 and one that is a functional art exhibit. Or the difference between a Mars Climate Orbiter that doesn’t orbit and a Mars rover that far exceeds its planned longevity. But it’s still just one view.

If you have ever tried applying the same layering techniques with watercolors that are commonly used with oil paints, or tried to write a formal cover letter in iambic tetrameter, you have firsthand experience that the choice of medium has a large impact on the styles and expressive techniques available to the artist. Likewise, the choice of programming language has a similar effect on the capabilities and limitations of the programmer.

see the code

And on the flip side, anyone who can write a formal cover letter, or who is intrigued by writing one in iambic tetrameter, should learn a programming language or two. It’s yet another form of artistic expression, one that can transform the metamedia of the computer into a rich, expressive statement, or produce an epic failure of both form and function.

Footnotes:

1 Though there is a beauty to the mathematics of this particular failure.

iBreakit, iFixit

This past weekend ended up being the weekend of repairs, as two lingering problems grew to the point where they could no longer be ignored:

  1. The drain pipe of my bathroom sink completely detached itself from the sink basin.
  2. The aluminum frame on the display of my laptop began peeling away from the LCD panel to such an extent that I was concerned continued use could result in cracking the front glass.

this can’t be good

I will spare you, dear reader, the gory details of the fix to the first problem (it involved a trip to the hardware store, some plumber’s putty and an old tooth brush) and instead focus on the latter.

As is usually the case with these things, my trusted laptop had long since left its comfortable status of “covered under warranty” when this issue began. While some googling revealed that I am not the only one to experience this phenomenon, it seemed I wasn’t going to get much loving care from Apple, and I was fairly certain they would have made some silly claim that they couldn’t do anything about the clearly mechanical problem because I was running Linux on my machine instead of OS X. (Full disclosure: they probably would have been justified in saying so in this case. One hypothesis for the cause of this problem is excessive heating of the upper left corner, which breaks down the glue holding the aluminum backing to the LCD panel. While a number of non-blasphemous OS X users clearly had the same problem, my case certainly isn’t helped by the fact that two of the things that don’t always work out of the box on a new Arch Linux install on a MBP are the fan control software and “sleep on lid close”. As a result, there have been a number of times my laptop has overheated after I pulled it out of my bag and discovered it had never gone to sleep when I put it in. Plus, I’ve definitely dropped the thing a number of times, as the dents and scratches indicate. Whoops.) That all being said (warning: tangent alert), I was told of another experience in which an Apple tech rep thought perhaps there was a virus after seeing a syslinux boot screen pop up1.

It would have cost $60 to have the nice folks at the campus bookstore take a look at it, not including any repair costs. The Apple-sanctioned “fix” for this is a full replacement of the display assembly (which seems silly since there’s really nothing wrong with the display), costing around $400-$600, depending on who you talk to (or apparently $1000 if you’re dealing with Australian dollars). Long story short2, I decided I didn’t have much to lose3, and there were some substantial costs to be saved if I attempted a DIY fix.

Now, let me be very clear: the fact that I happen to have a degree that says “Computer Systems Engineering” in the title has little to no bearing on the skill set and knowledge base required for this repair. Honestly (and those of you who are currently pursuing a CpE degree, please reassure the non-engineers that this is the truth). I say this because it is important that everyone know they are fully capable of making many of their own repairs to their various pieces of technology4. The topic of technological elitism came up last year in a GEDI course: there is concern that as we integrate more and more technology into our lives, we are becoming more and more dependent on those who understand how the technology works. My counterargument to that concern is that while there certainly is more to learn and more skills involved in the service and repair of a computer than, say, a pen and paper, there are many excellent resources freely available to anyone who takes the initiative to learn about them. One great resource that I used for this particular repair is ifixit.com, a wiki-based repair manual containing step-by-step guides for everything from replacing the door handle on a toaster oven to various repairs of your smartphone. Since I knew that if I had any chance of pulling this off I would need to lay the display flat, the guide I found most relevant to the endeavor at hand was Installing MacBook Pro 15″ Unibody Late 2008 and Early 2009 LCD.

Supplies needed5:

required items

  1. The computer to be repaired
  2. mini screwdriver set
  3. Donut, preferably coconut
  4. Coffee
  5. working computer that can access ifixit.com
  6. 5 minute epoxy
  7. T6 Torx screwdriver
  8. A reasonably heavy, flat object
  9. Stress relief

Step-by-step image gallery

  1. Follow the steps in the ifixit guide to remove the display assembly from the body of the laptop.
  2. Reset donut
  3. attempt to apply epoxy in gap between aluminum backing and display, apply pressure, wait for a couple hours
  4. reassemble laptop, power on and use
  5. determine that the epoxy is not holding, either due to age or to a bad application caused by limited access to the surface
  6. power down and re-disassemble laptop
  7. Using a heat gun to loosen the remaining adhesive around the display casing, gently pry off the aluminum backing completely
  8. This is a perfect opportunity to “pimp your mac” and add some sort of creative graphic behind the apple logo. All I could find was some engineering paper, which turned out somewhat ho-hum.
  9. attempt to remove old adhesive with acetone and/or mechanical force. give up.
  10. Working quickly (it is 5 minute epoxy, after all), mix up a fresh batch of epoxy and apply it intelligently around the edge of the display casing, choosing places that look least likely to cause problems if it runs over (e.g. avoid the iSight camera housing)
  11. Carefully position aluminum backing back on display casing, press firmly and wipe away excess epoxy.
  12. Apply gentle pressure for 5-10 minutes, let cure for another hour or so before reassembly.

    analog media is still relevant

  13. Re-assemble.
  14. success!

Footnotes:

1 It does make you wonder which dictionary Apple’s marketing department was using when they came up with the “Genius” title. A more accurate title, with 100% more alliteration, would have been “Apple Automaton”, since they do an excellent job when a problem is solvable by means of a pre-supplied checklist. Don’t get me wrong, I think Apple’s tech support is generally pretty good, as are their employees. And they are completely within their rights to refuse to offer any service or advice to customers who have opted out of the software/hardware-as-one package they provide. But it doesn’t (shouldn’t) take a genius to determine that a different bootloader from Apple’s default is not a virus.

2too late

3aside from possibly rendering my display useless

4 if you have ever replaced a tire on your car, but freak out at the idea of fixing your own computer, briefly consider the consequences of a botched repair job on both. Statistically you are much more likely to die in a horrible, fiery crash as the result of a bad tire replacement than a botched attempt at re-gluing your laptop screen together. Just something to think about.

5wordpress fail: I could not figure out how to tell wordpress to use letters to “number” this ordered list without changing the style sheet for my theme. It could be user error, but I prefer to blame wordpress.

Creative writing, technically

A number of recent conversations, combined with topics-of-interest in both ECE2524 and VTCLI, followed by a chance encounter with an unfamiliar (to me) blogger’s post, have all led me to believe I should write a bit about interface design and the various tools available to aid in a writing workflow. No matter our field, I’m willing to bet we all do some writing. Our writing workflow has undergone some changes since transitioning to the digital era; most notable for my interests is this quote from the aforementioned blog post:

…prior to the computerized era, writers produced a series of complete drafts on the way to publication, complete with erasures, annotations, and so on. These are archival gold, since they illuminate the creative process in a way that often reveals the hidden stories behind the books we care about.

The author then introduces a set of scripts a colleague wrote in response to a question about how to integrate version control into his writing process. The scripts are essentially a wrapper around git, a popular version control system used by software developers and originally designed to meet the needs of a massively distributed collaborative project, namely the Linux kernel.
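
The actual scripts are more elaborate, but a minimal sketch of the core idea might look like this (assuming git is already initialized in the manuscript directory; the function name and the example note are my own invention, not taken from those scripts):

import datetime
import subprocess

def snapshot(note=""):
    """Commit the current state of the plain-text manuscript to git, along with
    a little context about the moment the draft was saved."""
    stamp = datetime.datetime.now().isoformat(timespec="minutes")
    message = f"draft snapshot {stamp}" + (f" -- {note}" if note else "")
    subprocess.run(["git", "add", "--all"], check=True)
    # If nothing has changed since the last snapshot, git commit exits non-zero;
    # check=False lets the script quietly move on in that case.
    subprocess.run(["git", "commit", "-m", message], check=False)

snapshot(note="rainy afternoon, rewrote the opening of chapter two")

With snapshots like these piling up, ordinary tools such as git log and git diff become a record of the drafts, erasures, and annotations that the quote above laments losing.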

What’s really great about this (aside from the clear awesomeness of a sci-fi author collaborating with a techie blogger/podcaster to create a tool that is useful and usable by writers, using tools that are useful and usable by software developers) is that it brings into clear focus some thoughts I wanted to get out last semester about the benefits of writing in a plain text format.

This gets back to one of the recent conversations that also ties into all of this: I was talking to a friend of mine, another grad student in a STEM field, and we were discussing the unfortunate prevalence of MS Word for scientific papers. I don’t want to get into a long discussion of the demerits of MS Word in general, but suffice it to say, if you are interested in producing a professional quality paper, and enjoy the experience of shooting yourself in both feet and then running a marathon, then by all means, use MS Word. There are also a number of excuses of questionable validity that people use to defend their MS Word usage in scientific writing. The ones most often brought up involve the need to collaborate with other authors who are also using MS Word.

Now run that marathon backwards while juggling flaming torches.

I should point out I don’t want to just pick on MS Word here; the same goes for Apple’s Pages or any large software package that tries to be the solution to all your writing needs. I will henceforth refer to this problematic piece of software generically as a “Word Processor”, capitalized to reinforce the idea that I am indeed referring to a number of specific, widely used tools.

The conversation led to user interfaces, and the alleged intuitiveness of a modern Word Processor compared to a simple yet powerful text editor such as emacs or vim. Out of that, my friend discovered a post on a neuroscience blog about user-friendly user interfaces that did a nice job putting into writing thoughts that I had been trying to verbalize during our discussion. Namely, that the supposed intuitiveness of a Word Processor to “new” users is largely a matter of familiarity rather than any innate intuitiveness of the interface. Once you learn what the symbols mean and where the numerous menu items are that you need to access, then it all seems just dandy. Until they go and change the interface on you.

I could and probably should write an entire post on ALL the benefits of adopting a plain-text workflow, and of using one text editor that you know well for all your writing needs, from scientific papers to blogs, presentations, and emails (how many people ever stop to think why it is acceptable and normal to have to learn a new user interface for each different writing task, even though fundamentally the actual work is all the same?). The key benefit I want to highlight here is the one that made possible the collaborative effort I mentioned towards the top. By writing in a plain text format, you immediately have the ability to use the enormous wealth of tools that have been developed throughout the history of computing to work with plain text. If our earlier mentioned hero had been doing his writing in a Word Processor, it would have been nearly impossible for his friend to piece together a tool that lets him regain something that was lost with the transition away from a paper workflow, a tool that can “illuminate the creative process in a way that often reveals the hidden stories”, and that in many ways goes beyond what was possible or convenient with the paper workflow.

What tools do you use to track your writing process? Do they allow you to go back to any earlier revision, or to easily discover what recent blogs you had read, or what your mood and the weather were when you wrote a particular passage? Do you use a tool whose interface is a constant distraction, or one that is hardly noticeable and lets you focus on what actually matters: the words on the page? If not, then why?