Creative writing, technically

A number of recent conversations, combined with topics of interest in both ECE2524 and VTCLI, followed by a chance encounter with an unfamiliar (to me) blogger’s post, have all led me to believe I should write a bit about interface design and the various tools available to aid a writing workflow. No matter our field, I’m willing to bet we all do some writing. Our writing workflow has undergone some changes since transitioning to the digital era; most notable for my interests is this quote from the aforementioned blog post:

…prior to the computerized era, writers produced a series of complete drafts on the way to publication, complete with erasures, annotations, and so on. These are archival gold, since they illuminate the creative process in a way that often reveals the hidden stories behind the books we care about.

The author then introduces a set of scripts a colleague wrote in response to a question on how to integrate version control into his writing process. The scripts are essentially a wrapper around git, a popular version control system used by software developers and originally designed to meet the needs of a massively distributed collaborative project, namely the Linux kernel.

What’s really great about this (aside from the clear awesomeness of a sci-fi author collaborating with a techie blogger/podcaster to create a tool that is useful and usable by writers, using tools that are useful and usable by software developers) is that it brings into clear focus some thoughts I wanted to get out last semester about the benefits of writing in a plain text format.

This gets back to one of the recent conversations that also ties into all of this: I was talking to a friend of mine, another grad student in a STEM field, and we were discussing the unfortunate prevalence of MS Word for scientific papers. I don’t want to get into a long discussion of the demerits of MS Word in general, but suffice it to say, if you are interested in producing a professional quality paper, and enjoy the experience of shooting yourself in both feet and then running a marathon, then by all means, use MS Word. There are also a number of excuses of questionable validity that people use to defend their MS Word usage in scientific writing. The ones most often brought up involve the need to collaborate with other authors who are also using MS Word.

Now run that marathon backwards while juggling flaming torches.

I should point out that I don’t want to just pick on MS Word here; the same goes for Apple’s Pages or any large software package that tries to be the solution to all your writing needs. I will henceforth refer to this problematic piece of software generically as a “Word Processor”, capitalized to reinforce the idea that I am indeed referring to a number of specific, widely used tools.

The conversation led to user interfaces, and the alleged intuitiveness of a modern Word Processor compared to a simple yet powerful text editor such as emacs or vim. Out of that, my friend discovered a post on a neuroscience blog about user-friendly user interfaces that did a nice job putting into writing thoughts I had been trying to verbalize during our discussion: namely, that the supposed intuitiveness of a Word Processor to “new” users is largely a factor of familiarity rather than any innate intuitiveness of the interface. Once you learn what the symbols mean and where the numerous menu items you need are, it all seems just dandy. Until they go and change the interface on you.

I could, and probably should, write an entire post on ALL the benefits of adopting a plain-text workflow, and of using one text editor that you know well for all your writing needs, from scientific papers to blogs, presentations and emails (how many people ever stop to think about why it is acceptable and normal to have to learn a new user interface for each different writing task, even though fundamentally the actual work is all the same?). The key benefit I want to highlight here is the one that made possible the collaborative effort I mentioned towards the top. By writing in a plain text format, you immediately have the ability to use the enormous wealth of tools that have been developed throughout the history of computing to work with plain text. If our earlier-mentioned hero had been doing his writing in a Word Processor, it would have been nearly impossible for his friend to piece together a tool that allows him to regain something lost in the transition away from a paper workflow: a tool that can “illuminate the creative process in a way that often reveals the hidden stories”, and in many ways goes beyond what was possible or convenient with paper.
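To make that concrete, here is a rough sketch of what tracking a draft with plain git looks like. The file names and commit messages are invented, and the wrapper scripts mentioned above presumably automate steps much like these:

```shell
# A hypothetical manuscript tracked with stock git.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Writer" && git config user.email "writer@example.com"

echo "It was a dark and stormy night." > chapter1.txt
git add chapter1.txt
git commit -qm "first complete draft of chapter 1"

echo "It was a bright and cloudless morning." > chapter1.txt
git commit -qam "rewrite the opening; the mood was all wrong"

git log --oneline      # every draft, with the story behind each one
git diff HEAD~1 HEAD   # the exact erasures and annotations between drafts
```

Every draft, erasure and annotation is preserved, which is exactly the archival gold the quote above is mourning.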

What tools do you use to track your writing process? Do they allow you to go back to any earlier revision, or to easily discover what blogs you had recently read, what your mood was, and what the weather was like when you wrote a particular passage? Do you use a tool with an interface that is a constant distraction, or one that is hardly noticeable and lets you focus on what actually matters: the words on the page? If not, then why?

I am a Selfish Git: A bit on my teaching philosophy

A common observation I hear from people who have taken my class is that there is less structure in the assignments than they are used to, and oftentimes less than they would like. A consequence of this is that participants do a lot of searching the web for tidbits on syntax and program idioms for the language du jour, a process that can take time given the wealth of information returned by a simple Google search. I could track down some research that shows the benefit of this “look it up yourself” approach, and it would all be valid, and it is one of the reasons I structure assignments the way I do, but there is another reason. A more selfish reason.

Throughout the term I’ll give a series of assignments. Details are tweaked each semester, but the general outline is something like:

  • read in lines of numbers, one per line, do something with them and write out a result number.
  • read in lines of structured data, do something with them, write out lines of structured data
  • spawn a child process, or two, connect them with a pipe (this year I will probably integrate the “read in lines” idiom into this assignment since I like it so much)
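The first idiom on that list is small enough to sketch as a shell pipeline; the “do something” here is summing, which is just one plausible choice:

```shell
# Read in lines of numbers, one per line, do something (sum), write out a result.
printf '1\n2\n3\n4\n' | awk '{ total += $1 } END { print total }'   # prints 10
```

The pipe is also the third idiom in miniature: the shell spawns two child processes and connects them for you.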

I’ve done each of these myself, of course, and tweaked my own solutions from year to year, and have found a structure for each that I think works well, is easy to read, and is as simple as possible. Oftentimes my solutions use fewer lines of code than some of the solutions I receive, which admittedly makes my estimates of how long a particular assignment will take inaccurate. I know some of the assignments end up taking a lot longer than I anticipate for some people, and this can be extremely frustrating, especially since I know everyone’s time is a precious commodity that must be partitioned across other classes and personal time too (you are making time for play, aren’t you?).

I could provide more details in the write-ups. I could say “I tried algorithm X a number of ways: A, B and C, and settled on B because P, Q and R”. It would save those completing the assignments time and it would save me time, because on average the results I’d get back for grading would take up fewer lines of code and be more familiar to me. And that is why I don’t.

If I wrote in the assignment “for part A, use method B in conjunction with idiom X and you can complete this part in 3 lines”, then I can guarantee you that around 99% of the 60 assignments I received back would use method B in conjunction with idiom X in only 3 lines of code. It would be much easier to evaluate: I’d be familiar with the most common errors made when using method B in conjunction with idiom X and would have made spotting them quickly a reflexive response.

But I wouldn’t learn a thing.

Let me tell you a secret. Sure, I enjoy seeing others learn and explore new ideas and get excited when they discover they can write something in 10 lines in Python that took them 30 in C. I really do. But that’s not the only reason I teach. I teach because I learn a tremendous amount from the process myself. In fact, all that tweaking I said I’ve done to my solutions? That was done in response to reviewing a diverse (sometimes very diverse) set of solutions to the same problem. Oftentimes I’ll get a solution written in a way I would never have used solving the problem myself, and my first reaction is something like “why does this even work?” And then I’ll look at it a little closer (oftentimes doing a fair amount of googling myself to find other similar examples) until I understand the implementation and form some opinion about it. There are plenty of times that I’ll be handed a solution that I think is cleaner, more elegant and simpler than my own, and so I’ll incorporate what I learned into my future solutions (and, let’s not forget, back into my own work as well, a topic for another post). And I’ll learn something new. And that makes me happy.

I really like learning new things (thank goodness for that, given how long I’ve been in school!), and I have learned so much over the past couple years that I’ve been teaching. Possibly more than what I’ve learned in all the classes I’ve taken during my graduate career (different things for sure, which makes it difficult to compare amount, but still, you get the idea).

To be sure, there is a balance, and part of my own learning process has been finding the sweet spot between unstructured free-style assignments (“Write a program that does X with input B. Ready, go!”) and an enumerated list of steps that gently guides a traveler from an empty text file to a working program that meets all the specs. I think I’ve been zeroing in on that balance, and the feedback I get from blogs, as well as from the assignments themselves, is really helpful.

So keep writing, and keep a healthy dose of skepticism regarding my philosophy. And ask questions!

A Comment on “A comment on commenting”

In his post A comment on commenting, leon.pham commented on the annoyance of remembering different commenting syntax in different languages. It’s true, it is a lot to keep track of. Luckily, if you use a good text editor, such as emacs or vim, you can offload the task of remembering which syntax to use to the editor itself. For instance, emacs has two commands to aid in creating comments: one to block off a highlighted region in comments, and another to add an end-of-line comment. Once you learn the command for each (adding an end-of-line comment defaults to M-; in emacs, where M is the “meta” key, or “Alt” on most keyboards, but of course you could map it to anything you want), that’s it. The editor is generally smart enough to know what language you are currently writing in (and of course you can override it when you need to), and so the universal “add a comment” command that you learn once will always add a comment in the proper syntax for the language you are currently editing! Just another motivation to learn one editor and learn it well!

I will leave it as an exercise for the vim-using reader to post information on the equivalent command in vim!

Blasphemy?: DRMed Games on Linux

The interwebz have been all atwitter the past month or so with Valve’s announcement of the porting of their Steam gaming service to the
GNU/Linux platform. Many Linux users were thrilled about the
announcement and saw it as a sign that Linux was breaking out of the
small niche culture of hackers to more mainstream folk who just want
to turn their computer on and play a game. To be fair, Linux is not
without a large number of free (both as in beer and as in speech)
games already, but the announcement of a major (are they? I actually
only heard of Valve and Steam because of the Linux announcement)
gaming company moving to the platform was seen by some as legitimizing
the OS to the masses. It certainly gives everyone something to talk
about.

I consider myself more of a pragmatist when it comes to the
philosophical debate surrounding free software (for those unfamiliar,
the debate mostly deals with libre software. English has many
deficiencies, one of which is the multiple meanings of the word
“free”. In general, free software supporters do support the idea of
paying for software and believe that people should be able to make
money off of the software they write). I think free software is a
great ideal to strive for, and
certainly for mission critical software I believe it is important to
have the freedom to view and modify the source code. As I brought up
in vtcli earlier this semester, as an example it is important to have
the freedom to confirm that the incognito mode of your web browser
really is doing what it says it is and not storing or sharing your
browsing information (as an aside to that, I erroneously claimed that
Chrome was open source; it is not, though it theoretically uses the
same code-base as Chromium, which is open source, and happens to be
the browser I use both when in Linux and OS X. I highly encourage any
users of Chrome to switch to Chromium for the open sourced goodness it
provides, including the ability to confirm that incognito mode really
is incognito). That being said, if there’s a great game I like I am
not terribly concerned with not being able to look at or distribute
the source code, though I certainly would encourage game developers to
release their code under one of the many open source licenses.

It is interesting to note that free software evangelist Richard
Stallman himself isn’t ALL doom and gloom about the news. Though he
certainly isn’t thrilled, and encourages people to instead try out any
of the free games that are available, he does see the move as a possible
motivator for some people to ditch their non-free OSes completely if
gaming had been the only thing holding them back.

However, if you’re going to use these games, you’re better off using them on GNU/Linux rather than on Microsoft Windows. At least you avoid the harm to your freedom that Windows would do. – Richard Stallman

I installed Steam on my Arch Linux install last week and so far have
tried out Bastion, Splice and The World of Goo. All work very well
and have been fun (I had played World of Goo before both on OS X and
Android, it is fun on any platform!). Officially, Arch Linux isn’t
supported but after adding a couple of the libraries and font packages
mentioned on the wiki, everything worked like a charm. One downside
that Stallman failed to mention in his response was the fact that it
is much easier for me to spend money on games now that I don’t need to
switch over to OS X to run them.

Git Games and Meta Moments

I had a bit of a meta moment while swimming today. I have a lot of
good moments while swimming, probably because it’s a chance for my
mind to wander. That’s probably a good argument to go more often than
I did this past week (1 out of 6 possible practice days!). But I
digress.

Yesterday I was introduced to a game of sorts to help learn some
concepts used by git. For those of you who don’t know, git is a
version control system that has gained quite a bit of popularity
over the past few years, especially in the open source community. I
had been using it myself for my own projects, but mainly at a very
simplistic level.

At one level, a version control system (VCS), of which git is
one of many, is a tool to facilitate documenting the changes
of… well, a document. Historically these systems were developed by
software designers both to document changes and to provide an easy
path to revert to older versions of source code. Later, similar
concepts were implemented in modern word processors, though with
limited scope and power: essentially, the traditional method of
tracking edits from the pen-and-paper days was ported over to the
electronic medium without much change.

One thing that became much clearer to me after trying out the git
game was that while providing logical “snapshots” of a project that
can be used as a return point if something goes astray in the future,
git is creating a history of the project, a history that tells a
story. But unlike other histories you may be familiar with, the
history generated by git can be rewritten to change the past.

What had eluded me up until this point was what motivation one might
have to rewrite history. I figured, you make changes, commit them to
the project, those changes get recorded, what more would you need?
Well, it turns out that with the ability to rewrite history, git makes
it incredibly easy to do certain types of edits on your data and
allows an author to use git more as a tool for trying out new,
possibly risky ideas, or taking off on a tangent, while always
providing a clear path back to a ground point.
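A minimal sketch of the simplest such rewrite, amending the most recent commit (the repository and messages here are made up):

```shell
# Scratch repository for demonstration.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Author" && git config user.email "author@example.com"

echo "a risky new idea" > tangent.txt
git add tangent.txt
git commit -qm "frist stab at the tangent"          # oops, typo in the message

git commit --amend -qm "first stab at the tangent"  # rewrite the last commit
git log --oneline    # one commit, as if the typo never happened
```

For rewriting anything older than the last commit, `git rebase -i` lets you reorder, squash and reword commits, which is part of what makes risky tangents cheap: the detour can be tidied up before anyone else sees it.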

The details of what these types of edits are are important, but after
I began writing them up I realized I was losing sight of my original
reason for writing this post! Luckily, I have been using git to track
changes to this document, and created a branch for each of the
examples I thought would be useful. I’m going to leave them out of
the final document for now, but they exist in my history, and since I
will post this project to github, you are free to take a look!

What I thought about while swimming, after the git game helped me
understand why rewriting history could be so useful, and how the
history itself could be used, was that since I’m using git to manage
the files for ECE2524, I could also use it guide future semesters of
the course. Every time I add a new set of lecture notes, or add a new
assignment, I make a commit to a git repo containing all the files
I’ve used so far. That is also recording the order in which topics
are introduced to the class, so I’m generating an outline for the
semester just by nature of using git for my regular garden-variety
version control.
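A sketch of that idea, with invented file names: each commit records a milestone, and the log read oldest-first is the outline:

```shell
# Scratch course repository for demonstration.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Instructor" && git config user.email "instructor@example.com"

touch lecture01-intro.md && git add . && git commit -qm "lecture: intro to the shell"
touch hw1-read-lines.md  && git add . && git commit -qm "assignment: read in lines of numbers"
touch lecture02-pipes.md && git add . && git commit -qm "lecture: pipes and child processes"

git log --oneline --reverse   # oldest first: a ready-made semester outline
```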

But I had an hour and a half to occupy my brain while I swam back and
forth, so the wheels kept turning. We use git for class, as those of
you in it know, because it is an important tool for software
development and happens to be a particularly Unix-y tool to boot. The
Unix-i-ness of git is something I will leave for discussion in class
tomorrow (oh, the suspense!). We use git, but it is a complicated
tool to learn, even though what it is doing is quite simple once you
grok it, and grokking it doesn’t always happen quickly, and never as
quickly as you would like.

But the information tracking ideas from git can be related to the
process we go through in class, which can also be related to the
discussion on working memory vs. long-term memory we had in vtcli. The
process of learning new things involves some experimentation and a lot
of data filtering. We have a lot of information available to us, the
culmination of which can be thought of as the contents of our “working
directory” in git terms. As we individually work through the
information and inspect it through our own lens we commit pieces of it
to our memory, our repository. Though we’re not actively doing it,
there is a log associated with this process. It is not as precise as
something stored on a computer, of course, but looking back on the
past few days we can recall things like “concept X made a lot more
sense to me after I understood hypothesis Y, which became clear after
working through exercise Z.”

What if we were more conscious of this process in class, and even made
an effort to map it more directly to the concept of using git? For
instance, one versioning control concept we’ll start to explore
tomorrow is branching and merging.

A branch can be thought of as a temporary deviation away from the main
story line. In fact, in my first paragraph I went off on a bit of a
tangent about my tendency to let my mind wander while swimming. That
could be thought of as a branch away from the main topic, which (I
promise) is about using git to map the journey we take to learn to use
git. In fact, I switched to a new branch in git when I began writing
those sentences, and then merged them back into the main conversation
when I was done.

What if the class were split up into groups, and each group worked on
one aspect of learning to use git’s branch and merge functionality.
For instance, group 1 might play the git game, while group 2 might
read about how git represents and references data and what is going on
under the hood. At that point, the collective commit knowledge of the
class will have split into two branches. One branch with more of a
pragmatic grasp of “this is how I do a branch and merge” and another
branch with a better understanding of “this is how a branch and a
merge is implemented by git”. Then, the following week, both groups
would come back together and share what each learned. The two groups
will have just “merged” their knowledge and everyone should have a
better understanding of how to conduct a branch and a merge, and also
what is going on with the underlying data structure when they do one.
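That hypothetical split and re-join could be sketched with stock git like this (branch names match the story; the files and notes are invented):

```shell
# Scratch repository standing in for the class's shared knowledge.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Class" && git config user.email "class@example.com"

echo "what we know about git so far" > notes.txt
git add notes.txt && git commit -qm "shared starting point"
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version

git checkout -qb group1                 # group 1 plays the git game
echo "branching feels like save points" > game-notes.txt
git add game-notes.txt && git commit -qm "group1: pragmatic branch/merge practice"

git checkout -q "$main"
git checkout -qb group2                 # group 2 reads about the internals
echo "a branch is just a pointer to a commit" > internals-notes.txt
git add internals-notes.txt && git commit -qm "group2: how branches are implemented"

git checkout -q "$main"
git merge -q group1                     # fast-forward: no divergence yet
git merge -q -m "merge: the two groups share what they learned" group2
git log --oneline --graph               # the split and re-join, drawn as history
```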

Oh, and by the way, when writing that last paragraph I created two new
branches: one named “group1”, containing the description of what the
hypothetical group 1 would do, and another called “group2”, which
contained the sentence describing that group’s task. Then I merged the
two back into the master branch, reformatted the paragraph and added a
summary. Check out the history on github!

So this whole process got me thinking. Does thinking about meta
thoughts make it easier or more likely to think about meta thoughts in
the future? And likewise, does it make it easier to draw comparisons
between seemingly unrelated processes, such as learning new ideas, and
software development, when you have a process and a vocabulary to
describe the process of each? I am a strange loop.

Does awareness of our limitations aid in overcoming them?

A couple of thoughts have been bouncing around in my head while reading. First, while reading As We May Think by Bush, but repeatedly with other sources, I was reminded of a thought I often have when reading science fiction written in the 50s, around the same time Vannevar Bush wrote As We May Think. While on many levels the predictions of the future turned out to be quite accurate, there are notable exceptions that jump out at me while reading. A really good example that illustrates my point is Isaac Asimov’s Foundation series. The series is somewhat unique in that it covers a huge expanse of time in the fictional world, over 20,000 years if all the short stories and novels written by other authors after Asimov’s death are taken into account, and was written over several decades in standard non-fictional Earth time: the four stories that made up the first published book in the series were written between 1942 and 1944. Asimov thought he was done with the series after writing two stories in 1948 and 1949 and went on to do other things for 30 years. After much continued pressure from fans and friends he published the 6th book in the series in 1982 and the 7th in 1986.

Three things struck me while reading the first part of the series, written in the 40s and 50s:

  • It was generally assumed that nuclear power was the energy of the future. The logical extrapolation was nuclear-powered wrist-watches (ok, actually, I did read a compelling article fairly recently revisiting micro-atomic generators using minuscule amounts of radioactive materials to agitate a piezoelectric element to produce electricity, so maybe this wasn’t so far off the mark)
  • While we would have space ships capable of faster-than-light travel (hyperspace!), the calculations to perform jumps and ensure that the trajectory didn’t travel too near the gravitational effects of a star were done by a human, by hand. Particularly long jumps took the better part of a day to calculate and verify before it was deemed safe to tell the ship to execute the maneuver which itself would only take a fraction of a second.
  • There were no women whatsoever in any type of leadership role. We could say the same of ethnic minorities, non-heterosexual and non-cisgendered people as well, but we will give Asimov the benefit of the doubt and acknowledge that the U.S. was (at least visibly) much less diverse than it is today. But surely he knew about the existence of women.

These are little things you get used to when reading science fiction of the time. I think perhaps most interesting is that while it is common to extrapolate technology into the future with reasonable accuracy, the social structures that will exist 10,000 years from now are remarkably similar to those of the current time, if science fiction authors have anything to say about it.

As I mentioned, the 6th book, Foundation’s Edge, was published in 1982. Within the first page or so it was revealed without fanfare that the mayor of Terminus, politically the (quasi) central planet of The Foundation (despite it being on the outskirts of the colonized worlds), is currently a woman. Also, due to much research and development the latest spaceships have a new feature: hyperjumps are calculated in a matter of seconds by on-board computers. Also the old nuclear technology has been replaced by state-of-the-art zero-point-energy extraction (if I recall correctly, it’s been a while since I read the books!) providing a nearly inexhaustible energy source to power your jaunts around the universe.

The changes, while artfully worked into the narrative and coherently worked into the fictional universe that had first been described over 30 years prior, still jumped out at the casual reader. I bring this up by no means to diminish Asimov’s work, or him personally (I’m a huge fan, having read and enjoyed just about every book he’s written at this point), but rather to suggest that we as a species have some fundamental limitations in regards to predicting the future. We view the future through a lens designed by history and crafted in the present. While it is all too natural for us to extrapolate existing technology and social dynamics arbitrarily far into the future, and while that leads to some really fascinating scenarios, making significant conceptual leaps (such as the one Ada Lovelace is attributed with making) is something much more difficult and happens much less frequently.

What I wonder, though, is after a long history of learning from our shortsightedness in some instances (and acknowledging our foresightedness in others), can we overcome this limitation? Are we now, compared to the 1950s, better able to make conceptual leaps and imagine technology and social structures that are fundamentally different from those of the present simply because we are aware that we tend to make certain kinds of assumptions? Why would a woman even WANT to be mayor of a politically powerful planet?

On Farming, the Internet and Funny Hats

This is a picture of me wearing a hat I made:

A “Scott Pilgrim” hat I made.

It was made from the same pattern used to make the hat used in the movie Scott Pilgrim vs. The World: The woman who did the work of adapting the hat drawn in the comic to something that could be made for a movie made her pattern available (for a small fee) on ravelry.com, a social network for knitters and crocheters.

I’m writing this post right after finishing a dinner which included mushroom leek risotto, which I made while reading (risotto the real way involves a lot of stirring and pouring in broth a little at a time) Bringing it to the Table by Wendell Berry. The book is a collection of essays Berry wrote over several decades on the topic of farming and food (not entirely incidentally, Wendell Berry caused a stir and inadvertently started a flame war after writing his essay “Why I am Not Going to Buy a Computer” back in 1987). I ate my risotto out of a bowl that was hand made, though I don’t know by whom, that I picked out at the Empty Bowls charity event I attended on campus last semester. Along with the risotto I had some lentil soup (which I’m sorry to say only came from the organic section of Food Lion) served in a bowl that was hand made by a friend.

In his 1986 essay “A Defense of the Family Farm”, Berry says

As Gill says, “every man is called to give love to the work of his hands. Every man is called to be an artist.” The small family farm is one of the last places – they are getting rarer every day – where men and women (and girls and boys, too) can answer that call to be an artist, to learn to give love to the work of their hands. It is one of the last places where the maker – and some farmers still do talk about “making the crops” – is responsible, from start to finish, for the thing made. This certainly is a spiritual value, but it is not for that reason an impractical or uneconomic one.

People like to make things. We feel a deeper sense of connection to others when we use tools and wear clothing made by someone’s hands. In this essay Berry is cautioning against losing this rich tradition embodied in the family farm to the industrial agriculture complex. Now, in 2013, it is sad to say his cautionary foresight was well placed. Especially in the United States, and increasingly elsewhere as our “efficient” agricultural methods spread, we have become a society that is nearly thoroughly disconnected in all the ways that matter from the one thing that our very survival depends on: our food.

In his essay “As We May Think”, Bush asked “What are the scientists to do next?” After the end of a scientific enlightenment of sorts, brought on by the War, he asked if we could turn the tremendous scientific energy towards something more constructive. One of the many results of the technological advancements made during the war was a radical transformation in the way we grow (and subsequently think about) our food.

It had been known for some time that plants needed at least nitrogen, phosphorous and potassium (N-P-K) to grow (it turns out to grow well they need much more, but at the time, we were patting ourselves on the back for unlocking the mysteries of plant life). Once the war ended there was an abundance of nitrogen (a component of TNT) that needed to be put to good use. The need was so great that it was made available to farmers (in the form of ammonia) for cheap, so cheap that it made economic sense to switch to this commercial product instead of continuing with the tried and true method of spreading manure.

Along with this change came others. Because synthetic fertilizers could be produced, transported and spread in large quantities, and due to changes in the Farm Bill to promote food security, farm sizes grew and crop diversity shrank. With less diversity less skill was needed, and the number of family farms in the U.S. dropped dramatically, from around 6 million immediately after WWII to just over 2 million in the early 1990s. Earlier in the same essay Berry writes

With industrialization has come a general depreciation of work. As the price of work has gone up, the value of it has gone down, until it is now so depressed that people simply do not want to do it anymore. We can say without exaggeration that the present national ambition of the United States is unemployment.

This was 1987, remember. Our current job crisis is certainly more complicated than the loss of family farms, but with the destruction of 4 million family farms came the loss of at least twice that many skilled full-time jobs.

All in the name of industrial efficiency.

What’s interesting, though, is that, like Berry said, we like making things with our own hands. And we know we like making things with our own hands; we just haven’t had much reason to since industrialization was purported to be the solution to all the drudgery involved in actually practicing a skilled craft.

But like me and my hat, eating home-cooked food out of hand-made bowls, food made with ingredients purchased directly from farmers, we haven’t yet completely lost all our skills, they’ve just become hidden. Something we practice in the privacy of our own home.

I am cautiously optimistic that yet another layer of technology may in many ways help us build a stronger craft-based economy. Sites like Etsy have given artisans and people wanting to buy artisanal products a means to connect directly, without going through a middleman, eliminating an undesirable layer of indirection between the products we use and the people who made them.

Can the Internet help us reconnect with what we truly value: each other?