A Comment on “A comment on commenting”

In his post A comment on commenting, leon.pham commented on the annoyance of remembering different commenting syntax in different languages. It’s true, it is a lot to keep track of. Luckily, if you use a good text editor, such as emacs or vim, you can offload the task of remembering which syntax to use to the editor itself. For instance, emacs has two commands to aid in creating comments: one to block off a highlighted region in comments, and another to add an end-of-line comment. Once you learn the command for each (adding an end-of-line comment defaults to M-; in emacs, where M is the “meta” key, or “Alt” on most keyboards, but of course you could map it to anything you want), that’s it. The editor is generally smart enough to know what language you are currently writing in (and of course you can override it when you need to), and so the universal “add a comment” command that you learn once will always add a comment in the proper syntax for the language you are currently editing! Just another motivation to learn one editor and learn it well!

I will leave it as an exercise for the vim-using reader to post information for the equivalent command in vim!

Blasphemy?: DRMed Games on Linux

The interwebz have been all atwitter the past month or so with Valve’s announcement of the porting of their Steam gaming service to the GNU/Linux platform. Many Linux users were thrilled about the announcement and saw it as a sign that Linux was breaking out of the small niche culture of hackers to more mainstream folk who just want to turn their computer on and play a game. To be fair, Linux is not without a large number of free (both as in beer and as in speech) games already, but the announcement of a major (are they? I actually only heard of Valve and Steam because of the Linux announcement) gaming company moving to the platform was seen by some as legitimizing the OS to the masses. It certainly gives everyone something to talk about.

I consider myself more of a pragmatist when it comes to the philosophical debate surrounding free software (for those familiar, the debate mostly deals with libre software. English has many deficiencies, one of which is the multiple meanings of the word “free”. In general, free software supporters do support the idea of paying for software and believe that people should be able to make money off of the software they write). I think free software is a great ideal to strive for, and certainly for mission-critical software I believe it is important to have the freedom to view and modify the source code. As I brought up in vtcli earlier this semester, as an example it is important to have the freedom to confirm that the incognito mode of your web browser really is doing what it says it is and not storing or sharing your browsing information. (As an aside to that, I erroneously claimed that Chrome was open source. It is not; however, it theoretically uses the same code base as Chromium, which is open source, and happens to be the browser I use both in Linux and OS X. I highly encourage any users of Chrome to switch to Chromium for the open-sourced goodness it provides, including the ability to confirm that incognito mode really is incognito.) That being said, if there’s a great game I like, I am not terribly concerned with not being able to look at or distribute the source code, though I certainly would encourage game developers to release their code under one of the many open source licenses.

It is interesting to note that free software evangelist Richard Stallman himself isn’t ALL doom and gloom about the news. Though he certainly isn’t thrilled, and encourages people to try out any of the free games that are available, he does see the move as a possible motivator for some people to ditch their non-free OSes completely if gaming was the only thing holding them back.

However, if you’re going to use these games, you’re better off using them on GNU/Linux rather than on Microsoft Windows. At least you avoid the harm to your freedom that Windows would do. – Richard Stallman

I installed Steam on my Arch Linux install last week and so far have tried out Bastion, Splice and World of Goo. All work very well and have been fun (I had played World of Goo before on both OS X and Android; it is fun on any platform!). Officially, Arch Linux isn’t supported, but after adding a couple of the libraries and font packages mentioned on the wiki, everything worked like a charm. One downside that Stallman failed to mention in his response: it is much easier for me to spend money on games now that I don’t need to switch over to OS X to run them.

Git Games and Meta Moments

I had a bit of a meta moment while swimming today. I have a lot of
good moments while swimming, probably because it’s a chance for my
mind to wander. That’s probably a good argument to go more often than
I did this past week (1 out of 6 possible practice days!). But I
digress.

Yesterday I was introduced to a game of sorts to help learn some concepts used by git. For those of you who don’t know, git is a version control system that has gained quite a bit of popularity over the past few years, especially in the open source community. I had been using it myself for my own projects, but mainly at a very simplistic level.

At one level, a version control system (VCS), of which git is one of many, is a tool to facilitate documenting the changes of… well, a document. Historically these systems were developed by software designers to both document changes and provide an easy path to revert to older versions of source code. Later, similar concepts were implemented in modern word processors, though with limited scope and power: essentially, the traditional method of tracking edits from the pen-and-paper days was ported over to the electronic medium without much change.

One thing that became much clearer to me after trying out the git game was that while providing logical “snapshots” of a project that can be used as a return point if something goes astray in the future, git is creating a history of the project, a history that tells a story. But unlike other histories you may be familiar with, the history generated by git can be rewritten to change the past.

What had eluded me up until this point was what motivation one might have to rewrite history. I figured: you make changes, commit them to the project, those changes get recorded; what more would you need? Well, it turns out that with the ability to rewrite history, git makes it incredibly easy to do certain types of edits on your data, and allows an author to use git more as a tool for trying out new, possibly risky ideas, or taking off on a tangent, while always providing a clear path back to a ground point.
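As a sketch of that kind of history rewriting (the repo, file names and commit messages here are all made up for illustration), you can pile up messy work-in-progress commits while experimenting, then collapse them into one clean commit before anyone else sees them:

```shell
# hypothetical demo: experiment freely, then rewrite the messy history
mkdir -p demo-rewrite && cd demo-rewrite
git init -q -b main                      # -b needs git >= 2.28
git config user.name  "Demo"
git config user.email "demo@example.com"

echo "draft" > post.txt
git add post.txt && git commit -qm "start post"

echo "wild idea" >> post.txt
git commit -qam "WIP: risky tangent"
echo "refined idea" >> post.txt
git commit -qam "WIP: fix the tangent"

# rewrite the last two commits into a single tidy one
git reset --soft HEAD~2
git commit -qm "add the tangent, cleanly"
git log --oneline                        # history now reads as one clean step
```

The content of the risky experiment survives; only the story told about it changes, which is exactly the “clear path back to a ground point” idea.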

The details of these types of edits are important, but after I began writing them up I realized I was losing sight of my original reason for writing this post! Luckily, I have been using git to track changes to this document, and created a branch for each of the examples I thought would be useful. I’m going to leave them out of the final document for now, but they exist in my history, and since I will post this project to github, you are free to take a look!

What I thought about while swimming, after the git game helped me understand why rewriting history could be so useful, and how the history itself could be used, was that since I’m using git to manage the files for ECE2524, I could also use it to guide future semesters of the course. Every time I add a new set of lecture notes or a new assignment, I make a commit to a git repo containing all the files I’ve used so far. That also records the order in which topics are introduced to the class, so I’m generating an outline for the semester just by nature of using git for regular, garden-variety version control.
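Assuming the course files live in a repo (the file names and commit messages below are invented for the sketch), the running outline falls out of the log for free:

```shell
# hypothetical course repo: each commit adds a lecture or an assignment
mkdir -p demo-course && cd demo-course
git init -q -b main                      # -b needs git >= 2.28
git config user.name  "Demo"
git config user.email "demo@example.com"

echo "pipes, redirection" > lecture01.md
git add . && git commit -qm "lecture 1: the Unix philosophy"
echo "write a text filter" > assignment01.md
git add . && git commit -qm "assignment 1: text filters"
echo "branching, merging" > lecture02.md
git add . && git commit -qm "lecture 2: version control with git"

# oldest-first log doubles as a semester outline
git log --oneline --reverse
```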

But I had an hour and a half to occupy my brain while I swam back and forth, so the wheels kept turning. We use git for class, as those of you in it know, because it is an important tool for software development and happens to be a particularly Unix-y tool to boot. The Unix-i-ness of git is something I will leave for discussion in class tomorrow (oh, the suspense!). We use git, but it is a complicated tool to learn, even though what it is doing is quite simple once you grok it; the grokking doesn’t always happen quickly, and never as quickly as you would like.

But the information tracking ideas from git can be related to the
process we go through in class, which can also be related to the
discussion on working memory vs. long-term memory we had in vtlci. The
process of learning new things involves some experimentation and a lot
of data filtering. We have a lot of information available to us, the
culmination of which can be thought of as the contents of our “working
directory” in git terms. As we individually work through the
information and inspect it through our own lens we commit pieces of it
to our memory, our repository. Though we’re not actively doing it,
there is a log associated with this process. It is not as precise as
something stored on a computer, of course, but looking back on the
past few days we can recall things like “concept X made a lot more
sense to me after I understood hypothesis Y, which became clear after
working through exercise Z.”

What if we were more conscious of this process in class, and even made
an effort to map it more directly to the concept of using git? For
instance, one versioning control concept we’ll start to explore
tomorrow is branching and merging.

A branch can be thought of as a temporary deviation away from the main
story line. In fact, in my first paragraph I went off on a bit of a
tangent about my tendency to let my mind wander while swimming. That
could be thought of as a branch away from the main topic, which (I
promise) is about using git to map the journey we take to learn to use
git. In fact, I switched to a new branch in git when I began writing
those sentences, and then merged them back into the main conversation
when I was done.
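That branch-for-a-tangent workflow might look something like this in practice (the branch and file names are made up for the sketch):

```shell
# hypothetical demo of branching off for a tangent, then merging back
mkdir -p demo-tangent && cd demo-tangent
git init -q -b main                      # -b needs git >= 2.28
git config user.name  "Demo"
git config user.email "demo@example.com"

echo "git maps the journey of learning git" > post.txt
git add post.txt && git commit -qm "main topic"

git checkout -qb swimming-tangent        # wander off mid-paragraph
echo "my mind wanders while swimming" >> post.txt
git commit -qam "draft the swimming tangent"

git checkout -q main                     # return to the main story line
git merge -q swimming-tangent            # weave the tangent back in
git log --oneline --graph                # the deviation is part of the story
```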

What if the class were split up into groups, and each group worked on one aspect of learning to use git’s branch and merge functionality? For instance, group 1 might play the git game, while group 2 might read about how git represents and references data and what is going on under the hood. At that point, the collective committed knowledge of the class will have split into two branches: one branch with more of a pragmatic grasp of “this is how I do a branch and merge” and another branch with a better understanding of “this is how a branch and a merge are implemented by git”. Then, the following week, both groups would come back together and share what each learned. The two groups will have just “merged” their knowledge, and everyone should have a better understanding of how to conduct a branch and a merge, and also what is going on with the underlying data structure when they do one.

Oh, and by the way, when writing that last paragraph I created two new branches: one named “group1”, which contained the description of what the hypothetical group 1 would do, and another called “group2”, which contained the sentence describing that group’s task. Then I merged the two back into the master branch, reformatted the paragraph and added a summary. Check out the history on github!

So this whole process got me thinking. Does thinking about meta thoughts make it easier or more likely to think about meta thoughts in the future? And likewise, does it make it easier to draw comparisons between seemingly unrelated processes, such as learning new ideas and software development, when you have a process and a vocabulary to describe each? I am a strange loop.

It’s a Feature, not a Bug

In his article The internet: Everything you ever need to know, John Naughton lists nine key concepts about the Internet to help us understand the profound impact it is having, and will continue to have, on our lives. Reading number 3, “DISRUPTION IS A FEATURE, NOT A BUG”, I found myself drawing parallels between the design of the Internet and the design of the Unix operating system. The similarities are no accident, as the history of Unix and the Internet became closely intertwined after DARPA’s 1980 decision that the BSD Unix team would implement the brand new TCP/IP stack, which controls how data packets are routed between machines on the Internet.
