About lilengineerthatcould

Nth-year Ph.D. student in Electrical Engineering, officially studying Control Systems but in reality doing more software simulation than anything else, who thinks a lot about the education system and how to make it better.

I am a Selfish Git: A bit on my teaching philosophy

A common observation I hear from people who have taken my class is that there is less structure in the assignments than they are used to, and oftentimes less than they would like. A consequence of this is that participants do a lot of searching the web for tidbits on syntax and program idioms for the language du jour, a process that can take time given the wealth of information returned by a simple Google search. I could track down research showing the benefit of this “look it up yourself” approach, and it would all be valid, and it is one of the reasons I structure assignments the way I do, but there is another reason. A more selfish reason.

Throughout the term I hand out a series of assignments. Details are tweaked each semester, but the general outline is something like:

  • read in lines of numbers, one per line, do something with them, and write out a result number
  • read in lines of structured data, do something with them, write out lines of structured data
  • spawn a child process, or two, and connect them with a pipe (this year I will probably integrate the “read in lines” idiom into this assignment since I like it so much)
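
A minimal sketch of the first item, in Python, assuming the “do something” is a sum (each semester’s actual operation varies, and the names here are my own invention, not part of any assignment spec):

```python
import sys

def reduce_lines(lines):
    """Sum one number per line, ignoring blank lines."""
    return sum(float(line) for line in lines if line.strip())

if __name__ == "__main__":
    # Read numbers from stdin, one per line, and write out a single result.
    print(reduce_lines(sys.stdin))
```

Run as `python sum_lines.py < numbers.txt`; the point of leaving the write-up at this level is that choosing how to read, filter, and accumulate the lines is exactly the part worth figuring out yourself.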

I’ve done each of these myself, of course, tweaking my own solutions from year to year, and I have found a structure for each that I think works well, is easy to read, and is as simple as possible. Oftentimes my solutions use fewer lines of code than some of the solutions I receive, which admittedly makes my estimates of how long a particular assignment will take inaccurate. I know some of the assignments end up taking a lot longer than I anticipate, and this can be extremely frustrating, especially since I know everyone’s time is a precious commodity that must be partitioned across other classes and personal time too (you are making time for play, aren’t you?).

I could provide more details in the write-ups. I could say “I tried algorithm X a number of ways: A, B and C, and settled on B because P, Q and R”. It would save those completing the assignments time and it would save me time, because on average the results I’d get back for grading would take up fewer lines of code and be more familiar to me. And that is why I don’t.

If I wrote in the assignment “for part A, use method B in conjunction with idiom X and you can complete this part in 3 lines,” then I can guarantee you that around 99% of the 60 assignments I received back would use method B in conjunction with idiom X in only 3 lines of code. It would be much easier to evaluate: I’d be familiar with the most common errors made when using method B with idiom X and would have made spotting them quickly a reflexive response.

But I wouldn’t learn a thing.

Let me tell you a secret. Sure, I enjoy seeing others learn and explore new ideas and get excited when they discover they can write something in 10 lines in Python that took them 30 in C. I really do. But that’s not the only reason I teach. I teach because I learn a tremendous amount from the process myself. In fact, all that tweaking I said I’ve done to my solutions? That was done in response to reviewing a diverse (sometimes very diverse) set of solutions to the same problem. Oftentimes I’ll get solutions written in a way I would never have used solving the problem myself, and my first reaction is something like “why does this even work?” And then I’ll look at it a little closer (oftentimes doing a fair amount of googling myself to find other similar examples) until I understand the implementation and form some opinion about it. There are plenty of times that I’ll get a solution handed to me that I think is cleaner, more elegant and simpler than my own, and so I’ll incorporate what I learned into my future solutions (and let’s not forget back into my own work as well, a topic for another post). And I’ll learn something new. And that makes me happy.

I really like learning new things (thank goodness for that, given how long I’ve been in school!), and I have learned so much over the past couple of years that I’ve been teaching. Possibly more than I’ve learned in all the classes I’ve taken during my graduate career (different things, for sure, which makes the amounts difficult to compare, but still, you get the idea).

To be sure, there is a balance, and part of my own learning process has been finding that sweet spot between unstructured free-style assignments (“Write a program that does X with input B. Ready, go!”) and an enumerated list of steps that will gently guide a traveler from an empty text file to a working program that meets all the specs. I think I’ve been zeroing in on the balance, and the feedback I get from blogs as well as the assignments themselves is really helpful.

So keep writing, and keep a healthy dose of skepticism regarding my philosophy. And ask questions!

A Comment on “A comment on commenting”

In his post A comment on commenting, leon.pham commented on the annoyance of remembering different commenting syntax in different languages. It’s true, it is a lot to keep track of. Luckily, if you use a good text editor, such as emacs or vim, you can offload the task of remembering which syntax to use to the editor itself. For instance, emacs has two commands to aid in creating comments: one to block off a highlighted region in comments, and another to add an end-of-line comment. Once you learn the command for each (adding an end-of-line comment defaults to M-; in emacs, where M is the “meta” key, or “Alt” on most keyboards, but of course you could map it to anything you want), that’s it. The editor is generally smart enough to know what language you are currently writing in (and of course you can override it when you need to), and so the universal “add a comment” command that you learn once will always add a comment in the proper syntax for the language you are currently editing! Just another motivation to learn one editor and learn it well!

I will leave it as an exercise for the vim-using reader to post information on the equivalent command in vim!

Blasphemy?: DRMed Games on Linux

The interwebz have been all atwitter the past month or so with
Valve’s announcement of the porting of their Steam gaming service
to the GNU/Linux platform. Many Linux users were thrilled about the
announcement and saw it as a sign that Linux was breaking out of the
small niche culture of hackers to more mainstream folk who just want
to turn their computer on and play a game. To be fair, Linux is not
without a large number of free (both as in beer and as in speech)
games already, but the announcement of a major (are they? I actually
only heard of Valve and Steam because of the Linux announcement)
gaming company moving to the platform was seen by some as
legitimizing the OS to the masses. It certainly gives everyone
something to talk about.

I consider myself more of a pragmatist when it comes to the
philosophical debate surrounding free software (for those familiar,
the debate mostly deals with libre software; English has many
deficiencies, one of which is the multiple meanings of the word
“free”. In general, free software supporters do support the idea of
paying for software and believe that people should be able to make
money off of the software they write). I think free software is a
great ideal to strive for, and certainly for mission-critical
software I believe it is important to have the freedom to view and
modify the source code. As I brought up in vtcli earlier this
semester, it is important, as an example, to have the freedom to
confirm that the incognito mode of your web browser really is doing
what it says it is and not storing or sharing your browsing
information. (As an aside, I erroneously claimed that Chrome was
open source. It is not; however, it theoretically uses the same
code base as Chromium, which is open source, and happens to be the
browser I use both in Linux and OS X. I highly encourage any users
of Chrome to switch to Chromium for the open-sourced goodness it
provides, including the ability to confirm that incognito mode
really is incognito.) That being said, if there’s a great game I
like, I am not terribly concerned with not being able to look at or
distribute the source code, though I certainly would encourage game
developers to release their code under one of the many open source
licenses.

It is interesting to note that free software evangelist Richard
Stallman himself isn’t ALL doom and gloom about the news. Though he
certainly isn’t thrilled, and encourages people to try out any of
the free games that are available, he does see the move as a
possible motivator for some people to ditch their non-free OSes
completely if gaming is the only thing that had been holding them
back.

However, if you’re going to use these games, you’re better off using them on GNU/Linux rather than on Microsoft Windows. At least you avoid the harm to your freedom that Windows would do. – Richard Stallman

I installed Steam on my Arch Linux install last week and so far have
tried out Bastion, Splice and World of Goo. All work very well and
have been fun (I had played World of Goo before, both on OS X and
Android; it is fun on any platform!). Officially, Arch Linux isn’t
supported, but after adding a couple of the libraries and font
packages mentioned on the wiki everything worked like a charm. One
downside that Stallman failed to mention in his response: it is
much easier for me to spend money on games now that I don’t need to
switch over to OS X to run them.

Git Games and Meta Moments

I had a bit of a meta moment while swimming today. I have a lot of
good moments while swimming, probably because it’s a chance for my
mind to wander. That’s probably a good argument to go more often
than I did this past week (1 out of 6 possible practice days!).

Yesterday I was introduced to a game of sorts to help learn some
concepts used by git. For those of you who don’t know, git is a
version control system that has gained quite a bit of popularity
over the past few years, especially in the open source community. I
had been using it myself for my own projects, but mainly at a very
simplistic level.

At one level, a version control system (VCS), of which git is one
of many, is a tool to facilitate documenting the changes of… well,
a document. Historically these systems were developed by software
designers to both document changes and provide an easy path to
revert to older versions of source code. Later, similar concepts
were implemented in modern word processors, though with limited
scope and power due to their restrictive nature: essentially, the
traditional method of tracking edits from the pen-and-paper days
was ported over to the electronic medium without much change.

One thing that became much clearer to me after trying out the git
game was that while git provides logical “snapshots” of a project
that can be used as a return point if something goes astray in the
future, it is also creating a history of the project, a history
that tells a story. But unlike other histories you may be familiar
with, the history generated by git can be rewritten to change the
past.

What had eluded me up until this point was what motivation one
might have to rewrite history. I figured: you make changes, commit
them to the project, those changes get recorded; what more would
you need? Well, it turns out that with the ability to rewrite
history, git makes it incredibly easy to do certain types of edits
on your data and allows an author to use git more as a tool for
trying out new, possibly risky ideas, or take off on a tangent,
while always providing a clear path back to a ground point.
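
One such edit, sketched under the caveat that the repo, file names and messages here are all invented for the demo, is `git commit --amend`, which replaces the most recent commit instead of adding a new one:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "author@example.com"   # throw-away identity for the demo
git config user.name "Author"
echo "draft" > notes.txt
git add notes.txt
git commit -qm "add notes"
echo "final draft" > notes.txt
git add notes.txt
git commit -q --amend -m "add notes, revised"  # rewrites the last commit in place
count=$(git log --oneline | wc -l)
echo "$count"   # still one commit: history was rewritten, not appended to
```

The published story says “notes were added, once”; the false start never makes it into the record.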

The details of these types of edits are important, but after I
began writing them up I realized I was losing sight of my original
reason for writing this post! Luckily, I have been using git to
track changes to this document, and created a branch for each of
the examples I thought would be useful. I’m going to leave them out
of the final document for now, but they exist in my history, and
since I will post this project to github, you are free to take a
look!

What I thought about while swimming, after the git game helped me
understand why rewriting history could be so useful, and how the
history itself could be used, was that since I’m using git to
manage the files for ECE2524, I could also use it to guide future
semesters of the course. Every time I add a new set of lecture
notes or a new assignment, I make a commit to a git repo containing
all the files I’ve used so far. That also records the order in
which topics are introduced to the class, so I’m generating an
outline for the semester just by nature of using git for regular
garden-variety version control.

But I had an hour and a half to occupy my brain while I swam back
and forth, so the wheels kept turning. We use git for class, as
those of you in it know, because it is an important tool for
software development and happens to be a particularly Unix-y tool
to boot. The Unix-i-ness of git is something I will leave for
discussion in class tomorrow (oh, the suspense!). We use git, but
it is a complicated tool to learn, even though what it is doing is
quite simple once you grok it, something that doesn’t always happen
quickly, and never as quickly as you would like.

But the information tracking ideas from git can be related to the
process we go through in class, which can also be related to the
discussion on working memory vs. long-term memory we had in vtlci. The
process of learning new things involves some experimentation and a lot
of data filtering. We have a lot of information available to us, the
culmination of which can be thought of as the contents of our “working
directory” in git terms. As we individually work through the
information and inspect it through our own lens we commit pieces of it
to our memory, our repository. Though we’re not actively doing it,
there is a log associated with this process. It is not as precise as
something stored on a computer, of course, but looking back on the
past few days we can recall things like “concept X made a lot more
sense to me after I understood hypothesis Y, which became clear after
working through exercise Z.”

What if we were more conscious of this process in class, and even made
an effort to map it more directly to the concept of using git? For
instance, one versioning control concept we’ll start to explore
tomorrow is branching and merging.

A branch can be thought of as a temporary deviation away from the main
story line. In fact, in my first paragraph I went off on a bit of a
tangent about my tendency to let my mind wander while swimming. That
could be thought of as a branch away from the main topic, which (I
promise) is about using git to map the journey we take to learn to use
git. In fact, I switched to a new branch in git when I began writing
those sentences, and then merged them back into the main conversation
when I was done.
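
That back-and-forth can be sketched as a short, self-contained shell session (a hedged illustration: the repo, the file, and the branch name “tangent” are all made up for the demo):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "author@example.com"   # throw-away identity for the demo
git config user.name "Author"
main=$(git symbolic-ref --short HEAD)        # "master" or "main", depending on git version
echo "main topic" > post.txt
git add post.txt
git commit -qm "start the post"
git checkout -qb tangent          # wander off on a tangent
echo "swimming aside" >> post.txt
git commit -qam "record the aside"
git checkout -q "$main"           # come back to the main story line
git merge -q tangent              # weave the tangent back in
count=$(git log --oneline | wc -l)
echo "$count"
```

Because the main branch didn’t move while the tangent was being written, the merge here is a fast-forward; had both lines diverged, git would create a merge commit tying the two stories together.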

What if the class were split up into groups, and each group worked
on one aspect of learning to use git’s branch and merge
functionality? For instance, group 1 might play the git game, while
group 2 might read about how git represents and references data and
what is going on under the hood. At that point, the collective
committed knowledge of the class will have split into two branches:
one branch with more of a pragmatic grasp of “this is how I do a
branch and merge” and another branch with a better understanding of
“this is how a branch and a merge are implemented by git”. Then,
the following week, both groups would come back together and share
what each learned. The two groups will have just “merged” their
knowledge, and everyone should have a better understanding of how
to conduct a branch and a merge, and also what is going on with the
underlying data structure when they do one.

Oh, and by the way, when writing that last paragraph I created two
new branches: one named “group1”, which contained the description
of what the hypothetical group 1 would do, and another called
“group2”, which contained the sentence describing that group’s
task. Then I merged the two back into the master branch,
reformatted the paragraph and added a summary. Check out the
history on github!

So this whole process got me thinking. Does thinking about meta
thoughts make it easier or more likely to think about meta thoughts
in the future? And likewise, does it make it easier to draw
comparisons between seemingly unrelated processes, such as learning
new ideas and software development, when you have a process and a
vocabulary to describe each? I am a strange loop.

Does awareness of our limitations aid in overcoming them?

A couple of thoughts have been bouncing around in my head while reading. First, while reading As We May Think by Bush, and repeatedly with other sources, I was reminded of a thought I often have when reading science fiction written in the 50s, around the same time Vannevar Bush wrote As We May Think. While on many levels the predictions of the future turned out to be quite accurate, there are notable exceptions that jump out at me while reading. A really good example that illustrates my point is Isaac Asimov’s Foundation series. The series is somewhat unique in that it covers a huge expanse of time in the fictional world, over 20,000 years if all the short stories and novels written by other authors after Asimov’s death are taken into account, and was written over several decades in standard non-fictional Earth time: the four stories that made up the first published book in the series were written between 1942 and 1944. Asimov thought he was done with the series after writing two stories in 1948 and 1949 and went on to do other things for 30 years. After much continued pressure from fans and friends he published the 6th book in the series in 1982 and the 7th in 1986.

Three things struck me while reading the first part of the series, written in the 40s and 50s:

  • It was generally assumed that nuclear power was the energy of the future. The logical extrapolation was nuclear-powered wrist-watches (ok, actually, I did read a compelling article fairly recently revisiting micro-atomic generators using minuscule amounts of radioactive materials to agitate a piezoelectric element to produce electricity, so maybe this wasn’t so far off the mark)
  • While we would have space ships capable of faster-than-light travel (hyperspace!), the calculations to perform jumps and ensure that the trajectory didn’t travel too near the gravitational effects of a star were done by a human, by hand. Particularly long jumps took the better part of a day to calculate and verify before it was deemed safe to tell the ship to execute the maneuver which itself would only take a fraction of a second.
  • There were no women whatsoever in any type of leadership role. We could say the same of ethnic minorities, non-heterosexual and non-cisgendered people as well, but we will give Asimov the benefit of the doubt and acknowledge that the U.S. was (at least visibly) much less diverse than it is today. But surely he knew about the existence of women.

These are little things you get used to when reading science fiction of the time. I think perhaps most interesting is that while it is common to extrapolate technology into the future with reasonable accuracy, the social structures that will exist 10,000 years from now are remarkably similar to those of the current time, if science fiction authors have anything to say about it.

As I mentioned, the 6th book, Foundation’s Edge, was published in 1982. Within the first page or so it was revealed without fanfare that the mayor of Terminus, politically the (quasi) central planet of The Foundation (despite it being on the outskirts of the colonized worlds), is currently a woman. Also, thanks to much research and development, the latest spaceships have a new feature: hyperjumps are calculated in a matter of seconds by on-board computers. And the old nuclear technology has been replaced by state-of-the-art zero-point-energy extraction (if I recall correctly; it’s been a while since I read the books!), providing a nearly inexhaustible energy source to power your jaunts around the universe.

The changes, while artfully worked into the narrative and coherently folded into the fictional universe that had first been described over 30 years prior, still jumped out at the casual reader. I bring this up by no means to diminish Asimov’s work, or him personally (I’m a huge fan, having read and enjoyed just about every book he’s written at this point), but rather to suggest that we as a species have some fundamental limitations when it comes to predicting the future. We view the future through a lens designed by history and crafted in the present. While it is all too natural for us to extrapolate existing technology and social dynamics arbitrarily far into the future, and while that leads to some really fascinating scenarios, making significant conceptual leaps (such as the one Ada Lovelace is credited with making) is much more difficult and happens much less frequently.

What I wonder, though, is after a long history of learning from our shortsightedness in some instances (and acknowledging our foresightedness in others), can we overcome this limitation? Are we now, compared to the 1950s, better able to make conceptual leaps and imagine technology and social structures that are fundamentally different from those of the present simply because we are aware that we tend to make certain kinds of assumptions? Why would a woman even WANT to be mayor of a politically powerful planet?

On Farming, the Internet and Funny Hats

This is a picture of me wearing a hat I made:

A “Scott Pilgrim” hat I made.

It was made from the same pattern used to make the hat used in the movie Scott Pilgrim vs. The World: The woman who did the work of adapting the hat drawn in the comic to something that could be made for a movie made her pattern available (for a small fee) on ravelry.com, a social network for knitters and crocheters.

I’m writing this post right after finishing a dinner that included mushroom leek risotto, which I made while reading Bringing it to the Table by Wendell Berry (risotto the real way involves a lot of stirring and pouring in broth a little at a time). The book is a collection of essays Berry wrote over several decades on the topic of farming and food (not entirely incidentally, Wendell Berry caused a stir and inadvertently started a flame war after writing his essay “Why I Am Not Going to Buy a Computer” back in 1987). I ate my risotto out of a bowl that was handmade, though I don’t know by whom, that I picked out at the Empty Bowls charity event I attended on campus last semester. Along with the risotto I had some lentil soup (which I’m sorry to say came only from the organic section of Food Lion) served in a bowl that was handmade by a friend.

In his 1986 essay “A Defense of the Family Farm”, Berry says

As Gill says, “every man is called to give love to the work of his hands. Every man is called to be an artist.” The small family farm is one of the last places – they are getting rarer every day – where men and women (and girls and boys, too) can answer that call to be an artist, to learn to give love to the work of their hands. It is one of the last places where the maker – and some farmers still do talk about “making the crops” – is responsible, from start to finish, for the thing made. This certainly is a spiritual value, but it is not for that reason an impractical or uneconomic one.

People like to make things. We feel a deeper sense of connection to others when we use tools and wear clothing made by someone’s hands. In this essay Berry is cautioning against losing this rich tradition, embodied in the family farm, to the industrial agriculture complex. Now, in 2013, it is sad to say his cautionary foresight was well placed. Especially in the United States, and increasingly elsewhere as our “efficient” agricultural methods spread, we have become a society that is nearly thoroughly disconnected, in all the ways that matter, from the one thing that our very survival depends on: our food.

In his essay “As We May Think”, Bush asked, “What are the scientists to do next?” After the end of a scientific enlightenment of sorts, brought on by the war, he asked if we could turn the tremendous scientific energy towards something more constructive. One of the many results of the technological advancements made during the war was a radical transformation in the way we grow (and subsequently think about) our food.

It had been known for some time that plants need at least nitrogen, phosphorus and potassium (N-P-K) to grow (it turns out that to grow well they need much more, but at the time, we were patting ourselves on the back for unlocking the mysteries of plant life). Once the war ended there was an abundance of nitrogen (a component of TNT) that needed to be put to good use. The need was so great that it was made available to farmers (in the form of ammonia) for cheap, so cheap that it made economic sense to switch to this commercial product instead of continuing with the tried and true method of spreading manure.

Along with this change came others. Because synthetic fertilizers could be produced, transported and spread in large quantities, and due to changes in the Farm Bill to promote food security, farm sizes grew and crop diversity shrank. With less diversity less skill was needed, and the number of family farms in the U.S. dropped dramatically, from around 6 million immediately after WWII to just over 2 million in the early 1990s. Earlier in the same essay Berry writes

With industrialization has come a general depreciation of work. As the price of work has gone up, the value of it has gone down, until it is now so depressed that people simply do not want to do it anymore. We can say without exaggeration that the present national ambition of the United States is unemployment.

This was 1987, remember. Our current job crisis is certainly more complicated than the loss of family farms, but with the destruction of 4 million family farms came the loss of at least twice that many skilled full-time jobs.

All in the name of industrial efficiency.

What’s interesting, though, is that, as Berry said, we like making things with our own hands. And we know we like making things with our own hands; we just haven’t had much reason to since industrialization was purported as a solution to all the drudgery involved in actually practicing a skilled craft.

But, like me with my hat, eating home-cooked food out of handmade bowls, food made with ingredients purchased directly from farmers, we haven’t yet completely lost all our skills; they’ve just become hidden, something we practice in the privacy of our own homes.

I am cautiously optimistic that yet another layer of technology may in many ways help us build a stronger craft-based economy. Sites like Etsy have given artisans and people wanting to buy artisanal products a means to connect directly, without going through a middleman, eliminating an undesirable layer of indirection between the products we use and the people who made them.

Can the Internet help us reconnect with what we truly value: each other?

Stranger in a Commonplace Land

As I began reading the two introductory essays by Janet Murray and Lev Manovich to The New Media Reader, I was first a bit overwhelmed by the length of each. This immediately made me think of an article that was referenced in the previous reading, “Is Google Making Us Stupid?”: was the fact that I initially gawked at so many words and pages a result of my immersion in a world of near-instant informational gratification and 140-character thoughts? The thing is, I have no problem whatsoever reading a 500-page novel if it’s interesting, and indeed there were certainly pieces of each introduction that jumped out at me:

All creativity can be understood as taking in the world as a problem. The problem that preoccupies all of the authors in this volume is the pullulating consciousness that is the direct result of 500 years of print culture. – Janet Murray

The concept of defining a unifying model that describes all of creativity is quite appealing to me. “The world as a problem” seems at the same time both a grossly oversimplified and a perfectly succinct description of creativity as I see it, and particularly of my field of engineering. Murray then goes on to draw contrasts between “engineers” and “disciplinary humanists”, which particularly piqued my interest because I often feel like an outsider looking in when talking to other engineers about humanistic concepts, but also an outsider when trying to explain how I see engineering to “disciplinary humanists”. The second essay provided a nugget that helped direct my thoughts on this curious feeling of duplicity:

Human-computer interface comes to act as a new form through which all older forms of cultural production are being mediated. – Lev Manovich

Whether we like it or not, this is becoming the reality.  We now get our books, music, movies and even long distance personal interaction mediated by a computer and the interface they provide us.  The thing is, any good engineer knows that if a piece of technology is doing its job, it should be transparent to the user.  While reading both of these essays I found myself thinking: why are we trying to force so much focus on the “new” in “new media”?  Is our doing so an indication that we as engineers still have more work to do to make the current technology transparent (I think we do) or is society so transfixed by “new” technology for some other reason that we are refusing to let it become as transparent as it could be?

Manovich, I think, would disagree on that point, at least in the U.S., as one of his arguments for the late start of new media exhibits in the U.S. was that the rapid assimilation of new technology made it ubiquitous before we had time to reflect upon its potential impacts. As I write that I feel myself rethinking my own view, because I don’t want to suggest that we not reflect upon the impact of technology we now take for granted; in fact, I have often felt we need to do much more reflecting, and I agree wholeheartedly that we have adopted some technologies that have drastically changed our day-to-day lives (who plans things in advance any more when you can just text your friends last minute to find out where people are?) and that may have consequences extending far beyond the superficial sphere of their direct influences (if we don’t plan our days, are we losing our skill at thinking into the future and acting accordingly in general? Are we becoming a species obsessed with living in the moment and unable to live any other way?).

I’m in danger of rambling, but I now have a better understanding of why I found it difficult to focus on the entirety of both essays. Everything around each nugget seemed either redundant, overly descriptive, or a distraction from the thought process that had started forming in my head. If good technology should be transparent to the user, why are we spending so much time worrying about it? And what are the consequences if we don’t?

It’s a Feature, not a Bug

In his article The internet: Everything you ever need to know, John Naughton lists nine key concepts about the Internet to help us understand the profound impact it is having, and will continue to have, on our lives. Reading number 3, “DISRUPTION IS A FEATURE, NOT A BUG”, I found myself drawing parallels between the design of the Internet and the design of the Unix operating system. The similarities are no accident, as the histories of Unix and the Internet became closely intertwined after DARPA’s 1980 decision that the BSD Unix team would implement the brand new TCP/IP stack, which controls how data packets are routed between machines on the Internet.

Continue reading

Semester in Review

Well, as I'm about 6 hours* into a 14 hour bus+train journey to Massachusetts, I figured this would be a good time to reflect on and respond to the past semester, which seems to have flown by.

The Blogs

I really enjoyed the blog assignment. Even though I wasn't able to write a reply to every post, I felt a lot more in sync with how the class as a whole was progressing. When there was confusion or frustration regarding a particular assignment, or just towards the class in general, I was able to respond quickly (I hope!). I feel I learned much more about how the material in ECE2524 was connected both to other courses and to events that interested you outside of coursework (open source gaming, personal server setups, commentary on Ubuntu as a general purpose OS).

There are a couple of things I plan to change about the blog assignment, with the end goal of adding a little more structure to the syndicated class blog and hopefully encouraging more discussion.

  • enforce "category" and "tag" rules. If you look down the right sidebar of the mother blog you will see a list of all the categories posts have been made under. The current list is too long and not focused enough to be of much use to someone trying to sift through the many posts for a particular topic. Most of the words used for "categories" should have been "tags" instead, so spending a little time up front talking about the difference would, I think, help the long-term organization and usefulness of the blog as an archival tool. Some categories I've thought of are:
    • Introspective: reflect on the course itself, whether it be assignments, discussions or structure.
    • Extrospective: explore connections between course material and using *nix systems or applying Unix design philosophy to other courses or events.
    • Social Network: comment on and continue the discussion taking place at VTLUUG and VTCSEC meetings.
    • Instructional: Discussing personal setups and/or workflows. Posts here will have sort of a “tutorial” or “howto” feel.

    There are a couple of optional assignments I want to offer that would be linked to blog posts:

    • Learn a Language: There are many benefits to learning a new programming language. From The Pragmatic Programmer, Tip #8 “Invest Regularly in Your Knowledge Portfolio”:
    • Learn at least one new language every year. Different languages solve the same problems in different ways. By learning several different approaches, you can help broaden your thinking and avoid getting stuck in a rut. Additionally, learning many languages is far easier now, thanks to the wealth of freely available software on the Internet.

      Throughout the semester those opting to do this assignment would document their progress with their language of choice and share any new ways of thinking or problem solving gained by thinking outside their language comfort zone.

    • Explore an Environment: An assignment someone suggested (I need to go through the list to recall who) has participants try out an alternative desktop environment and/or window manager. Learners participating in this assignment would make regular blog posts documenting their experience with a particular DE.
    • VTLUUG/VTCSEC: There were some issues with the attendance implementation at VTLUUG (in particular) and VTCSEC meetings that frustrated a lot of people and made my life a little more difficult. In addition, an attendance count isn't really a good metric for the success of this assignment, since the purpose isn't simply to sit in a room for an hour, but to engage with a larger community. Next semester, credit will be counted towards the VTLUUG/VTCSEC assignment for blog posts containing targeted discussion and thoughts on the specific topics covered at each meeting.


I noticed several people commented that the Inventory Management assignment was about the time when python and the motivation behind the assignments started to "click". I don't mind that it takes a few assignments before connections start clicking, but I would like to provide more motivation up front about where each assignment is headed, so that earlier along there is at least a notion of "this is going somewhere". So I've been penciling out a clear, focused progression of assignments that goes from basic text parsing up to something like Inventory Management. That project in particular I am also going to make a group project, so that there is some exposure to using git as a collaborative tool before the final project. It also breaks up easily into sub-modules:

  • Data Parser
  • Command Parser
  • Controller

As the names imply, the two parsers make use of text parsing concepts, while the controller is more an exercise in logical program flow. I think with clear enough specs on what the internal data structures should look like, the three parts can be written mostly independently and then combined into one project.
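To make that concrete, here is a minimal sketch of what the agreed-upon interfaces might look like. All the names, data formats, and commands here are hypothetical illustrations, not the actual assignment spec:

```python
# Hypothetical interface sketch for the three sub-modules.
# Data format, command names, and function signatures are illustrative only.

def parse_data(lines):
    """Data Parser: turn lines like "hammer 3 9.99" into an inventory dict."""
    inventory = {}
    for line in lines:
        name, count, price = line.split()
        inventory[name] = {"count": int(count), "price": float(price)}
    return inventory

def parse_command(line):
    """Command Parser: turn a line like "add hammer 2" into a (verb, args) pair."""
    verb, *args = line.split()
    return verb, args

def run(inventory, command):
    """Controller: apply a parsed command to the inventory."""
    verb, args = command
    if verb == "add":
        inventory[args[0]]["count"] += int(args[1])
    elif verb == "remove":
        inventory[args[0]]["count"] -= int(args[1])
    return inventory

inv = parse_data(["hammer 3 9.99", "nail 100 0.05"])
run(inv, parse_command("add hammer 2"))
print(inv["hammer"]["count"])  # 5
```

The point is that once the shape of the inventory dict and the (verb, args) pair are pinned down, each group member can write and test one piece against the spec without waiting on the others.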

I would also like to start C/C++ development earlier in the semester. I am going to try to restructure exercises and lecture slides so that C/C++ and Python assignments are interwoven throughout the semester. I hope this will prevent the feeling I got that the semester was split into two distinct phases, the "python" phase and the "C++" phase. That way the content can follow a logical flow while touching on the merits of each language. A brief example of what I'm thinking:

  • simple line parsing (one primitive type, e.g. double/int per line)
    • in python
    • in bash
    • in C++
  • processing command line arguments
    • in python
    • in bash
    • in C++
  • parsing text lines into an array structure
    • you get the picture
  • parsing text lines into a hierarchical structure (e.g. command parser)
    • probably drop bash for this case
  • manipulating lists
    • python list comprehension
    • C++ stl algorithms
  • Inventory Management (python)
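For a sense of scale, the very first step of that progression is only a few lines in Python. This is a sketch, not the actual assignment spec (the exact behavior, e.g. sum vs. average, varies by semester); here the filter reads one number per line on stdin and writes their sum:

```python
# Read one number per line from stdin, write the sum to stdout.
# A sketch of the "simple line parsing" step; the real spec varies by semester.
import sys

def sum_lines(lines):
    """Accumulate one number per line, skipping blank lines."""
    total = 0.0
    for line in lines:
        line = line.strip()
        if line:
            total += float(line)
    return total

if __name__ == "__main__":
    print(sum_lines(sys.stdin))
```

The bash and C++ versions of the same step then highlight what each language makes easy or painful, which is the whole point of interleaving them.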

And I am toying with the idea of creating a similar progression (mostly overlapping) that will cover fork/exec and basic IPC with pipe, and lead to a simple shell. As I mentioned in the "Think about it" of the pipeline assignment, all we were missing to create a basic shell program was a string parser that would parse something like "generator | consumer" into an array. Along those lines, I may adjust the example code in the "Make a Makefile" assignment to use flex/bison to generate a simple command parser instead of an arithmetic parser.
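To show how little is actually missing, here is one possible sketch (in Python, on a Unix system, since `os.fork` is not available on Windows) of that string parser plus the fork/exec/pipe plumbing from the pipeline assignment. This is illustrative scaffolding, not the course's reference solution:

```python
# A minimal "shell": parse "generator | consumer" and wire the stages
# together with fork/exec and pipe. Unix-only sketch, no error handling.
import os

def parse_pipeline(line):
    """Split "generator | consumer" into one argv array per stage."""
    return [stage.split() for stage in line.split("|")]

def run_pipeline(stages):
    """Fork one child per stage, connecting adjacent stages with a pipe."""
    prev_read = None   # read end of the pipe feeding the current stage
    pids = []
    for i, argv in enumerate(stages):
        if i < len(stages) - 1:
            r, w = os.pipe()       # pipe to the *next* stage
        else:
            r = w = None           # last stage writes to the terminal
        pid = os.fork()
        if pid == 0:               # child: rewire stdin/stdout, then exec
            if prev_read is not None:
                os.dup2(prev_read, 0)
            if w is not None:
                os.dup2(w, 1)
                os.close(r)        # child never reads its own output
            os.execvp(argv[0], argv)
        # parent: close the fds handed off to the child
        pids.append(pid)
        if prev_read is not None:
            os.close(prev_read)
        if w is not None:
            os.close(w)
        prev_read = r
    for pid in pids:
        os.waitpid(pid, 0)

run_pipeline(parse_pipeline("echo hello | tr a-z A-Z"))  # prints HELLO
```

Add a read loop around that and you have a (very) basic shell; the C version for the course is the same system calls with more bookkeeping.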

As those of you familiar with bash are aware, as the complexity of the algorithms and data structures we work with increases, at some point bash becomes overly cumbersome. At that point, it will be relegated to the task of writing unit tests of sorts for each assignment (thanks to George for the suggested assignment). This will make bash a more integral part of the course material; there was a notable lack of bash this past semester, which I regret.
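A "unit test of sorts" here need not be fancy: feed a submission known input and compare its output. Sketched below in Python for consistency with the other examples (in class these would be bash scripts); the program under test is stand-in `cat`, not an actual assignment:

```python
# Black-box check of a command-line filter: known stdin in, expected stdout out.
# In-class versions would be bash one-liners doing the same comparison.
import subprocess

def check(cmd, stdin_text, expected_stdout):
    """Run a program with known input and compare its output exactly."""
    result = subprocess.run(cmd, input=stdin_text,
                            capture_output=True, text=True)
    return result.stdout == expected_stdout

# cat stands in for a student program: it should echo its input unchanged
print(check(["cat"], "hi\n", "hi\n"))  # True
```

Because each assignment is a filter reading stdin and writing stdout, the same harness works for all of them unchanged.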

Classroom Time

I've been doing a lot of thinking about how to use the classroom time effectively in a way that makes everyone want to come. I think it's really important that everyone shows up regularly, not just those who feel they need some extra guidance, but also those who have been programming in a *nix environment for 10 years, because both the novice and the expert can learn a lot from each other if they're in the same room. It also makes my job easier. There are 60 people enrolled in the class in the Spring: it will be nearly impossible for me to check in with everyone individually every time there is a typo in an entered command. Getting a second set of eyes on everyone's commands and code will help people avoid extended debugging sessions and make them more aware of common typos and bugs.

To that end I would like to do more collaborative discussion in the classroom, and less of me talking. Regarding assignments, I'd like them due and committed to a network-accessible git repo at the beginning of class. Then, in class, people will pair up, fork each other's assignment, review it, make edits, and initiate a pull request so that the original author can merge in any bug fixes. The grade for the assignment will be determined by a combination of the functionality of the original commit and the merged changes. This probably won't take place after every assignment, but at least a few of them.

Depending on how efficient we become at fork/review/merge, I’d like to have more discussions like the one we had about the Process Object assignment. I will try to come up with 3 or 4 “make you think” type questions for each assignment and then in class break up into groups, each discussing one question in depth, then come together as a full class and share the response each group had.


I think this post turned into more of a "what I plan to do next semester" than the reflection I had intended. Because it's probably already too long I'll try to come to a close. The first semester I taught this course I pretty much followed the supplied lecture slides and exercises that were given to me. The second semester suffered from "all this stuff should be changed but I don't have any rhyme or reason to it" syndrome (not unlike the second-system effect that Raymond talks about with regard to Multics). The next couple of semesters, ending with the most recent, I have been tweaking and polishing and streamlining. There were still some bumps this past semester that I would like to eliminate (issues with VTLUUG attendance, problems submitting the midterm, lack of clarity on some of the assignments, much too long a delay in returning some of the graded assignments, to name a few), but I'm optimistic that the next revision will address many of them and hopefully provide a smoother and more enjoyable experience for all. Remind me to write another post about my vision for the class 🙂

*and now I’m 10 hours in… only 4 more to go!

Re: the little things of ubuntu

In a recent post thomaswy mentioned some things he liked about the CLI in Ubuntu (and Linux in general: running bash in any distribution should yield an extremely consistent experience) and some things he disliked about the GUI. He's not alone; just do a quick google search for "what I hate about Ubuntu Unity".  Luckily, there are numerous ways to resolve this.  If you read the "Futures" chapter and the other bits about the X window system in The Art of Unix Programming, you learned that to remain consistent with the Unix design philosophy the designers of X created a clear separation between policy and mechanism.  One result of this is that several graphical toolkits are available to developers who want to create a GUI, and a result of *this* is many different GUI environments.  Unity is but one of them, and just because it comes packaged with Ubuntu doesn't mean that's all Ubuntu can use.  If you aren't in love with Unity, consider some of the alternatives:

Alternatives to Unity

and because it didn’t make it onto the previous list:


And that is but a small sampling of the graphical environments available for Linux.  A more complete list quickly becomes overwhelming:

21 of the Best Free Linux Window Managers

and that still doesn’t include the one I use, i3.

It's easy to see why Ubuntu, a distribution aimed at the casual user, would opt not to emphasize the number of choices you have when it comes to picking a graphical environment!

And then many of the environments can be further configured through themes and settings that control the look, feel, and behavior for events like "click on a minimized window".  Yes, you can easily spend a day or more finding and configuring the "perfect" desktop.  But that's what makes Linux fun 😉