And Now You’re an Astronaut: Open Source Treks to The Final Frontier

There have been a couple of blog posts recently referencing NASA’s switch from Windows to Debian 6, a GNU/Linux distribution, as the OS running on the laptops aboard the International Space Station. It’s worth noting that Linux is no stranger to the ISS, as it has been a part of ground control operations since the beginning.

The reasons for the space-side switch are quoted as

…we needed an operating system that was stable and reliable — one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could.

This is satisfying to many Open Source/Linux fans in its own right: a collaborative open source project has once again proved itself more stable and reliable for the (relatively) extraordinary conditions of low Earth orbit than a product produced by a major software giant. Plus one for open source collaboration and peer networks!

But there’s another reason to be excited. And it’s a reason that would not necessarily apply (mostly) to, say, Apple fanatics had NASA decided to switch to OS X instead of Debian. That reason has to do with the collaborative nature of the open source movement, codified in many of the licenses under which open source software is released. Linux and the GNU tools, which together make up a fully functional operating system, are released under the GNU General Public License. Unlike many licenses used for commercial software, the GPL ensures that software licensed under its terms remains free for users to use, modify and redistribute. There are certainly strong criticisms and ongoing debate regarding some key aspects of the GPL, especially version 3. The point of contention mostly lies in what is popularly called the “viral” effect of the license: modified and derived works must also be released under the same license. The GPL might not be appropriate for every developer and every project, but it codifies the spirit of open source software in a way that is agreeable to many developers and users.

So what does this all mean in terms of NASA’s move? We already know that they chose GNU/Linux for its reliability and stability over the alternatives, but that doesn’t mean it’s completely bug free or will always work perfectly with every piece of hardware. That, after all, is another reason for the switch: no OS is completely bug free or works with all hardware, but at least Debian gives NASA the flexibility to make improvements themselves. And therein lies the reason for excitement. While there is no requirement that NASA redistribute their own modified versions of the software, there is no reason to assume they wouldn’t in most cases, and if they do, those versions will be redistributed under the same license. It’s certainly realistic to expect they will be directing a lot of attention to making the Linux kernel and the GNU tools packaged with Debian even more stable and more reliable, and those improvements will make their way back into the general distributions that we all use. This means better hardware support for all GNU/Linux users in the future!

And of course it works both ways. Any bug fixes you make and redistribute may make their way back to the ISS, transforming humanity’s thirst for exploring “the final frontier” into a truly collaborative and global endeavor.

Blasphemy?: DRMed Games on Linux

The interwebz have been all atwitter the past month or so with Valve’s announcement of the porting of their Steam gaming service to the GNU/Linux platform. Many Linux users were thrilled about the announcement and saw it as a sign that Linux was breaking out of the small niche culture of hackers to more mainstream folk who just want to turn their computer on and play a game. To be fair, Linux is not without a large number of free (both as in beer and as in speech) games already, but the announcement of a major (are they? I actually only heard of Valve and Steam because of the Linux announcement) gaming company moving to the platform was seen by some as legitimizing the OS to the masses. It certainly gives everyone something to talk about.

I consider myself more of a pragmatist when it comes to the philosophical debate surrounding free software (for those unfamiliar, the debate mostly deals with libre software; English has many deficiencies, one of which is the multiple meanings of the word “free”. In general, free software supporters do support the idea of paying for software and believe that people should be able to make money off of the software they write). I think free software is a great ideal to strive for, and certainly for mission critical software I believe it is important to have the freedom to view and modify the source code. As I brought up in vtcli earlier this semester, as an example it is important to have the freedom to confirm that the incognito mode of your web browser really is doing what it says it is and not storing or sharing your browsing information. (As an aside to that, I erroneously claimed that Chrome was open source; it is not. However, it theoretically uses the same code-base as Chromium, which is open source, and happens to be the browser I use both in Linux and OS X. I highly encourage any users of Chrome to switch to Chromium for the open sourced goodness it provides, including the ability to confirm that incognito mode really is incognito.) That being said, if there’s a great game I like, I am not terribly concerned with not being able to look at or distribute the source code, though I certainly would encourage game developers to release their code under one of the many open source licenses.

It is interesting to note that free software evangelist Richard
Stallman himself isn’t ALL doom and gloom about the news. Though he
certainly isn’t thrilled and encourages people to try out any of the
free games that are available, he does see the move as a possible
motivator for some people to ditch their non-free OSes completely if
gaming had been the only thing holding them back.

However, if you’re going to use these games, you’re better off using them on GNU/Linux rather than on Microsoft Windows. At least you avoid the harm to your freedom that Windows would do. – Richard Stallman

I installed Steam on my Arch Linux install last week and so far have tried out Bastion, Splice and World of Goo. All work very well and have been fun (I had played World of Goo before, on both OS X and Android; it is fun on any platform!). Officially, Arch Linux isn’t supported, but after adding a couple of the libraries and font packages mentioned on the wiki everything worked like a charm. One downside that Stallman failed to mention in his response is that it is much easier for me to spend money on games now that I don’t need to switch over to OS X to run them.

Does awareness of our limitations aid in overcoming them?

A couple of thoughts have been bouncing around in my head while reading. First, while reading As We May Think by Bush, but repeatedly with other sources as well, I was reminded of a thought I often have when reading science fiction written in the 1950s, around the same time Vannevar Bush wrote As We May Think. While on many levels the predictions of the future turned out to be quite accurate, there are notable exceptions that jump out at me while reading. A really good example that illustrates my point is Isaac Asimov’s Foundation series. The series is somewhat unique in that it covers a huge expanse of time in the fictional world, over 20,000 years if all the short stories and novels written by other authors after Asimov’s death are taken into account, and was written over several decades in standard non-fictional Earth time: the four stories that made up the first published book in the series were written between 1942 and 1944. Asimov thought he was done with the series after writing two more stories in 1948 and 1949 and went on to do other things for 30 years. After much continued pressure from fans and friends he published the 6th book in the series in 1982 and the 7th in 1986.

Three things struck me while reading the first part of the series, written in the 40s and 50s:

  • It was generally assumed that nuclear power was the energy of the future. The logical extrapolation was nuclear powered wrist-watches (OK, actually, I did read a compelling article fairly recently revisiting micro-atomic generators that use minuscule amounts of radioactive material to agitate a piezoelectric element and produce electricity, so maybe this wasn’t so far off the mark).
  • While we would have space ships capable of faster-than-light travel (hyperspace!), the calculations to perform jumps and ensure that the trajectory didn’t travel too near the gravitational effects of a star were done by a human, by hand. Particularly long jumps took the better part of a day to calculate and verify before it was deemed safe to tell the ship to execute the maneuver which itself would only take a fraction of a second.
  • There were no women whatsoever in any type of leadership role. We could say the same of ethnic minorities, non-heterosexual and non-cisgendered people as well, but we will give Asimov the benefit of the doubt and acknowledge that the U.S. was (at least visibly) much less diverse than it is today. But surely he knew about the existence of women.

These are little things you get used to when reading science fiction of the time. I think perhaps most interesting is that while it is common to extrapolate technology into the future with reasonable accuracy, the social structures imagined 10,000 years from now remain remarkably similar to those of the present, if science fiction authors have anything to say about it.

As I mentioned, the 6th book, Foundation’s Edge, was published in 1982. Within the first page or so it is revealed without fanfare that the mayor of Terminus, politically the (quasi) central planet of The Foundation (despite it being on the outskirts of the colonized worlds), is currently a woman. Also, due to much research and development, the latest spaceships have a new feature: hyperjumps are calculated in a matter of seconds by on-board computers. The old nuclear technology has also been replaced by state-of-the-art zero-point-energy extraction (if I recall correctly; it’s been a while since I read the books!), providing a nearly inexhaustible energy source to power your jaunts around the universe.

The changes, while artfully worked into the narrative and coherently worked into the fictional universe that had first been described over 30 years prior, still jumped out at the casual reader. I bring this up by no means to diminish Asimov’s work, or him personally (I’m a huge fan, having read and enjoyed just about every book he’s written at this point), but rather to suggest that we as a species have some fundamental limitations when it comes to predicting the future. We view the future through a lens designed by history and crafted in the present. While it is all too natural for us to extrapolate existing technology and social dynamics arbitrarily far into the future, and while that leads to some really fascinating scenarios, making significant conceptual leaps (such as the one attributed to Ada Lovelace) is much more difficult and happens much less frequently.

What I wonder, though, is after a long history of learning from our shortsightedness in some instances (and acknowledging our foresight in others), can we overcome this limitation? Are we now, compared to the 1950s, better able to make conceptual leaps and imagine technology and social structures that are fundamentally different from those of the present, simply because we are aware that we tend to make certain kinds of assumptions? Why would a woman even WANT to be mayor of a politically powerful planet?

On Farming, the Internet and Funny Hats

This is a picture of me wearing a hat I made:

A “Scott Pilgrim” hat I made.

It was made from the same pattern used to make the hat in the movie Scott Pilgrim vs. The World: the woman who did the work of adapting the hat drawn in the comic to something that could be made for a movie made her pattern available (for a small fee) on a social network for knitters and crocheters.

I’m writing this post right after finishing a dinner that included mushroom leek risotto, which I made while reading (risotto the real way involves a lot of stirring and pouring in broth a little at a time) Bringing It to the Table by Wendell Berry. The book is a collection of essays Berry wrote over several decades on the topic of farming and food. (Not entirely incidentally, Wendell Berry caused a stir and inadvertently started a flame war after writing his essay “Why I Am Not Going to Buy a Computer” back in 1987.) I ate my risotto out of a bowl that was hand made, though I don’t know by whom, which I picked out at the Empty Bowls charity event I attended on campus last semester. Along with the risotto I had some lentil soup (which I’m sorry to say only came from the organic section of Food Lion) served in a bowl that was hand made by a friend.

In his 1986 essay “A Defense of the Family Farm”, Berry says

As Gill says, “every man is called to give love to the work of his hands. Every man is called to be an artist.” The small family farm is one of the last places – they are getting rarer every day – where men and women (and girls and boys, too) can answer that call to be an artist, to learn to give love to the work of their hands. It is one of the last places where the maker – and some farmers still do talk about “making the crops” – is responsible, from start to finish, for the thing made. This certainly is a spiritual value, but it is not for that reason an impractical or uneconomic one.

People like to make things. We feel a deeper sense of connection to others when we use tools and wear clothing made by someone’s hands. In this essay Berry is cautioning against losing this rich tradition, embodied in the family farm, to the industrial agriculture complex. Now, in 2013, it is sad to say his cautionary foresight was well placed. Especially in the United States, and increasingly elsewhere as our “efficient” agricultural methods spread, we have become a society that is nearly thoroughly disconnected, in all the ways that matter, from the one thing our very survival depends on: our food.

In his essay “As We May Think”, Bush asked “What are the scientists to do next?” After the end of a scientific enlightenment of sorts, brought on by the War, he asked if we could turn that tremendous scientific energy towards something more constructive. One of the many results of the technological advancements made during the war was a radical transformation in the way we grow (and subsequently think about) our food.

It had been known for some time that plants need at least nitrogen, phosphorus and potassium (N-P-K) to grow (it turns out that to grow well they need much more, but at the time we were patting ourselves on the back for unlocking the mysteries of plant life). Once the war ended there was an abundance of nitrogen (a component of TNT) that needed to be put to good use. The need was so great that it was made available to farmers (in the form of ammonia) for cheap, so cheap that it made economic sense to switch to this commercial product instead of continuing with the tried and true method of spreading manure.

Along with this change came others. Because synthetic fertilizers could be produced, transported and spread in large quantities, and due to changes in the Farm Bill to promote food security, farm sizes grew and crop diversity shrank. With less diversity, less skill was needed, and the number of family farms in the U.S. dropped dramatically, from around 6 million immediately after WWII to just over 2 million in the early 1990s. Earlier in the same essay Berry writes

With industrialization has come a general depreciation of work. As the price of work has gone up, the value of it has gone down, until it is now so depressed that people simply do not want to do it anymore. We can say without exaggeration that the present national ambition of the United States is unemployment.

This was 1987, remember. Our current job crisis is certainly more complicated than the loss of family farms, but with the destruction of 4 million family farms came the loss of at least twice that many skilled full-time jobs.

All in the name of industrial efficiency.

What’s interesting, though, is that, like Berry said, we like making things with our own hands. And we know we like making things with our own hands; we just haven’t had much reason to since industrialization was held up as the solution to all the drudgery involved in actually practicing a skilled craft.

But like me and my hat, eating home-cooked food out of hand-made bowls, food made with ingredients purchased directly from farmers, we haven’t yet completely lost all our skills; they’ve just become hidden, something we practice in the privacy of our own homes.

I am cautiously optimistic that yet another layer of technology may in many ways help us build a stronger craft-based economy. Sites like Etsy have given artisans and people wanting to buy artisanal products a means to connect directly, without going through a middleman, eliminating an undesirable layer of indirection between the products we use and the people who made them.

Can the Internet help us reconnect with what we truly value: each other?

It’s a Feature, not a Bug

In his article The internet: Everything you ever need to know, John Naughton lists nine key concepts about the Internet to help us understand the profound impact it is having, and will continue to have, on our lives. Reading number 3, “DISRUPTION IS A FEATURE, NOT A BUG”, I found myself drawing parallels between the design of the Internet and the design of the Unix operating system. The similarities are no accident, as the histories of Unix and the Internet became closely intertwined after DARPA’s 1980 decision that the BSD Unix team would implement the brand new TCP/IP stack, which controls how data packets are routed between machines on the Internet.


The Tides of Change?

As Linus Torvalds has mentioned in several video interviews, probably the main reason Linux has been lagging behind in the desktop market is that it doesn’t come pre-installed on desktop hardware, and the average computer user just isn’t going to put forth the effort to install a different operating system than the one that came with their new machine and configure it*. Recently Dell caused a bit of excitement with the release of an Ubuntu edition of their XPS 13 laptop, aimed at developers.  To be fair, this is not the first machine that Dell has offered with Linux pre-installed, but it does seem to be the first that they’ve tried pushing to the mainstream (or in this case, developer) community (in the past you really had to make an effort to find the Ubuntu option on their ordering form).  Dell is also not the only desktop distributor to offer systems with Linux pre-loaded (indeed, many of the others exclusively offer Linux machines), but it is probably the brand with the most name recognition among a general audience.  Could this be the beginning of the end of the Microsoft monopoly on the desktop OS market?  I am optimistic!

*Be wary of the blog posts and forum comments that recount stories of installing Linux, being frustrated with the difficulty of getting all the necessary drivers for the hardware, and using that as an argument that the OS isn’t “ready” for prime time.  If you have ever installed Windows on a fresh new machine you will be well aware that it can be just as frustrating.  Windows doesn’t “just work” on the machines you buy because it is a superior OS (it isn’t); it works because system distributors like Dell take the time to make sure that the necessary drivers for the particular hardware in the machine are all included.

Rule of Diversity: Distrust all claims for “one true way”.

I’ve been programming simulations and algorithms in C++ for several years now, but it’s only been in the last year or so that I’ve really begun to appreciate the advantages of diversifying one’s language repertoire.  My conversion began after reading The Art of Unix Programming by Eric Raymond last year while exploring new material for ECE2524.  In the book Raymond lays out a list of design rules making up the Unix philosophy and explains how following these rules has produced clean, powerful, maintainable code and has been the reason why Unix has so easily evolved, adapted and flourished in the fast-paced world of technology from its roots in 1969, running on hardware that was under-powered even for the time.

The Rule of Diversity appears towards the end of the list, but is interesting in that I feel it is one of the few rules that people, curricula and corporations outside of the Unix community have yet to seriously embrace.  I hope that people with a Computer Science background can contribute their own view, but in my experience with the limited software design instruction in my Engineering curriculum, and from talking to several other people, the focus has been strictly on Object Oriented Programming (OOP), usually using Java or C++.

As a result of this focus on OOP, programmers (including my past self) are encouraged to adopt a programming paradigm that may work well for some problems but not for others.  I’ve learned from first-hand experience that forcing an OOP framework on a problem that doesn’t really lend itself intuitively to the features of the framework (data encapsulation, inheritance) leads to an impossible-to-maintain mess consisting of many layers of brittle “glue” code and a splitting headache as soon as concurrent processing is thrown into the mix.

Discovering the Rule of Diversity was refreshing to me for many reasons, as I tend to reject dogma of any kind, but hadn’t really been made aware of the alternatives when it came to programming.  This process led to me learning Python, which I now use to do the majority of my data visualization, and becoming interested in Ruby for web development and Haskell to learn about functional programming and how I might employ it to more elegantly implement the mathematical algorithms that are a large part of my field of study (Control Systems).
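To make the paradigm difference concrete, here is a toy sketch of my own (a hypothetical example, not taken from Raymond’s or Tate’s books): a forward Euler integration step, the kind of thing that comes up constantly in control systems work, written once in a class-heavy style and once as a pure function folded over the inputs. For a problem this small, the functional version states what it computes with far less ceremony.

```python
from functools import reduce

# Class-heavy version: state is hidden inside an object and mutated in place.
class Integrator:
    def __init__(self, x0, dt):
        self.x = x0
        self.dt = dt

    def step(self, dx):
        self.x += dx * self.dt  # forward Euler update, mutating self.x
        return self.x

# Functional version: the update is a pure function of its inputs,
# and a whole trajectory is just a fold (reduce) over the input sequence.
def euler_step(x, dx, dt=0.1):
    return x + dx * dt

def trajectory(x0, inputs, dt=0.1):
    return reduce(lambda x, dx: euler_step(x, dx, dt), inputs, x0)

# Both styles compute the same result for the same inputs.
integ = Integrator(0.0, 0.1)
for dx in [1.0, 1.0, 1.0]:
    oop_result = integ.step(dx)

fp_result = trajectory(0.0, [1.0, 1.0, 1.0])
print(oop_result, fp_result)
```

Neither version is “wrong”; the point of the Rule of Diversity is that the problem, not the curriculum, should pick the paradigm.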

When a friend of mine recommended Seven Languages in Seven Weeks by Bruce A. Tate I was intrigued by the idea of learning 7 different programming languages, along with their strengths, weaknesses, histories and accompanying programming philosophies.  When I found out that two of the languages covered were Ruby and Haskell I was sold.

I have decided to work my way through the book over the next 7 weeks, and write about my experience with each language.  So far I really enjoy the structure of the book, the motivations of the author, and in particular his method of associating each language with a unique fictional character:

  • Mary Poppins from Mary Poppins (1964) – (Ruby) because unlike other nannies of the time she “made the household more efficient by making it fun and coaxing every last bit of passion from her charges.” And wasn’t afraid to use a little magic to accomplish her goals.
  • Ferris Bueller  from Ferris Bueller’s Day Off (1986) – (Io) “He might give you the ride of your life, wreck your dad’s car, or both.  Either way, you will not be bored.”
  • Raymond from Rain Man (1988) – (Prolog) “He’s a fountain of knowledge, if you can only frame your question in the right way.”
  • Edward Scissorhands from Edward Scissorhands (1990) – (Scala) “He was often awkward, was sometimes amazing, but always had a unique expression.”
  • Agent Smith from The Matrix (1999) – (Erlang) “You could call it efficient, even brutally so, but Erlang’s syntax lacks the beauty and simplicity of, say, a Ruby.”  “Agent Smith … had an amazing ability to take any form and bend the rules of reality to be in many places at once. He was unavoidable.”
  • Yoda from Star Wars: Episode V – The Empire Strikes Back (1980) – (Clojure) “His communication style is often inverted and hard to understand”, “he is old, with wisdom that has been honed by time … and tried under fire.”
  • Spock from “Star Trek” (1966) – (Haskell) “…embracing logic and truth. His character has a single-minded purity that has endeared him to generations.”

Even though I’ve only begun working through the exercises in the Ruby section, I’ve already developed an intuition about the “personality” of each of these languages through Tate’s analogies.  He has also chosen the perfect level of detail, not spending any time on the details of the syntax or building up canonical examples, only focusing on what makes each language unique and powerful and expecting the reader to explore on his or her own to fill in the gaps.

I’ve almost worked through the Ruby section, so expect a post about that soon.  In the meantime, how do you feel about the Rule of Diversity?  Have you been put off when teachers/mentors/bosses treat a particular idea/framework/concept as “The One True Way”?

Industrialized Learning: Knowledge != Information

To comment on Dan’s post titled Industrialized Learning?? I agree that the industrialization of the search for knowledge is a scary thing indeed, and in many respects the structure of our education system has suggested a trend in that direction (for more on that, watch this excellent video).  However, I don’t think that is necessarily Google’s goal.  As Carr mentioned, Google’s mission is “to organize the world’s information and make it universally accessible and useful.”  Organizing information, and creating tools to systematically archive and find information, is not the same as industrializing the search for knowledge.  In fact, I would argue that the search for knowledge benefits if all the world’s existing information is organized in a systematic way that makes it easy for anyone to access.  Libraries have had a similar goal long before Google came around.

Now certainly, the way our information is organized and the way we search for it does change the way we think about information, I think that was one of the key points in Carr’s article.  However, changing the way we think in and of itself isn’t necessarily a bad thing, and it is in fact a continuous process that has been a reality since we were able to stand upright freeing our hands for tasks other than mobility.

To be clear, I am not suggesting we give Google (or any other organization that deals with our information) carte blanche when it comes to the handling of the world’s information.  However, it is important to keep in mind that while information is a component of knowledge, information alone does not define knowledge.  The fear that the standardization and systematization of tasks will turn humans into mindless automatons is certainly something to think about, and there is plenty of evidence from the Industrial Revolution that that is indeed the case.  However, the same economic forces that favor the standardization and systematization of a task also favor replacing a human automaton with a robot designed to complete that task.  Of course, currently we are facing a new problem as a result: how do we employ all these people whose jobs have been replaced with robots?  We definitely need to have that discussion, but I don’t think the answer is to fight the systematization of tasks and put people back in those jobs, but rather to focus on what we are still better at than any algorithm: creative thinking and imagination, and the search for knowledge.

The Freedom to be “Technologically Elite”

Also in response to Kim’s recent post.  I think the conversation about access and the issue of digital inclusion is a very important one to have, and we need to continue to be aware of how the tools and technologies we use may include or exclude people.  I would like to talk a bit about the concept of the “technological elite” that Kim brought up.  It’s important to be aware that it is possible for an elite minority to control the tools the majority comes to depend on, but that is not currently the case, and in fact, I would argue that it will only become a reality if the majority allows it to happen.  As Jon Udell mentioned in his conversation, the Internet itself has always been and continues to be an inherently distributed technology.  No single organization, whether corporate or governmental, owns or controls it.  There have been attempts, and there will continue to be attempts, to restrict freedoms and tighten control, like the recent attempted SOPA/PIPA legislation, but it is our responsibility to continue to be aware of those attempts and fight them.

Many popular software tools in use, including WordPress, which powers this blog, are free and open source.  This means that anyone can take a look at the source code to learn what is going on behind the scenes, and in many cases modify and improve that tool for their own or public use.  The language that WordPress is written in, PHP, is not only open source, but there are a plethora of free tutorials online for anyone interested in learning how to program in it.  The database used by WordPress to store and retrieve content, MySQL, is currently open source, though the project itself was originally proprietary.  (Another relational database management system, PostgreSQL, has been open source for the entirety of its lifetime and in many cases can be used as a drop-in replacement for MySQL.)  The majority of servers powering the Internet run some version of the Linux operating system, itself freely available and open source.

Each of these projects, at the various layers that build up to form the tools we use, is generally well documented, with enough information freely available to allow anyone who wants to become an expert in their use and design.  Now of course, not everyone will become an expert, and the experts for any one project are not necessarily experts in any other.  But specialization has allowed us to advance as a society in a way that would not be possible without it.

And because I love food:

When many of us began specializing in fields that did not involve agriculture and food production, we became dependent on those who did for our very survival.  Yet I can’t remember the last time I’ve heard anyone call farmers members of the “Agricultural Elite”.  Like the Internet tools I’ve mentioned, any of us have the agency to become experts in farming if we so choose.