Humans in the loop

Today’s hot article in the local twitterverse is a New York Times piece called Algorithms Get a Human Hand in Steering Web. I discovered it from a tweet by @GardnerCampell, along with a beautiful retweet by @mzphyz:

Above all: Algorithms are human constructs, embodiments of our thought & will.

Which really sums up this entire post, so for the TL;DR crowd, you can stop reading right now!

The article mentions a number of examples of human-in-the-loop algorithms currently being employed on the internet, notably in Twitter’s search results and Google’s direct information blurbs (not sure what they call them, those little in-line sub-pages that show up for certain search terms, like a(ny) specific U.S. president, for example).

What I found interesting was that the tone of the article seemed to suggest that the tasks humans were doing as part of the human-algorithm hybrid system were somehow fundamentally unique to our own abilities, something that computers just could not do. I’m not sure if this was the intended tone, but either way, I found myself disagreeing.

Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them.

True, but I would argue that our own brains are “literal-minded” as well; there are just layers and layers of algorithms running on our network of neurons that give the impression of something else (this ties in nicely to a post by castlebravo discussing what, fundamentally, computing is). I think the underlying reasons we have humans in the loop are closely linked to the next sentence:

Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.

Not only is spoken language ambiguous, but we lack a solid understanding of reasoning, or how our brains work. And we, after all, are the ones programming the algorithms.

In the case of the twitter search example, it struck me that all the human operator was doing was something like this:

if search_term == 'Big Bird' and near(current_time, election_season):
    context = 'politics'
else:
    context = 'Sesame Street'

which looks rather algorithmic when written out as one. Granted, this would be after applying our uniquely qualified abilities to interpret search spikes, right?

if significantly_greater_than(instantaneous_average_occurrence_of('Big Bird'),
                              all_time_average('Big Bird')):
    context = find_local_context('Big Bird')
else:
    context = 'Sesame Street'

Of course find_local_context is a bit of a black box right now, and significantly_greater_than may seem a bit fuzzy, but in both cases you could imagine defining a detailed algorithm for each of those tasks… if you have a good understanding of the thought process a human would go through to solve the problem.
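To make that concrete, here is a rough sketch of how the spike-detection half might look in Python. The threshold, the windowing and the sample numbers are entirely my own assumptions for illustration, not anything Twitter actually does:

from statistics import mean, stdev

def significantly_greater_than(current_rate, historical_rates, num_sigmas=3.0):
    """Return True if current_rate is well above the historical average.

    'Well above' is defined here, arbitrarily, as more than num_sigmas
    standard deviations over the mean of the historical samples -- a
    crude but common way of flagging a spike.
    """
    if len(historical_rates) < 2:
        return False  # not enough history to say anything meaningful
    baseline = mean(historical_rates)
    spread = stdev(historical_rates)
    return current_rate > baseline + num_sigmas * spread

# Hypothetical usage: hourly counts of the term over the past while,
# compared against the count for the most recent hour.
past_hourly_counts = [12, 9, 15, 11, 10, 14, 13, 8, 12, 11]
if significantly_greater_than(140, past_hourly_counts):
    print("spike detected: go dig up the local context")
else:
    print("business as usual: 'Sesame Street'")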

Ultimately, humans are only “good” at deducing context and nuance because of our years of accrued experience. We build a huge database of linked information and store it in the neural fabric of our minds. There isn’t really anything, at least at a fundamental level, preventing us from giving current digital computers a similar ability. Theoretically, as our hardware approaches the capabilities of an “ideal computer” (one that can simulate all other machines), and as our understanding of human psychology and neurology advances, we could simulate a process very similar to the one that goes on in our brains when we deduce context and nuance.

The current trend of adding humans into the loop to increase the user friendliness of online algorithms has more to do with our lack of understanding of human thought than with any technical limitations posed by computers.

Are we sacrificing creativity for content?

I decided to become an engineer, before even knowing what “engineering” was, because of a comment my 4th grade art teacher made regarding an art project. I’m pretty sure she meant it as a compliment.

The concept of “<insert form of creative expression here> is <insert sensory-related word here> math” is nothing new. From the mathematics of music, to the use of perspective in visual art, there is no escaping the mathematical nature of the universe. All art, no matter the medium, can be thought of as offering a different view of our underlying reality. A different way of looking at the equations, a way of looking at math without even realizing it’s math.

Then why is the emphasis in the engineering curriculum all on the math? Sure, it’s important. Knowing the math can mean the difference between a bridge that collapses[1] and one that is a functional art exhibit. Or the difference between a Mars Climate Orbiter that doesn’t orbit and a Mars rover that far exceeds its planned longevity. But it’s still just one view.

If you have ever tried applying to watercolors the same layering techniques commonly used with oil paints, or tried to write a formal cover letter in iambic tetrameter, you have firsthand experience that the choice of medium has a large impact on the styles and expressive techniques available to the artist. Likewise, the choice of programming language has a similar effect on the capabilities and limitations of the programmer.

see the code

And on the flip side, anyone who can write a formal cover letter, or who is intrigued by writing one in iambic tetrameter, should learn a programming language or two. It’s yet another form of artistic expression, one that can transform the metamedia of the computer into a rich, expressive statement, or produce an epic failure of both form and function.

Footnotes:

[1] Though there is a beauty to the mathematics of this particular failure.

iBreakit, iFixit

This past weekend ended up being the weekend of repairs as two lingering problems increased to a point that they could no longer be ignored:

  1. The drain pipe of my bathroom sink completely detached itself from the sink basin.
  2. The aluminum frame on the display of my laptop began peeling away
    from the LCD panel to such an extent that I was concerned continued
    use could result in cracking the front glass.

this can’t be good

I will spare you, dear reader, the gory details of the fix to the first problem (it involved a trip to the hardware store, some plumber’s putty and an old toothbrush) and instead focus on the latter.

As is usually the case with these things, my trusted laptop had long since left its comfortable status of “covered under warranty” when this issue began, and while some googling revealed that I am not the only one to experience this phenomenon, it seemed I wasn’t going to get much loving care from Apple. I was also fairly certain they would have made some silly claim that they couldn’t do anything about the clearly mechanical problem because I was running Linux on my machine instead of OS X. (Full disclosure: they probably would have been justified in saying so in this case. One hypothesis for the cause of this problem is excessive heating of the upper left corner, which breaks down the glue holding the aluminum backing to the LCD panel. While a number of non-blasphemous OS X users clearly had the same problem, my case certainly isn’t helped by the fact that two of the things that don’t always work out-of-the-box on a new Arch Linux install on a MBP are the fan control software and “sleep on lid close”. As a result, there have been a number of times my laptop has overheated after I pulled it out of my bag to discover it had never gone to sleep when I put it in. Plus, I’ve definitely dropped the thing a number of times, as the dents and scratches indicate. Woops.) That all being said (warning: tangent alert), I was told of another experience in which an Apple tech rep thought perhaps there was a virus after seeing a syslinux boot screen pop up[1].

It would have cost $60 to have the nice folks at the campus bookstore take a look at it, not including any repair costs. The Apple-sanctioned “fix” for this is a full replacement of the display assembly (which seems silly since there’s really nothing wrong with the display), costing around $400-$600, depending on who you talk to (or apparently $1000 if you’re dealing with Australian dollars). Long story short[2], I decided I didn’t have much to lose[3], and some substantial costs to save if I attempted a DIY fix.

Now, let me be very clear: the fact that I happen to have a degree that says “Computer Systems Engineering” in the title has little to no bearing on the skill set and knowledge base required for this repair. Honestly (and those of you who are currently pursuing a CpE degree, please reassure the non-engineers that this is the truth). I say this because it is important that everyone know they are fully capable of making many of their own repairs to their various pieces of technology[4]. The topic of technological elitism came up last year in a GEDI course: there is concern that as we integrate more and more technology into our lives we become more and more dependent on those who understand how the technology works. My counter-argument to that concern is that while there is certainly more to learn and more skill involved in the service and repair of a computer than, say, a pen and paper, there are many excellent resources freely available to anyone who takes the initiative to learn about them. One great resource that I used for this particular repair is ifixit.com, a wiki-based repair manual containing step-by-step guides for everything from replacing the door handle on a toaster oven to various repairs of your smartphone. Since I knew that if I had any chance of pulling this off I would need to lay the display flat, the guide I found most relevant to the endeavor at hand was Installing MacBook Pro 15″ Unibody Late 2008 and Early 2009 LCD.

Supplies needed[5]:

required items

  1. The computer to be repaired
  2. Mini screwdriver set
  3. Donut, preferably coconut
  4. Coffee
  5. Working computer that can access ifixit.com
  6. 5-minute epoxy
  7. T6 Torx screwdriver
  8. A reasonably heavy, flat object
  9. Stress relief

Step-by-step image gallery

  1. Follow the steps in the ifixit guide to remove the display assembly from the body of the laptop.
  2. Reset donut.
  3. Attempt to apply epoxy in the gap between the aluminum backing and the display, apply pressure, and wait a couple of hours.
  4. Reassemble laptop, power on and use.
  5. Determine that the epoxy is not holding, either due to age or to poor application caused by limited access to the surface.
  6. Power down and re-disassemble the laptop.
  7. Using a heat gun to loosen the remaining adhesive around the display casing, gently pry off the aluminum backing completely.
  8. This is a perfect opportunity to “pimp your mac” and add some sort of creative graphic behind the apple logo. All I could find was some engineering paper, which turned out somewhat ho-hum.
  9. Attempt to remove the old adhesive with acetone and/or mechanical force. Give up.
  10. Working quickly (it is 5-minute epoxy, after all), mix up a fresh batch of epoxy and apply it intelligently around the edge of the display casing, choosing places that look least likely to cause problems if it runs over (e.g. avoid the iSight camera housing).
  11. Carefully position the aluminum backing back on the display casing, press firmly and wipe away excess epoxy.
  12. Apply gentle pressure for 5-10 minutes, then let cure for another hour or so before reassembly.

    analog media is still relevant

  13. Re-assemble.
  14. Success!

Footnotes:

[1] It does make you wonder which dictionary Apple’s marketing department was using when they came up with the “Genius” title. A more accurate title, with 100% more alliteration, would have been “Apple Automaton”, since they do an excellent job when a problem is solvable by means of a pre-supplied checklist. Don’t get me wrong, I think Apple’s tech support is generally pretty good, as are their employees. And they are completely within their rights to refuse to offer any service or advice to customers who have opted out of the software/hardware-as-one package they provide. But it doesn’t (shouldn’t) take a genius to determine that a different bootloader from Apple’s default is not a virus.

[2] Too late.

[3] Aside from possibly rendering my display useless.

[4] If you have ever replaced a tire on your car but freak out at the idea of fixing your own computer, briefly consider the consequences of a botched repair job on each. Statistically, you are much more likely to die in a horrible, fiery crash as the result of a bad tire replacement than from a botched attempt at re-gluing your laptop screen together. Just something to think about.

[5] WordPress fail: I could not figure out how to tell WordPress to use letters to “number” this ordered list without changing the style sheet for my theme. It could be user error, but I prefer to blame WordPress.

Creative writing, technically

A number of recent conversations, combined with topics-of-interest in both ECE2524 and VTCLI, followed by a chance encounter with an unfamiliar (to me) blogger’s post, have all led me to believe I should write a bit about interface design and the various tools available to aid in a writing workflow. No matter our field, I’m willing to bet we all do some writing. Our writing workflow has undergone some changes since transitioning to the digital era; the change most notable for my interests is captured in this quote from the aforementioned blog post:

…prior to the computerized era, writers produced a series of complete drafts on the way to publication, complete with erasures, annotations, and so on. These are archival gold, since they illuminate the creative process in a way that often reveals the hidden stories behind the books we care about.

The author then introduces a set of scripts a colleague wrote in response to a question on how to integrate version control into his writing process. The scripts are essentially a wrapper around git, a popular version control system used by software developers and originally designed to meet the needs of a massively distributed collaborative project, namely the Linux kernel.
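To give a flavor of what a wrapper like that might look like, here is a minimal sketch of my own (hypothetical, and much simpler than the actual scripts the post describes): a tiny “snapshot” command that commits the current state of a manuscript along with a note about what you were doing, assuming the manuscript directory is already a git repository.

#!/usr/bin/env python
"""snapshot.py -- a hypothetical, minimal git wrapper for writers.

Usage: python snapshot.py "finished second draft of chapter 3"
"""
import subprocess
import sys
from datetime import datetime

def snapshot(note):
    # Stage every change in the working directory...
    subprocess.check_call(["git", "add", "--all"])
    # ...and record it with a timestamped message, so the repository's
    # history doubles as a journal of the writing process.
    message = "%s: %s" % (datetime.now().isoformat(), note)
    subprocess.check_call(["git", "commit", "-m", message])

if __name__ == "__main__":
    snapshot(" ".join(sys.argv[1:]) or "snapshot")

A real tool would presumably go further (the post hints at recording things like what you had been reading, your mood, even the weather), but the core idea is the same: because the writing lives in plain text, a few lines of glue around git are all it takes.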

What’s really great about this (aside from the clear awesomeness of a sci-fi author collaborating with a techie blogger/podcaster to create a tool that is useful and usable by writers, using tools that are useful and usable by software developers) is that it brings into clear focus some thoughts I wanted to get out last semester about the benefits of writing in a plain text format.

This gets back to one of the recent conversations that also ties into all of this: I was talking to a friend of mine, another grad student in a STEM field, and we were discussing the unfortunate prevalence of MS Word for scientific papers. I don’t want to get into a long discussion of the demerits of MS Word in general, but suffice it to say, if you are interested in producing a professional quality paper, and enjoy the experience of shooting yourself in both feet and then running a marathon, then by all means, use MS Word. There are also a number of excuses of questionable validity that people use to defend their MS Word usage in scientific writing. The ones that are brought up most often involve the need to collaborate with other authors who are also using MS Word.

Now run that marathon backwards while juggling flaming torches.

I should point out I don’t want to just pick on MS Word here; the same goes for Apple’s Pages or any large software package that tries to be the solution to all your writing needs. I will henceforth refer to this problematic piece of software generically as a “Word Processor”, capitalized to reinforce the idea that I am indeed referring to a number of specific, widely used tools.

The conversation led to user interfaces, and the alleged intuitiveness of a modern Word Processor compared to a simple yet powerful text editor such as emacs or vim. Out of that, my friend discovered a post on a neuroscience blog about user friendly user interfaces that did a nice job putting into writing thoughts I had been trying to verbalize during our discussion: namely, that the supposed intuitiveness of a Word Processor to “new” users is largely a factor of familiarity rather than any innate intuitiveness of the interface. Once you learn what the symbols mean and where the numerous menu items you need are, it all seems just dandy. Until they go and change the interface on you.

I could and probably should write an entire post on ALL the benefits of adopting a plain-text workflow, and of using one text editor that you know well for all your writing needs, from scientific papers to blogs, presentations and emails (how many people ever stop to think about why it is acceptable and normal to have to learn a new user interface for each different writing task, even though fundamentally the actual work is all the same?). The key benefit I want to highlight here is the one that made possible the collaborative effort I mentioned towards the top. By writing in a plain text format, you immediately have the ability to use the enormous wealth of tools that have been developed throughout the history of computing to work with plain text. If our earlier mentioned hero had been doing his writing in a Word Processor, it would have been nearly impossible for his friend to piece together a tool that allows him to regain something that was lost with the transition away from a paper workflow, a tool that can “illuminate the creative process in a way that often reveals the hidden stories”, and in many ways goes beyond what was possible or convenient with the paper workflow.

What tools do you use to track your writing process? Do they allow you to go back to any earlier revision, or to easily discover what recent blogs you had read, or what your mood and the weather were when you wrote a particular passage? Do you use a tool with an interface that is a constant distraction, or one that is hardly noticeable and lets you focus on what actually matters: the words on the page? If not, then why?

I am a Selfish Git: A bit on my teaching philosophy

A common observation I hear from people who have taken my class is that there is less structure in the assignments than they are used to, and oftentimes less than they would like. A consequence of this is that participants do a lot of searching the web for tidbits on syntax and program idioms for the language du jour, a process that can take time given the wealth of information returned by a simple google search. I could track down some research showing the benefits of this “look it up yourself” approach, and it would all be valid, and it is one of the reasons I structure assignments the way I do, but there is another reason. A more selfish reason.

Throughout the term I’ll give a series of assignments. Details are tweaked each semester but the general outline is something like:

  • read in lines of numbers, one per line, do something with them and write out a result number (a minimal sketch of this idiom follows the list).
  • read in lines of structured data, do something with them, write out lines of structured data.
  • spawn a child process, or two, connect them with a pipe (this year I will probably integrate the “read in lines” idiom into this assignment since I like it so much).
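
For that first exercise, a solution in the spirit I’m after might look something like this (a minimal sketch for illustration, not the official solution, and the details vary from term to term):

#!/usr/bin/env python
"""Read numbers from stdin, one per line, and write out their sum."""
import sys

def main():
    total = 0.0
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue  # quietly skip blank lines
        total += float(line)
    print(total)

if __name__ == "__main__":
    main()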

I’ve done each of these myself, of course, and tweaked my own solutions from year to year, and have found a structure for each that I think works well, is easy to read and is as simple as possible. Oftentimes my solutions use fewer lines of code than some of the solutions I receive, which admittedly makes my estimates of how long a particular assignment will take inaccurate. I know some of the assignments end up taking a lot longer than I anticipate for some, and this can be extremely frustrating, especially since I know everyone’s time is a precious commodity that must be partitioned across other classes and personal time too (you are making time for play, aren’t you?).

I could provide more details in the write-ups. I could say “I tried algorithm X a number of ways: A, B and C, and settled on B because P, Q and R”. It would save those completing the assignments time and it would save me time, because on average the results I’d get back for grading would take up fewer lines of code and be more familiar to me. And that is why I don’t.

If I wrote in the assignment “for part A, use method B in conjunction with idiom X and you can complete this part in 3 lines”, then I can guarantee you that around 99% of the 60 assignments I received back would use method B in conjunction with idiom X in only 3 lines of code. It would be much easier to evaluate: I’d be familiar with the most common errors made when using method B in conjunction with idiom X and would have made spotting them quickly a reflexive response.

But I wouldn’t learn a thing.

Let me tell you a secret. Sure, I enjoy seeing others learn and explore new ideas and get excited when they discover they can write something in 10 lines in Python that took them 30 in C. I really do. But that’s not the only reason I teach. I teach because I learn a tremendous amount from the process myself. In fact, all that tweaking I said I’ve done to my solutions? That was done in response to reviewing a diverse (sometimes very diverse) set of solutions to the same problem. Oftentimes I’ll get solutions written in a way I would never have used solving the problem myself, and my first reaction is something like “why does this even work?” And then I’ll look at it a little closer (often doing a fair amount of googling myself to find other similar examples) until I understand the implementation and form some opinion about it. There are plenty of times that I’ll get a solution handed to me that I think is cleaner, more elegant and simpler than my own, and so I’ll incorporate what I learned into my future solutions (and let’s not forget back into my own work as well, a topic for another post). And I’ll learn something new. And that makes me happy.

I really like learning new things (thank goodness for that, given how long I’ve been in school!), and I have learned so much over the past couple years that I’ve been teaching. Possibly more than what I’ve learned in all the classes I’ve taken during my graduate career (different things for sure, which makes it difficult to compare amount, but still, you get the idea).

To be sure, there is a balance, and part of my own learning process has been to find that sweet spot between unstructured free-style assignments (“Write a program that does X with input B. Ready, go!”) and an enumerated list of steps that will gently guide a traveler from an empty text file to a working program that meets all the specs. I think I’ve been zeroing in on the balance, and the feedback I get from blogs as well as the assignments themselves is really helpful.

So keep writing, and keep a healthy dose of skepticism regarding my philosophy. And ask questions!

A Comment on “A comment on commenting”

In his post A comment on commenting, leon.pham commented on the annoyance of remembering different commenting syntax in different languages. It’s true, it is a lot to keep track of. Luckily, if you use a good text editor, such as emacs or vim, you can offload the task of remembering which syntax to use to the editor itself. For instance, emacs has two commands to aid in creating comments: one to block off a highlighted region in comments, and another to add an end-of-line comment. Once you learn the command for each (adding an end-of-line comment defaults to M-; in emacs, where M is the “meta” key, or “Alt” on most keyboards, but of course you could map it to anything you want), that’s it. The editor is generally smart enough to know what language you are currently writing in (and of course you can override it when you need to), and so the universal “add a comment” command that you learn once will always add a comment in the proper syntax for the language you are currently editing! Just another motivation to learn one editor and learn it well!

I will leave it as an exercise for the vim-using reader to post information about the equivalent command in vim!

Git Games and Meta Moments

I had a bit of a meta moment while swimming today. I have a lot of
good moments while swimming, probably because it’s a chance for my
mind to wander. That’s probably a good argument to go more often than
I did this past week (1 out of 6 possible practice days!). But I
digress.

Yesterday I was introduced to a game of sorts to help learn some
concepts used by git. For those of you who don’t know, git is a
versioning control system that has gained quite a bit of popularity
over the past few years, especially in the open source community. I
had been using it myself for my own projects, but mainly at a very
simplistic level.

At one level, a versioning control system (VCS), of which git is
one of many, is a tool to facilitate documenting the changes
of… well, a document. Historically these systems were developed by
software designers both to document changes and to provide an easy
path back to older versions of source code. Later, similar concepts
were implemented in modern word processors, though with limited
scope and power: essentially, the traditional method of tracking
edits from the pen and paper days was ported over to the electronic
medium without much change.

One thing that became much more clear to me after trying out the git
game was that while providing logical “snapshots” of a project that
can be used as a return point if something goes astray in the future,
git is creating a history of the project, a history that tells a
story. But unlike other histories you may be familiar with, the
history generated by git can be rewritten to change the past.

What had eluded me up until this point was what motivation one might
have to rewrite history. I figured, you make changes, commit them to
the project, those changes get recorded, what more would you need?
Well, it turns out that with the ability to rewrite history, git makes
it incredibly easy to do certain types of edits on your data and
allows an author to use git more as a tool for trying out new,
possibly risky ideas, or take off on a tangent while always providing
a clear path back to a ground point.

The details of what these types of edits are are important, but after
I began writing them up I realized I was losing sight of my original
reason for writing this post! Luckily, I have been using git to track
changes to this document, and created a branch for each of the
examples I thought would be useful. I’m going to leave them out of
the final document for now, but they exist in my history, and since I
will post this project to github, you are free to take a look!

What I thought about while swimming, after the git game helped me
understand why rewriting history could be so useful, and how the
history itself could be used, was that since I’m using git to manage
the files for ECE2524, I could also use it to guide future semesters
the course. Every time I add a new set of lecture notes, or add a new
assignment, I make a commit to a git repo containing all the files
I’ve used so far. That is also recording the order in which topics
are introduced to the class, so I’m generating an outline for the
semester just by nature of using git for regular, garden-variety
versioning control.

But I had an hour and a half to occupy my brain while I swam back and
forth, so the wheels kept turning. We use git for class, as those of
you in it know, because it is an important tool for software
development and happens to be a particularly Unix-y tool to boot. The
Unix-i-ness of git is something I will leave for discussion in class
tomorrow (oh, the suspense!). We use git, but it is a complicated
tool to learn, even though what it is doing is quite simple, once you
grok it, something that doesn’t always happen quickly, and never as
quickly as you would like.

But the information tracking ideas from git can be related to the
process we go through in class, which can also be related to the
discussion on working memory vs. long-term memory we had in vtlci. The
process of learning new things involves some experimentation and a lot
of data filtering. We have a lot of information available to us, the
culmination of which can be thought of as the contents of our “working
directory” in git terms. As we individually work through the
information and inspect it through our own lens we commit pieces of it
to our memory, our repository. Though we’re not actively doing it,
there is a log associated with this process. It is not as precise as
something stored on a computer, of course, but looking back on the
past few days we can recall things like “concept X made a lot more
sense to me after I understood hypothesis Y, which became clear after
working through exercise Z.”

What if we were more conscious of this process in class, and even made
an effort to map it more directly to the concept of using git? For
instance, one versioning control concept we’ll start to explore
tomorrow is branching and merging.

A branch can be thought of as a temporary deviation away from the main
story line. In fact, in my first paragraph I went off on a bit of a
tangent about my tendency to let my mind wander while swimming. That
could be thought of as a branch away from the main topic, which (I
promise) is about using git to map the journey we take to learn to use
git. In fact, I switched to a new branch in git when I began writing
those sentences, and then merged them back into the main conversation
when I was done.

What if the class were split up into groups, and each group worked on
one aspect of learning to use git’s branch and merge functionality.
For instance, Group 1 might play the git game, while group 2 might
read about how git represents and references data and what is going on
under the hood. At that point, the collective commit knowledge of the
class will have split into two branches. One branch with more of a
pragmatic grasp of “this is how I do a branch and merge” and another
branch with a better understanding of “this is how a branch and a
merge is implemented by git”. Then, the following week, both groups
would come back together and share what each learned. The two groups
will have just “merged” their knowledge and everyone should have a
better understanding of how to conduct a branch and a merge, and also
what is going on with the underlying data structure when they do one.

Oh, and by the way, when writing that last paragraph I created two new
branches: one named “group1”, which contained the description of what
the hypothetical group1 would do, and another called “group2”, which
contained the sentence describing that group’s task. Then I merged the
two back into the master branch, reformatted the paragraph and added a
summary. Check out this history on github!

So this whole process got me thinking. Does thinking about meta
thoughts make it easier or more likely to think about meta thoughts in
the future? And likewise, does it make it easier to draw comparisons
between seemingly unrelated processes, such as learning new ideas, and
software development, when you have a process and a vocabulary to
describe the process of each? I am a strange loop.

Stranger in a Commonplace Land

As I began reading the two introduction essays by Janet Murray and Lev Manovich to The New Media Reader, I was at first a bit overwhelmed by the length of each. This immediately made me think of an article that was referenced in the previous reading, “Is Google Making us stupid?”: was the fact that I initially gawked at so many words and pages a result of my immersion in a world of near-instant informational gratification and 140 character thoughts? The thing is, I have no problems whatsoever reading a 500 page novel, if it’s interesting, and indeed there were certainly pieces of each introduction that jumped out at me:

All creativity can be understood as taking in the world as a problem. The problem that preoccupies all of the authors in this volume is the pullulating consciousness that is the direct result of 500 years of print culture. – Janet Murray

The concept of defining a unifying model that describes all of creativity is quite appealing to me. “The world as a problem” seems at the same time both a grossly oversimplified and a perfectly succinct description of creativity as I see it, particularly in my field of engineering. Murray then goes on to draw contrasts between “engineers” and “disciplinary humanists”, which particularly piqued my interest because I often feel like an outsider looking in when talking to other engineers about humanistic concepts, but also an outsider when trying to explain how I see engineering to “disciplinary humanists”. The second essay provided a nugget that helped direct my thoughts on this curious feeling of duality:

Human-computer interface comes to act as a new form through which all older forms of cultural production are being mediated. – Lev Manovich

Whether we like it or not, this is becoming the reality. We now get our books, music, movies and even long distance personal interaction mediated by computers and the interfaces they provide us. The thing is, any good engineer knows that if a piece of technology is doing its job, it should be transparent to the user. While reading both of these essays I found myself thinking: why are we trying to force so much focus on the “new” in “new media”? Is our doing so an indication that we as engineers still have more work to do to make the current technology transparent (I think we do), or is society so transfixed by “new” technology for some other reason that we are refusing to let it become as transparent as it could be?

Manovich, I think, would disagree on that point, at least for the U.S., as one of his explanations for the late start of new media exhibits in the U.S. was the rapid assimilation of new technology, such that it became ubiquitous before we had time to reflect upon its potential impacts. As I write that I feel myself rethinking my own view, because I don’t want to suggest that we not reflect upon the impact of technology that we now take for granted; in fact I have often felt we need to do much more reflecting, and I agree wholeheartedly that we have adopted some technologies that have drastically changed our day-to-day lives (who plans things in advance any more when you can just text your friends last minute to find out where people are?) and that may have consequences far beyond the superficial sphere of their direct influence (if we don’t plan our days, are we losing our skill at thinking into the future and acting accordingly in general? Are we becoming a species obsessed with living in the moment and unable to live any other way?).

I’m in danger of rambling now, but I now have a better understanding of why I found it difficult to focus on the entirety of both essays.  Everything around each nugget either seemed redundant, overly descriptive, or a distraction from the thought process that had started forming in my head.  If good technology should be transparent to the user, why are we spending so much time worrying about it? And what are the consequences if we don’t?

Re: large scale makefiles

A recent post asked about maintaining makefiles for larger projects. It is certainly true that manually updating lists of dependencies for many source files (hundreds… thousands even; the Linux kernel comprises some 22,000 source files) can become tedious. Luckily there are tools that will generate Makefiles for you, though really their motivation is to automate the build process on a wide variety of machines and platforms. If you’ve used Qt for development you have probably been using the ‘qmake’ command. This generates a ‘Makefile’ that is then read by a subsequent call to ‘make’. For more general projects GNU provides Autoconf and Automake. Along with a couple of other programs these are referred to as the GNU Autotools. If you ever need to install a piece of GNU software from source (if a package isn’t available for your distribution, for instance), chances are it will use Autotools and you will build it with two commands:

$ ./configure
$ make

The first command will generate the Makefile that is then used when you run `make`.  In the process of generating the Makefile, `configure` will check your system for necessary libraries and tools and notify you if something needed is missing.  This is also where you would specify optional build parameters.  To get a list of options run

$ ./configure --help

If you decide you do want to make your project available to the open source community, it’s a good idea to set up the build process using GNU Autotools since folks will be expecting it.

Another option is cmake, which provides similar functionality to GNU Autotools and in addition has the capability to generate project files for a number of IDEs. A quick google search will turn up several commentaries on the merits of the two build systems (and probably several others).

If you do find yourself writing several Makefiles for larger projects (and even if you don’t), be sure to familiarize yourself with the issues raised in the now well-known paper Recursive Make Considered Harmful by Peter Miller.

Academic Privilege: Experiences as a white cisgendered gay male atheist Engineer

Wow. So after skipping out of PFP early on Monday to attend a talk titled “Why are you Atheists so Angry” by Greta Christina, I was going to write a post about what angers me about the current state of academia (for those of you not familiar with Greta’s talk, anger in this context is not a bad thing; it is a powerful motivator for social change). In the process of confirming the url to her blog, a curious random happenstance led me to this post from July, 2011, which in turn led here and finally to Of Dogs and Lizards: A Parable of Privilege.

I’m not going to rehash the situation and subsequent discussion that led to the first two links, but if you have time for nothing else, read “Of Dogs and Lizards” immediately after this (or earlier if you find yourself thinking that I shouldn’t be “making a big deal” about this).

This whole sequence of posts was really relevant to me because I had just spent a good deal of time last week discussing the concept of “privilege” with a group of friendly folks. The parable did a better job of explaining it than I did, I think.

It’s important to understand privilege because it exists at all levels in higher ed, and has a profound effect on the people who don’t have it. Before I go on, there are many, many kinds of privilege, and many of us have some but not all forms. There’s white privilege, male privilege, straight privilege, cisgendered privilege, religious (in this country, Christian) privilege and so on and so forth. Notice I’m not talking about the privilege that comes with having a lot of money (although the previously mentioned kinds of privilege have a huge effect on whether or not someone achieves financial privilege). I’m talking about unearned privileges. Privileges granted just by being born a certain way, or by adopting a certain religion.

(Electrical) Engineering is a male dominated field, and while there have been many discussions as to why this is (and how to change it), one large reason is that it is not perceived as an inviting environment to women.

As a gay male, I tend to be sensitive to sexist comments made by professors, colleagues, even my adviser. Not for the same reason a woman would be sensitive to them, although I can empathize, but because they make me feel like an outlier, like I don’t belong. I really don’t understand: why would we “hire some dancing girls” to celebrate a successful paper submission? And why would I pick a major based on the ability “to meet women”? And why is talking about how engineers can “pick up girls” such a popular topic? (Here’s a tip: maybe if you started thinking of women as human beings (editor’s note: I originally had written “human beans”, which might be the case as well) and not some kind of alien species that you had to “trick” into talking to you, you’d be more successful.)

I wish I could remember some more specific examples from the classroom.  All I can remember is numerous times feeling uncomfortable, both for myself, and for the few women around, after a professor (likely unknowingly) made a sexist comment in class.

Now, if you have read the parable, you’ll understand that I am not accusing the people making these comments of being bad people. They’re just unaware. They legitimately do not understand why the comments they are making might be offensive to some people. Because they have privilege. It’s not a bad thing, or a good thing, it’s just the state of the world that we live in. But because they have privilege, they also have the privilege of ignoring the people who raise concerns.

I have had good friends suggest that maybe I was just “an angsty gay boy” for feeling uncomfortable about the pervasive heteronormativity I experience in Engineering. I have been told by colleagues, after raising concern about a sexist remark made by a professor, that “it’s not a big deal, he didn’t mean it that way, don’t worry about it”. Well, I am worried about it. And I’m also worried when people tell me not to worry about it. As you know by now from reading the referenced posts, these responses are a nice way of saying “shut up”. Subconsciously, that is often done because maybe they see some truth in what I’m saying but don’t want to admit it because they’re uncomfortable facing the fact that they have privilege, or maybe it’s to try and preserve the privilege that they have.

Academe should be an environment that is welcoming and inclusive to ALL people, and I think most of us feel that way.  So please, the next time someone tells you that a comment made them feel uncomfortable, listen to them.  And understand that it might take a while for you to understand WHY a comment that sounds perfectly reasonable to you might make someone else feel uncomfortable.

What privileges do you enjoy that you might not be aware of? And how might they lead you to say things that may make others feel uncomfortable?

What unearned privileges do you *not* have, and have you ever been made to feel uncomfortable, or unsafe as a result?