And Now You’re an Astronaut: Open Source Treks to The Final Frontier

There have been a couple of blog posts recently referencing the switch NASA made from Windows to Debian 6, a GNU/Linux distribution, as the OS running on the laptops aboard the International Space Station. It’s worth noting that Linux is no stranger to the ISS: it has been a part of ground control operations since the beginning.

The reasons for the space-side switch are quoted as

…we needed an operating system that was stable and reliable — one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could.

This is satisfying to many Open Source/Linux fans in its own right: a collaborative open source project has once again proved itself more stable and reliable for the (relatively) extraordinary conditions of low Earth orbit than a product produced by a major software giant. Plus one for open source collaboration and peer networks!

But there’s another reason to be excited. And it’s a reason that would not necessarily apply (mostly) to, say, Apple fanatics had NASA decided to switch to OS X instead of Debian. That reason has to do with the collaborative nature of the open source movement, codified in many of the open source licenses under which the software is released. Linux, and the GNU tools, which together make up a fully functional operating system, are released under the GNU General Public License. Unlike many licenses used for commercial software, the GPL ensures that software licensed under its terms remains free for users to use, modify and redistribute. There are certainly some strong criticisms and ongoing debate regarding key aspects of the GPL, especially version 3; the point of contention mostly lies in what is popularly called the “viral” effect of the license: modified and derived work must also be released under the same license. The GPL might not be appropriate for every developer and every project, but it codifies the spirit of open source software in a way that is agreeable to many developers and users.

So what does this all mean in terms of NASA’s move? We already know that they chose GNU/Linux for its reliability and stability over alternatives, but that doesn’t mean it’s completely bug free or will always work perfectly with every piece of hardware. That, after all, is another reason for the switch: no OS will be completely bug free or always work with all hardware, but at least Debian gives NASA the flexibility to make improvements themselves. And therein lies the reason for excitement. While there is no requirement that NASA redistribute their modified versions of the software, there is no reason to assume they wouldn’t in most cases, and if they do, those versions will be redistributed under the same license. It’s realistic to expect they will be directing a lot of attention to making the Linux kernel and the GNU tools packaged with Debian even more stable and more reliable, and those improvements will make their way back into the general distributions that we all use. This means better hardware support for all GNU/Linux users in the future!

And of course it works both ways. Any bug fixes you make and redistribute may make their way back to the ISS, transforming humanity’s thirst for exploring “the final frontier” into a truly collaborative and global endeavor.

ToDo: Make ECE2524 Obsolete

Why would I want to eliminate the course that I’ve been teaching the past four semesters, into which I have put so many hours updating content, creating new assignments, and writing (and re-writing each semester… another topic altogether) a set of scripts to facilitate reviewing stacks of programming assignments, and with which I have generally had a great time?

Well, because I don’t think it should be a separate course to begin with. As many have noted, and I have agreed, ECE2524 in many respects is a kind of “catch-all” course for all those really important topics and tools (version control, anyone?) that just don’t get covered anywhere else. It is also officially intended (though not rigorously enforced in the form of a prereq) to be an introduction to more advanced software engineering courses, so it has the general feel of a programming course.

I think programming (and *nix OS usage and philosophy) is too important to relegate to a 2 credit course and treat separately from the rest of the engineering curriculum, an idea that was solidified after reading an excerpt from Mindstorms by Seymour Papert.

I began to see how children who had learned to program computers could use very concrete computer models to think about thinking and to learn about learning and in doing so, enhance their powers as psychologists and as epistemologists.

Papert is a strong advocate of introducing computer programming to children at an early age and using it as a tool to learn other disciplines:

The metaphor of computer as mathematics-speaking entity puts the learner in a qualitatively new kind of relationship to an important domain of knowledge. Even the best of educational television is limited to offering quantitative improvements in the kinds of learning that existed without it… By contrast, when a child learns to program, the process of learning is transformed. It becomes more active and self-directed. In particular, the knowledge is acquired for a recognizable personal purpose.

It goes without saying that a solid understanding of math is crucial for any of the STEM fields, but computers and programming can also encourage engagement with other fields as well, though that is not the focus of this post.

Along with being a useful skill to have, programming teaches a systematic way of thinking about a problem, and crucially shifts the model of learning from one that embodies a “got it” and “got it wrong” binary state to one that encourages the question “how do I fix it?”. As Papert notes, and I can personally attest, when writing a program you never get it right the first time. Becoming a good programmer means becoming an expert at tracking down and fixing bugs.

If this way of looking at intellectual products were generalized to how the larger culture thinks about knowledge and its acquisition, we all might be less intimidated by our fears of “being wrong.”

Some strong arguments for the symbiosis of programming and learning valuable thinking skills at an early age. But the benefits don’t disappear at the college level, especially in a field such as engineering in which learning programming for its own sake is a valuable skill (there are several required classes on the subject, so you know it must be important. Slight sarcasm, but it’s true, regardless of how cynical we agree to be about the way classes are structured and the curriculum is built for us). If programming can help us engage with learning mathematics, and as a side effect get us thinking about how we think, and shift our view of learning to a more constructive one, then can’t we get at least the same positive effects if we apply it to more advanced concepts and ideas? It doesn’t hurt that a good chunk of engineering is mathematics anyway.

The wheels really started turning after the first day of guest-lecturing for Signals & Systems. Here’s a course that is a lot of math, but critically foundational for learning how to learn about how the world works. That may seem a little embellished, especially to those not familiar with the field. (Signals & Systems crash course: a system is anything that takes an input signal and produces an output signal, e.g. a car (input is gas/brake, output is speed), a heart beat (input is the electrical signal transmitted along nerves, output is muscle contraction or blood flow), or the planet (so many systems, but treating atmospheric concentrations of CO2 and other gases as the input and the average global temperature as the output would be one example of a system we would be interested in studying).) Signals & Systems provides a set of tools for exploring the input/output relationships of… anything.

So why is it taught from a set of slides?

What better way to really engage with and understand the theory than to USE it? Now, most educational budgets wouldn’t be able to cover the costs if everyone wanted to learn the input/output behavior of their own personal communications satellite, but the beauty of Signals & Systems, and the mathematical representations it embodies, is that everything can be simulated on a computer: from the velocity of a car, to the blood flow caused by a beating heart, to the motion of the planets and beyond.

I envision a Signals & Systems course that is mostly programming. People will argue that the programming aspect of the material is just the “practical implementation”, and that while it’s important, the theory is critical. Yes, the theory is what helps us develop a generalized insight into different ways of representing different types of systems, and it is what allows us to do a good deal of design in a simulated environment with greatly reduced risk, especially when, say, designing new flight controls for a commercial jet.

But I think the theory can be taught alongside the programming for a much richer experience than is obtained by following a set of slides. You want to understand how the Laplace transform works? What better way than to implement it on a computer. I guarantee you, if you have to write a program that calculates the Laplace transform for an arbitrary input signal, by the time you’re done with the debugging you’re going to have a pretty good understanding of what’s going on, not to mention a slew of other important experiences (how do you solve an integral on a computer anyway?).
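
For instance, here is a minimal sketch (mine, not an actual assignment spec) of approximating the one-sided Laplace transform of a sampled signal in Python with NumPy. The function name and the trapezoidal-rule shortcut are choices I am making purely for illustration, and truncating the integral at a finite time is exactly the kind of detail you would have to wrestle with:

import numpy as np

def laplace_transform(x, t, s):
    """Approximate X(s) = integral from 0 to T of x(t) * exp(-s*t) dt
    for a signal sampled at the time points in t (the ideal upper
    limit is infinity, so T has to be 'large enough')."""
    integrand = x * np.exp(-s * t)
    # The trapezoidal rule stands in for the continuous-time integral.
    return np.trapz(integrand, t)

# Sanity check against a signal whose transform is known analytically:
# x(t) = e^{-2t}  ==>  X(s) = 1 / (s + 2)
t = np.linspace(0, 20, 20001)
x = np.exp(-2 * t)
print(laplace_transform(x, t, s=1.0))  # ~0.3333, i.e. 1 / (1 + 2)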

Talking about the differences between continuous-time systems and discrete-time systems is taken to a whole new level when you start trying to simulate a continuous-time system on a computer, which is very much a discrete-time system. How do you even do that? Is it sufficient to just use a really, really small time step?
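
As one hedged illustration of why that question has teeth, here is a forward-Euler sketch of the first-order system dy/dt = -a*y(t) + u(t). The names and the choice of Euler integration are mine; it is the simplest possible discretization, not the only one, and it quietly falls apart if the step is not small relative to the system’s time constant:

def simulate_first_order(a, u, dt, t_end):
    """Forward-Euler simulation of dy/dt = -a*y(t) + u(t) with y(0) = 0.
    For this method dt must be well under 2/a, or the 'continuous-time'
    simulation blows up instead of settling."""
    y, t, history = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        y += dt * (-a * y + u(t))  # one discrete step approximating the derivative
        t += dt
        history.append((t, y))
    return history

# Step response of a system with time constant 1/a = 1 second: the smaller
# dt is, the closer the final value gets to the analytic 1 - e^{-5} ~ 0.993
print(simulate_first_order(a=1.0, u=lambda t: 1.0, dt=0.01, t_end=5.0)[-1])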

So yes, I think the best case scenario would be one in which ECE2524: Intro to Unix for Engineers is obsolete1. Not because the topics we cover are unimportant; quite the contrary, they are so important that they should be providing a framework for learning engineering.

Footnotes:

1 I’ve focused primarily on the programming aspect of ECE2524 here, but those of you who know me and have taken the course with me know that the Unix philosophy is a big part of it as well. Integrating the programming aspects into other coursework would of course not address that. I’m sure, with a little thought, we could all come up with a clever way of introducing the *nix philosophy, and more generally the whole concept of thinking about a philosophy when thinking about engineering, and what that even means, in every other course. Because, well, it should be an integral part of everything else we learn.

How will we build a Third System of education?

I have recently been reading about, as Mike Gancarz puts it in Linux and the Unix Philosophy, “The Three Systems of Man”. This is, to my understanding, a fairly well documented and often-observed concept in software design, possibly first referenced by Frederick Brooks in The Mythical Man-Month when he coined “the second system effect“. Gancarz seems to take the concept further, generalizing it to any system built by humans.

Man has the capacity to build only three systems. No matter how hard he may try, no matter how many hours, months, or years for which he may struggle, he eventually realizes that he is incapable of anything more. He simply cannot build a fourth. To believe otherwise is self-delusion.

The First System

Fueled by need, constricted by deadlines, a first system is born out of a creative spark. It’s quick, often dirty, but gets the job done well. Importantly it inspires others with the possibilities it opens up. The “what if”s elicited by a First System lead to…

The Second System

Encouraged and inspired by the success of the First System, more people want to get on board, offer their own contributions, and add features they deem necessary. Committees are formed to organize and delegate. Everyone offers their expertise, and everyone believes they have expertise, even when they don’t. The Second System has a marketing team devoted to selling its many features to eagerly awaiting customers, and to appeal to the widest possible customer base nearly any feature that is thought up is added. In reality, most users end up using only a small fraction of the available features of The Second System; the rest just get in the way. Despite enjoying commercial success, The Second System is usually the worst of the three. By trying to appease everyone (and, more often than not, by not truly understanding anyone), the committees in charge have created a mediocre experience. The unnecessary features add so much complexity that bugs are many and fixes take a considerable amount of effort. After some time, some users (and developers) start to recognize The Second System for what it is: bloatware.

The Third System

The Third System is built by people who have been burned by the Second System

Eventually enough people grow frustrated by the inefficiencies and bloat of The Second System that they rebel against it. They set out to create a new system that contains the essential features and lessons learned from the First and Second Systems, but leaves out the crud that accumulated in the Second System. The construction of a Third System comes about either as a result of observed need or as an act of rebellion against the Second System. Third Systems challenge the status quo set by Second Systems, and as such there is a natural tendency for those invested in The Second System to criticize, distrust and fear The Third System and those who advocate for it.

The Interesting History of Unix

The progression from First to Second to Third system always happens in that order, but sometimes a Third System can reset back to First, as is the case with Unix. While Gancarz argues that current commercial Unix is a Second System, the original Unix created by a handful of people at Bell Labs was a Third System. It grew out of the Multics project, which was the Second System spun from the excitement around the Compatible Time-Sharing System (CTSS), arguably the first timesharing system ever deployed. Multics suffered so much from second-system syndrome that it collapsed under its own weight.

Linux is both a Third and a Second System: while it shares many properties of commercial Unix that are Second System-like, it is under active development by people who came aboard as rebels against Unix and who put every effort into eliminating the Second System cruft associated with its commercial cousin.

Is our current Educational Complex a Second System?

I see many signs of the second-system effect in our current educational system: designed and controlled by committee, constructed to meet the needs of a large audience while failing to meet the individual needs of many (most?). Solutions to visible problems are also determined by committee, and patches to those solutions serve to cover up symptoms. Addressing the underlying causes would require asking some very difficult questions about the nature of the system itself, something that those invested in it are not rushing to do.

Building a Third System

What would a Linux-esque approach to education look like? What are the bits that we would like to keep? What are the ugliest pieces that should be discarded first? And how will we weave it all together into a functional, useful system?

Digital amplifier: the tweet heard ’round the world

Sometimes, in the fast-paced modern world we live in, it feels like we’re living in the future. But all it takes is the watchful eye of the Internet, and specifically its uncanny, sometimes disruptive tendency to amplify lurking social ills, to remind us we are still very much in the past.

Last week, PyCon nearly ended quietly, without causing much of a ruckus, as all good annual gatherings of open source software developers strive to do. The organizers of PyCon understand the importance of diversity in the technology field, a field currently dominated by white men, and have worked hard to create an environment that is open and welcoming to everyone; in case there’s any confusion, they have a published code of conduct.

So when Adria Richards grew frustrated with two men making lewd jokes behind her at a closing talk she snapped their picture and tweeted

Moments later, PyCon staff saw her tweet, responded, and escorted the two men into the hallway. The situation was resolved with minimal disruption. It would have all been finished, and we wouldn’t still be talking about it now, if it hadn’t been for the first inappropriate response to the, up until this point, fairly minor ordeal.

The company for which the two men were working, and which they were representing at PyCon, made the decision to fire one of them. The company cited multiple contributing factors, not just the joke, but the timing was extremely poor on their part if they really didn’t want to connect the termination to the joke incident.

And then the Internet exploded.

Adria Richards got a man fired. A man who had three children to feed. The Internet was not pleased. And to show its displeasure it sent Adria death threats, rape threats, racial epithets and suggested that she consider suicide. A group of hackers, some claiming to be Anonymous, initiated a series of DDOS attacks on her employer’s servers demanding that they fire her for retribution.

And because SendGrid, the company employing Adria, had no spine, they gave in to the mob and publicly fired her. It was the easy thing to do, after all.

Justice served?

Bloggers the tech world over chimed in with their support or critique, many asking whether she should have posted the photo of the two men and how she should have handled the incident differently, in a more lady-like fashion. Many jumped on a post by Amanda Blum that showed Richards had “acted out” like this on more than one occasion. Though Blum mentioned that she does not like Adria personally, and criticized her actions at PyCon, she did bring up the point that

Within 24 hours, Adria was being attacked with the vile words people use only when attacking women.

And this is the real issue, I think. The bashful excuses from members of the tech community (both men and women) that “this is just how tech conferences are” and “she should have a thicker skin”, and the voices that suggest she shouldn’t have responded because the lewd comments were likely not directed at her, seem to miss the point completely.

But at least we’re talking about it. Soon after the event the organizers of PyCon put the Code of Conduct up on GitHub, a popular open source hosting service, and invited members of the community to collaborate on changes in light of recent events. The community responded by adding language to the policy that prohibits public shaming. This is not unreasonable, and probably desirable and consistent with an “innocent until proven guilty” mentality. But unless a clear, easy way to report incidents privately, as quickly and efficiently as Twitter allows, is also provided, this could be seen as a measure to silence others who may feel the need to speak out about poor conduct but, for whatever reason (and there are many), do not feel comfortable addressing the individuals directly.

The issue is not limited to sex or race; it is a larger one. Folks who are empowered by the status quo, whether they’re conscious of their privilege or not, do not like the status quo challenged. Christie Koehler blogged about the incident from that perspective:

It’s not easy because the tactics available to those who oppose institutional oppression are limited and judged by the very institution that is oppressive.

Those who benefit from the status quo, whether they realize it or not, have a vested interest in maintaining that status quo. That means working to ensure that any threat to it is rendered ineffectual. The best way to do that is to discredit the person who generated the threat. If the threat is the reporting of a transgressive act that the dominant social class enjoys with impunity, then the reaction is to attack the person who reported it.

And when it comes down to it, the vast majority of the negative backlash against Richards and her company (and none that I’ve heard of towards PlayHaven, the company that actually fired the male developer and started the whole fiasco) comes down to defending the status quo with a passion. People will fight for their place of privilege. They will fight hard and they will fight dirty.

And the very sordid nature of their fight will continue to prove unequivocally why we need to keep challenging the status quo until we create a world that is welcoming to all.

More reading:
Why Asking What Adria Richards Could Have Done Differently Is the Wrong Question
Adria Richards did Everything Exactly Right

About Time: Idioms About Time

TL;DR:

In the comments below please post, in your native language, or a non-English language in which you are fluent:

  1. how you would ask someone what time it is, and the literal word-for-word translation into English
  2. how you would ask someone where you are and the literal word-for-word translation into English

I wonder if I should stop being surprised when topics I’ve discussed separately with separate people all start to relate. On Monday I talked about idioms in ECE2524 and made some comparisons between idioms in programming languages to idioms in spoken languages. As I thought about examples of idioms I noticed there were quite a lot about time:

  • on time
  • about time
  • in time
  • next time

just to name a few (I’ve somewhat intentionally left out more complex examples like “a watched pot never boils”, “better late than never”, etc.). Today in vtclis13 we discussed McCloud’s “Time Frames”, a comic that explores the various ways time and motion are represented in comics. Inevitably we talked about the different ways of talking about and perceiving time, from the relativistic physical properties of the dimension to our own personal perception of its passage, and how in both cases the rate of time can change based on the environment. Time is such a funny thing. We often talk about it as if we know what we’re talking about, and we take various metrics for granted: in the U.S., what is it about taking 16 trips around the sun that makes someone ready to drive a car? 2 more orbits and we’re deemed ready to vote, and after a total of 21 orbits, after we have ridden along as the Earth has traveled through about 19,740,000,000 kilometers relative to the sun, we are legally able to purchase alcohol.

But if Einstein’s forays into relativity have taught us anything, it is that nothing about time is as absolute as our intuition suggests. And so I became curious about the idioms we use to talk about time and how they differ from culture to culture, language to language. Dr. C put my thought into a question: “Are idioms about time especially diverse?” Through this little survey, I would like to explore that question by gathering some time idioms in the comments section; please refer back to the first paragraph for specific instructions!

Humans in the loop

Today’s hot article in the local twitterverse is a New York Times piece called Algorithms Get a Human Hand in Steering Web. I discovered it from a tweet by @GardnerCampell and a beautiful retweet by @mzphyz:

Above all: Algorithms are human constructs, embodiments of our thought & will.

Which really sums up this entire post, so for the TL;DR crowd, you can stop reading right now!

The article mentions a number of examples of human-in-the-loop algorithms currently being employed on the internet, notably in Twitter’s search results and Google’s direct information blurbs (not sure what they call them, those little in-line sub-pages that show up for certain search terms, like a(ny) specific U.S. president, for example).

What I found interesting was that the tone of the article seemed to suggest that the tasks humans were doing as part of the human-algorithm hybrid system were somehow fundamentally unique to our own abilities, something that computers just could not do. I’m not sure if this was the intended tone, but either way, I found myself disagreeing.

Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them.

True, but I would argue that our own brains are “literal-minded” as well; there are just layers and layers of algorithms running on our network of neurons that give the impression of something else (this ties in nicely to a post by castlebravo discussing what, fundamentally, computing is). I think the underlying reasons we have humans in the loop are closely linked to the next sentence:

Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.

Not only is spoken language ambiguous, but we lack a solid understanding of reasoning, or how our brains work. And we, after all, are the ones programming the algorithms.

In the case of the twitter search example, it struck me that all the human operator was doing was something like this:

if search_term == 'Big Bird' and near(current_time, election_season):
    context = 'politics'
else:
    context = 'Sesame Street'

which looks rather algorithmic, when written out as one. Granted, this would be after applying our uniquely qualified abilities to interpret search spikes, right?

if significantly_greater_than(instantaneous_average_occurrence_of('Big Bird'),
                              all_time_average('Big Bird')):
    context = find_local_context('Big Bird')
else:
    context = 'Sesame Street'

Of course find_local_context is a bit of a black box right now, and significantly_greater_than may seem a bit fuzzy, but in both cases you could imagine defining a detailed algorithm for each of those tasks… if you have a good understanding of the thought process a human would go through to solve the problem.
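
Just to show what pinning one of them down might look like, here is one concrete (and entirely made-up by me) reading of significantly_greater_than: keep the history as a list of sampled rates rather than a single all-time average, and ask whether the current rate sits several standard deviations above the historical mean.

from statistics import mean, stdev

def significantly_greater_than(current_rate, historical_rates, num_sigmas=3.0):
    """Is the current rate more than num_sigmas standard deviations above
    its historical average? One possible definition among many."""
    mu = mean(historical_rates)
    sigma = stdev(historical_rates)
    return current_rate > mu + num_sigmas * sigma

# Hypothetical hourly mention counts for 'Big Bird'
history = [120, 95, 130, 110, 105, 90, 125]
print(significantly_greater_than(4800, history))  # True: a debate-night spike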

Ultimately, humans are only “good” at deducing context and nuance because of our years of accrued experience. We build a huge database of linked information and store it in the neural fabric of our minds. There isn’t really anything limiting us from giving current digital computers a similar ability, at least at a fundamental level. Theoretically, as our hardware approaches the capabilities of an “ideal computer” (one that can simulate all other machines) and our understanding of human psychology and neurology advances, we could simulate a process very similar to the one that goes on in our brains when deducing context and nuance.

The current trend of adding humans into the loop to increase the user friendliness of online algorithms has more to do with our lack of understanding of human thought than with any technical limitations posed by computers.

Are we sacrificing creativity for content?

I decided to become an engineer, before even knowing what “engineering” was, because of a comment my 4th grade art teacher made regarding an art project. I’m pretty sure she meant it as a compliment.

The concept of “<insert form of creative expression here> is <insert sensory-related word here> math” is nothing new. From the mathematics of music, to the use of perspective in visual art, there is no escaping the mathematical nature of the universe. All art, no matter the medium, can be thought of as offering a different view of our underlying reality. A different way of looking at the equations, a way at looking at math without even realizing it’s math.

Then why in the engineering curriculum is the emphasis all on the math? Sure, it’s important. Knowing the math can mean the difference between a bridge that collapses1 and one that is a functional art exhibit. Or the difference between a Mars Climate Orbiter that doesn’t orbit and a Mars rover that far exceeds its planned longevity. But it’s still just one view.

If you have ever tried applying the same layering techniques with watercolors that are commonly used with oil paints, or tried to write a formal cover letter in iambic tetrameter, you have first-hand experience that the choice of medium has a large impact on the styles and expressive techniques available to the artist. Likewise, the choice of programming language has a similar effect on the capabilities and limitations of the programmer.

see the code

And on the flip side, anyone who can write a formal cover letter, or who is intrigued by writing one in iambic tetrameter, should learn a programming language or two. It’s yet another form of artistic expression, one that can transform the metamedia of the computer into a rich, expressive statement, or produce an epic failure of both form and function.

Footnotes:

1 Though there is a beauty to the mathematics of this particular failure.

iBreakit, iFixit

This past weekend ended up being the weekend of repairs, as two lingering problems escalated to the point where they could no longer be ignored:

  1. The drain pipe of my bathroom sink completely detached itself from the sink basin.
  2. The aluminum frame on the display of my laptop began peeling away from the LCD panel to such an extent that I was concerned continued use could result in cracking the front glass.
this can’t be good

I will spare you, dear reader, the gory details of the fix to the first problem (it involved a trip to the hardware store, some plumber’s putty and an old tooth brush) and instead focus on the latter.

As is usually the case with these things, my trusty laptop had long since left its comfortable status of “covered under warranty” when this issue began, and while some googling revealed that I am not the only one to experience this phenomenon, it seemed I wasn’t going to get much loving care from Apple. I was fairly certain they would have made some silly claim that they couldn’t do anything for the clearly mechanical problem because I was running Linux on my machine instead of OS X. (Full disclosure: they probably would have been justified saying so in this case. One hypothesis for the cause of this problem is excessive heating of the upper left corner, which breaks down the glue holding the aluminum backing to the LCD panel. While a number of non-blasphemous OS X users clearly had the same problem, my case certainly isn’t helped by the fact that two of the things that don’t always work out-of-the-box on a new Arch Linux install on a MBP are the fan control software and “sleep on lid close”. As a result, there have been a number of times my laptop has overheated after I pulled it out of my bag to discover it had never gone to sleep when I put it in. Plus, I’ve definitely dropped the thing a number of times, as the dents and scratches indicate. Woops.) That all being said (warning: tangent alert), I was told of another experience in which an Apple tech rep thought perhaps there was a virus after seeing a syslinux boot screen pop up1.

It would have cost $60 to have the nice folks at the campus bookstore take a look at it, not including any repair costs. The Apple-sanctioned “fix” for this is a full replacement of the display assembly (which seems silly since there’s really nothing wrong with the display itself), costing around $400-$600 depending on who you talk to (or apparently $1000 if you’re dealing with Australian dollars). Long story short2, I decided I didn’t have much to lose3, and some substantial costs to be saved, if I attempted a DIY fix.

Now, let me be very clear: the fact that I happen to have a degree that says “Computer Systems Engineering” in the title has little to no bearing on the skill set and knowledge base required for this repair. Honestly (and those of you who are currently pursuing a CpE degree, please reassure the non-engineers that this is the truth). I say this because it is important that everyone know they are fully capable of making many of their own repairs to their various pieces of technology4. The topic of technological elitism came up last year in a GEDI course; there is concern that as we integrate more and more technology into our lives we are becoming more and more dependent on those who understand how the technology works. My counter-argument to that concern is that while there is certainly more to learn, and more skill involved, in the service and repair of a computer than of, say, a pen and paper, there are many excellent resources freely available to anyone who takes the initiative to learn about them. One great resource that I used for this particular repair is ifixit.com, a wiki-based repair manual containing step-by-step guides for everything from replacing the door handle on a toaster oven to various repairs for your smartphone. Since I knew that if I had any chance of pulling this off I would need to lay the display flat, the guide I found most relevant to the endeavor at hand was Installing MacBook Pro 15″ Unibody Late 2008 and Early 2009 LCD.

Supplies needed5:

required items

  1. The computer to be repaired
  2. mini screwdriver set
  3. Donut, preferably coconut
  4. Coffee
  5. working computer that can access ifixit.com
  6. 5 minute epoxy
  7. T6 Torx screwdriver
  8. A reasonably heavy, flat object
  9. Stress relief

Step-by-step image gallery

  1. Follow the steps in the ifixit guide to remove the display assembly from the body of the laptop.
  2. Reset donut
  3. attempt to apply epoxy in gap between aluminum backing and display, apply pressure, wait for a couple hours
  4. reassemble laptop, power on and use
  5. determine that epoxy is not holding, either due to age or to bad application caused by limited access to the surface
  6. powerdown and re-disassemble laptop
  7. Using a heat gun to loosen the remaining adhesive around the display casing, gently pry off the aluminum backing completely
  8. This is a perfect opportunity to “pimp your mac” and add some sort of creative graphic behind the apple logo. All I could find was some engineering paper, which turned out somewhat ho-hum.
  9. attempt to remove old adhesive with acetone and/or mechanical force. give up.
  10. Working quickly, (it is 5 minute epoxy, after all) mix up a fresh batch of epoxy, apply intelligently around edge of display casing, choosing places that look least likely to cause problems if it runs over (e.g. avoid iSight camera housing)
  11. Carefully position aluminum backing back on display casing, press firmly and wipe away excess epoxy.
  12. Apply gentle pressure for 5-10 minutes, let cure for another hour or so before reassembly.

    analog media is still relevant

  13. Re-assemble.
  14. success!

Footnotes:

1 It does make you wonder which dictionary Apple’s marketing department was using when they came up with the “Genius” title. A more accurate title, with 100% more alliteration, would have been “Apple Automaton”, since they do an excellent job when a problem is solvable by means of a pre-supplied checklist. Don’t get me wrong, I think Apple’s tech support is generally pretty good, as are their employees. And they are completely within their rights to refuse to offer any service or advice to customers who have opted out of the software/hardware-as-one package they provide. But it doesn’t (shouldn’t) take a genius to determine that a different bootloader from Apple’s default is not a virus.

2 too late

3 aside from possibly rendering my display useless

4 if you have ever replaced a tire on your car, but freak out at the idea of fixing your own computer, briefly consider the consequences of a botched repair job on both. Statistically you are much more likely to die in a horrible, fiery crash as the result of a bad tire replacement than a botched attempt at re-gluing your laptop screen together. Just something to think about.

5 WordPress fail: I could not figure out how to tell WordPress to use letters to “number” this ordered list without changing the style sheet for my theme. It could be user error, but I prefer to blame WordPress.

Creative writing, technically

A number of recent conversations, combined with topics of interest in both ECE2524 and VTCLI, followed by a chance encounter with an unfamiliar (to me) blogger’s post, have all led me to believe I should write a bit about interface design and the various tools available to aid in writing workflow. No matter our field, I’m willing to bet we all do some writing. Our writing workflow has undergone some changes since transitioning to the digital era; most notable for my interests is this quote from the aforementioned blog post:

…prior to the computerized era, writers produced a series of complete drafts on the way to publication, complete with erasures, annotations, and so on. These are archival gold, since they illuminate the creative process in a way that often reveals the hidden stories behind the books we care about.

The author then introduces a set of scripts a colleague wrote in response to a question about how to integrate version control into his writing process. The scripts are essentially a wrapper around git, a popular version control system used by software developers and originally designed to meet the needs of a massively distributed collaborative project, namely the Linux kernel.
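
I have not picked through the scripts themselves, but the core idea is easy to sketch: a thin layer that quietly commits a snapshot of your drafts after each writing session. The function below is my own hypothetical illustration of that idea (not the colleague’s actual code), shelling out to git from Python:

import datetime
import pathlib
import subprocess

def snapshot(draft_dir, note=""):
    """Commit the current state of every file in draft_dir (assumed to
    already be a git repository) so each writing session leaves a
    recoverable draft behind."""
    repo = pathlib.Path(draft_dir).expanduser()
    subprocess.run(["git", "add", "--all"], cwd=repo, check=True)
    message = note or f"writing snapshot {datetime.datetime.now():%Y-%m-%d %H:%M}"
    # --allow-empty records the session even if no words actually changed
    subprocess.run(["git", "commit", "--allow-empty", "-m", message],
                   cwd=repo, check=True)

snapshot("~/novel/chapter-03", note="rainy afternoon, rewrote the ending")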

What’s really great about this (aside from the clear awesomeness of a sci-fi author collaborating with a techie blogger/podcaster to create a tool that is useful and usable by writers out of tools that are useful and usable by software developers) is that it brings into clear focus some thoughts I wanted to get out last semester about the benefits of writing in a plain text format.

This gets back to one of the recent conversations that also ties into all of this: I was talking to a friend of mine, another grad student in a STEM field, and we were discussing the unfortunate prevalence of MS Word for scientific papers. I don’t want to get into a long discussion of the demerits of MS Word in general, but suffice it to say, if you are interested in producing a professional quality paper, and enjoy the experience of shooting yourself in both feet followed by running a marathon, then by all means, use MS Word. There are also a number of excuses of questionable validity that people use to defend their MS Word usage in scientific writing. The ones that are brought up most often involve the need to collaborate with other authors who are also using MS Word.

Now run that marathon backwards while juggling flaming torches.

I should point out that I don’t want to just pick on MS Word here; the same goes for Apple’s Pages or any large software package that tries to be the solution to all your writing needs. I will henceforth refer to this problematic piece of software generically as a “Word Processor”, capitalized to reinforce the idea that I am indeed referring to a number of specific, widely used tools.

The conversation led to user interfaces, and the alleged intuitiveness of a modern Word Processor compared to a simple yet powerful text editor such as emacs or vim. Out of that, my friend discovered a post on a neuroscience blog about user friendly user interfaces that did a nice job putting into writing thoughts I had been trying to verbalize during our discussion: namely, that the supposed intuitiveness of a Word Processor to “new” users is largely a factor of familiarity rather than any innate intuitiveness of the interface. Once you learn what the symbols mean and where the numerous menu items are that you need to access, then it all seems just dandy. Until they go and change the interface on you.

I could, and probably should, write an entire post on ALL the benefits of adopting a plain-text workflow and of using one text editor that you know well for all your writing needs, from scientific papers to blogs, presentations and emails (how many people ever stop to think about why it is acceptable and normal to have to learn a new user interface for each different writing task, even though fundamentally the actual work is all the same?). The key benefit I want to highlight here is the one that made the collaborative effort I mentioned towards the top possible. By writing in a plain text format, you immediately have the ability to use the enormous wealth of tools that have been developed throughout the history of computing to work with plain text. If our earlier mentioned hero had been doing his writing in a Word Processor, it would have been nearly impossible for his friend to piece together a tool that lets him regain something that was lost with the transition away from a paper workflow, a tool that can “illuminate the creative process in a way that often reveals the hidden stories”, and in many ways goes beyond what was possible or convenient with the paper workflow.
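
To make that concrete with the smallest possible example: because the drafts are plain text, even Python’s standard library can show you exactly what changed between two versions, with no special format support required (the file names here are hypothetical):

import difflib

with open("draft_v1.txt") as old, open("draft_v2.txt") as new:
    diff = difflib.unified_diff(old.readlines(), new.readlines(),
                                fromfile="draft_v1.txt", tofile="draft_v2.txt")

print("".join(diff))  # the same kind of output git, diff, and friends produce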

What tools do you use to track your writing process? Do they allow you to go back to an earlier revision, or to easily discover what recent blogs you had read, what your mood was, and what the weather was like when you wrote a particular passage? Do you use a tool with an interface that is a constant distraction, or one that is hardly noticeable and lets you focus on what actually matters: the words on the page? If not, then why?

I am a Selfish Git: A bit on my teaching philosophy

A common observation I hear from people who have taken my class is that there is less structure in the assignments than they are used to, and oftentimes less than they would like. A consequence of this is that participants do a lot of searching the web for tidbits on syntax and programming idioms for the language du jour, a process that can take time given the wealth of information returned by a simple google search. I could track down some research that shows the benefit of this “look it up yourself” approach, and it would all be valid, and it is one of the reasons I structure assignments the way I do. But there is another reason. A more selfish reason.

Throughout the term I hand out a series of assignments. Details are tweaked each semester but the general outline is something like:

  • read in lines of numbers, one per line, do something with them, and write out a result number (a minimal sketch of this idiom appears just after this list)
  • read in lines of structured data, do something with them, write out lines of structured data
  • spawn a child process, or two, and connect them with a pipe (this year I will probably integrate the “read in lines” idiom into this assignment since I like it so much)
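
To make the first idiom concrete, here is roughly the shape I have in mind; the actual specs vary from semester to semester, and this sketch is just one of many acceptable solutions:

import sys

def main():
    """Read one number per line from stdin and write their sum to stdout."""
    numbers = [float(line) for line in sys.stdin if line.strip()]
    print(sum(numbers))

if __name__ == "__main__":
    main()

# Usage: printf '1\n2\n3\n' | python sum_lines.py   ->   6.0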

I’ve done each of these myself, of course, and tweaked my own solutions from year to year, and have found a structure for each that I think works well, is easy to read, and is as simple as possible. Oftentimes my solutions use fewer lines of code than some of the solutions I receive, which admittedly makes my estimates of how long a particular assignment will take inaccurate. I know some of the assignments end up taking a lot longer than I anticipate for some, and this can be extremely frustrating, especially since I know everyone’s time is a precious commodity that must be partitioned across other classes and personal time too (you are making time for play, aren’t you?).

I could provide more details in the write-ups. I could say “I tried algorithm X a number of ways: A, B and C, and settled on B because of P, Q and R”. It would save those completing the assignments time, and it would save me time, because on average the results I’d get back for grading would take up fewer lines of code and be more familiar to me. And that is why I don’t.

If I wrote in the assignment “for part A, use method B in conjunction with idiom X and you can complete this part in 3 lines”, then I can guarantee you that around 99% of the 60 assignments I received back would use method B in conjunction with idiom X in only 3 lines of code. It would be much easier to evaluate: I’d be familiar with the most common errors made when using method B in conjunction with idiom X, and spotting them quickly would become a reflexive response.

But I wouldn’t learn a thing.

Let me tell you a secret. Sure, I enjoy seeing others learn and explore new ideas and get excited when they discover they can write something in 10 lines of Python that took them 30 in C. I really do. But that’s not the only reason I teach. I teach because I learn a tremendous amount from the process myself. In fact, all that tweaking I said I’ve done to my solutions? That was done in response to reviewing a diverse (sometimes very diverse) set of solutions to the same problem. Oftentimes I’ll get solutions written in a way I would never have thought to use myself, and my first reaction is something like “why does this even work?” Then I’ll look a little closer (often doing a fair amount of googling myself to find other similar examples) until I understand the implementation and form some opinion about it. There are plenty of times that I’ll get a solution handed to me that I think is cleaner, more elegant and simpler than my own, and so I’ll incorporate what I learned into my future solutions (and, let’s not forget, back into my own work as well, a topic for another post). And I’ll learn something new. And that makes me happy.

I really like learning new things (thank goodness for that, given how long I’ve been in school!), and I have learned so much over the past couple years that I’ve been teaching. Possibly more than what I’ve learned in all the classes I’ve taken during my graduate career (different things for sure, which makes it difficult to compare amount, but still, you get the idea).

To be sure, there is a balance, and part of my own learning process has been finding that sweet spot between unstructured free-style assignments (“Write a program that does X with input B. Ready, go!”) and an enumerated list of steps that will gently guide a traveler from an empty text file to a working program that meets all the specs. I think I’ve been zeroing in on the balance, and the feedback I get from blogs as well as the assignments themselves is really helpful.

So keep writing, and keep a healthy dose of skepticism regarding my philosophy. And ask questions!