About lilengineerthatcould

An Nth-year Ph.D. student in Electrical Engineering, officially studying Control Systems but in reality doing more software simulation than anything else, who thinks a lot about the education system and how to make it better.

Response to “Advanced Python exercises”

Recently Matt brought up some good points in his post Advanced Python exercises.  First, the more easily addressed one:

To break a Python program up across multiple files, store related functions in a separate `.py` file, then use `import` in the main source.

For example, if you have a file named `hello.py` that contains

def greeting(string):
    return "Hello, {}".format(string)

Then in your main source file (located in the same directory), you can `import hello` and all of its functions will be available under the `hello` namespace:

import hello

print hello.greeting("world")

Easy as .py! (sorry)

Matt also talked about Python’s use of `try` and `except` as a means of flow control. As an example:

try:
    mynumber = float(line)
except ValueError as e:
    sys.stderr.write("We have a problem: {}".format(e))
else:
    print "We have the number {}".format(mynumber)

He said, “if complex return types are needed such that you’re throwing exceptions to communicate logic information rather than true fatal errors, your function needs to be redesigned,” and I agree. But I disagree that Python itself relies on this technique, or encourages it, though of course individual developers may misuse it.

Like the programs they are a part of, functions should be written to do one thing and do it well. In the preceding example the function `float` converts its argument to a floating-point value. The name of the function makes it very clear what it does, and it does its job well. If it can’t do its job, it raises a `ValueError` exception. All the other built-in and library functions I’ve seen work the same way. Think about the alternative without exceptions, for example in C:

double n = atof(line);
printf("We have the number %f\n", n);
/* But if n == 0.0, it could be because there was no valid numeric expression in line. */

According to the documentation for atof:

On success, the function returns the converted floating point number as a double value.
If no valid conversion could be performed, the function returns zero (0.0).
There is no standard specification on what happens when the
converted value would be out of the range of representable values by a double. 
See strtod for a more robust cross-platform alternative when this is a possibility.

This is less than ideal. If we wanted to check whether an input even contained a valid numeric string (which we usually would), we’d have to work harder. The strtod alternative provides a means to do that, but we’d still have to do an explicit check after calling the function. Other C-style functions use a return code to indicate success or failure. It is also extremely easy to forget to check return codes, in which case the problem may only manifest itself in a crash later on, or worse, not crash at all and just quietly produce bad output. These types of problems are very hard to track down and debug. With exceptions, the program crashes precisely where the problem occurred, unless the programmer handles that particular exception.
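To make the contrast concrete, here is a minimal sketch of the two styles side by side. The function names `parse_price_status` and `parse_price` are my own, hypothetical examples, not from Matt’s post:

```python
def parse_price_status(text):
    # Return-code style: a success flag plus a value the caller must check.
    try:
        return True, float(text)
    except ValueError:
        return False, 0.0

def parse_price(text):
    # Exception style: do the one job; raise ValueError on bad input.
    return float(text)

ok, value = parse_price_status("not a number")
# If we forget to check `ok`, the bogus 0.0 silently flows into later math.

try:
    print(parse_price("not a number"))
except ValueError as e:
    print("We have a problem: {}".format(e))
```

The second style cannot fail silently: either the caller handles the `ValueError`, or the program stops exactly where the bad input appeared.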

So to summarize: each of your functions should have a well-defined job. They should do only one job and do it well. If they can’t do their job because of improper input, they should raise an appropriate exception. I think following that idiom results in cleaner, easier-to-debug code. You can certainly still return complex data types where appropriate, but trying to incorporate success/failure information in a return type will often lead to difficult-to-debug errors when the programmer forgets to check the return status!

But What Does It All Mean?

We are at that point in the semester where we are all feeling a bit overwhelmed. We’re past the introductory material, we’ve covered a lot of new concepts and ideas, there’s the threat of a midterm on the horizon, and to make matters worse the same thing is happening in every other class as well. It is not surprising that this is also the part of the semester in which I get questions like “why are we learning this?”, “how do these assignments teach me about Unix?”, “we’ve covered a lot of commands, a lot of new ideas, and done several assignments, so what should I focus on?”, or “how does it all fit together?”

These are good questions to ask, and a good exercise to answer: for me, so that I can better shape the course to meet its goals, and for the participants in the course, to solidify the connections between different material and concepts. I will attempt to start that process by offering my own thoughts: learning Unix is as much (if not more) about learning a different way of thinking and problem solving with a computer as it is about learning how to use the terminal and acclimating yourself to the different GUI environments. And while we’re on the topic of user interfaces, there are several.

Rule of Diversity: Distrust all claims for “one true way”

Unlike Windows or OS X, which provide only a single graphical environment to their users and a single graphical toolkit to their developers, Unix systems have multiple options for both environments (e.g. Ubuntu’s Unity, GNOME Shell, KDE) and toolkits (GTK and Qt are the most common). This confusing jumble isn’t there just to make things needlessly annoying for users; in fact it is the result of one of Unix’s core philosophies: the user has a better idea of what he or she wants to do than the programmer writing an application. As a result, many decisions are pushed closer to the user level. This is sometimes listed as a downside to Unix/Linux, since it increases the perceived complexity of the system, but luckily many distributions select a sensible default for people who would rather jump right into using their computer than configure it. Ubuntu, with its Unity interface, is a good example. But it’s empowering to know that you aren’t stuck with Unity: you can install and use any of the other graphical environments as well, such as GNOME, KDE, LXDE, and more.

Moving on, but keeping the Rule of Diversity in mind, let’s revisit the examples we looked at in class. The first was related to the homework in which we were asked to read in lines in a white-space delimited record format containing fields for “First Name”, “Last Name”, “Amount Owed”, “City”, and “Phone Number”. A short example of such a file is

Bill Bradley 25.20 Blacksburg 951-1001
Charles Cassidy 14.52 Radford 261-1002
David Dell 35.00 Blacksburg 231-1003

We were then asked to write a program that would print out information for people in ‘Blacksburg’ in the order “Phone Number”, “Last Name”, “First Name”, “Amount Owed”. A straightforward way to solve this in Python is with the following code snippet:

for line in f:
    fields = line.split()
    if fields[3] == 'Blacksburg':
        record = [fields[4], fields[1], fields[0], fields[2]]
        print ', '.join(record)

In class we looked at an alternative solution using `imap` and a generator expression (a close cousin of list comprehension):

from itertools import imap

for fields in (r for r in imap(str.split, f) if r[3] == 'Blacksburg'):
    print ', '.join(fields[i] for i in [4,1,0,2])

Both of these examples can be found on GitHub. They both do the same thing; the first takes 5 lines, the second 2. I made use of a few convenience features to make this happen. The first is the `imap` function, the iterator version of the `map` function. The `map` function is common to many functional programming languages and implements the common task of applying a function (in this case `str.split`) to every element of a list (in this case `f`, the file object). This is an extremely common task in programming, but there is no analog in C. Luckily the STL algorithms library gives us `std::transform` in C++, though the syntax isn’t nearly as clean as Python’s.

So the big question is “if I’ve been implementing this idiom all along, without `map`, why change now?” The answer is that implementing it without `map` is guaranteed to take more lines of code, and we know that more lines mean a statistically higher chance of making a mistake. In addition, the implementation would look a lot like every other loop you have written in the same program, and you will find yourself pausing at it to ask “what am I trying to do here?” Once you learn the concept of `map`, using it is much more concise: looking at the call to `map` you know exactly what is going on, without having to mentally process a `for` or `while` loop.

This idea is generalized by list comprehension, which is what we’re doing with the rest of that line. Working with a list of things is really common in programming, and one of the common things we do with lists is generate new lists that are some filtered and transformed version of the original. List comprehension provides a cleaner syntax for transforming lists (similar to the set comprehension you may be familiar with from mathematics) than the traditional `for` or `while` loop would yield. More importantly, once you are familiar with the syntax, it lets you more quickly recognize what is going on.
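To see the `map` idiom on its own, here is a small self-contained sketch, with a list of strings standing in for the file object `f` (note that in Python 3, `map` is already lazy, so there is no separate `imap`):

```python
# Stand-in for the file object `f`: one record per line.
lines = [
    "Bill Bradley 25.20 Blacksburg 951-1001",
    "Charles Cassidy 14.52 Radford 261-1002",
    "David Dell 35.00 Blacksburg 231-1003",
]

# Explicit-loop version.
loop_records = []
for line in lines:
    fields = line.split()
    if fields[3] == 'Blacksburg':
        loop_records.append([fields[i] for i in (4, 1, 0, 2)])

# map + comprehension version, mirroring the class example.
map_records = [[r[i] for i in (4, 1, 0, 2)]
               for r in map(str.split, lines) if r[3] == 'Blacksburg']

assert loop_records == map_records
for record in map_records:
    print(', '.join(record))
```

Both versions build the same list of reordered records; the comprehension states the filter and the transformation in a single expression.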
For example, let’s look at two ways of computing a list of the Pythagorean triples for values 1 through 10:

triples1 = []
for x in xrange(1,11):
    for y in xrange(1,11):
        for z in xrange(1,11):
            if x**2 + y**2 == z**2:
                triples1.append((x,y,z))
print triples1

and now, using list comprehension:

triples2 = [ (x,y,z) for x in xrange(1,11)
                     for y in xrange(1,11)
                     for z in xrange(1,11)
                     if x**2 + y**2 == z**2 ]
print triples2

I’ve broken the second example across several lines so that it will all fit on the screen, but it could be left on a single line (see the full, working example) and still be just as readable. Right off the bat we can look at the second version and tell that `triples2` will be a list of tuples containing three values (x,y,z). We had to work our way down five levels of nested blocks to figure that out in the first example. And while you may not realize it because you’re so used to doing it, our brains have a much harder time following what is going on in a nested loop; it implies a specific hierarchy that is misleading for this problem.

Let’s shift gears just a bit and look at some of the commands I ran at the end of class. First I wanted to count all the lines of code in all of the *.py files in my current directory:

cat *.py | wc -l

Then I wanted to revise that and filter out any blank lines:

cat *.py | sed '/^$/d' | wc -l

And let’s also filter out any lines that contain only a comment:

cat *.py | sed '/^$/d' | sed '/^#.*$/d' | wc -l

(Note: we could have combined the two `sed` commands into one; I separated them to emphasize the idea of using a pipeline to filter data.) Next I wanted to know which modules I was importing:

cat *.py | grep '^import'

Say I wanted to isolate just the module names; I could use the `cut` command:

cat *.py | grep '^import' | cut -d ' ' -f 2

If you didn’t know about the `cut` command you could use sed’s `s` command to do a substitution using regular expressions. I will leave the implementation of this as an exercise for the reader.

We notice that there are a few duplicates, so let’s only print out unique names:

cat *.py | grep '^import' | cut -d ' ' -f 2 | sort | uniq

Exercise for the reader: why is the `sort` necessary?

And finally, let’s count the number of unique modules I’m using:

cat *.py | grep '^import' | cut -d ' ' -f 2 | sort | uniq | wc -l

I could have just shown you the final command and said “this prints the number of modules I’m using,” but I wanted to demonstrate the thought process of getting there. We started with a two-command pipeline and then built up the command one piece at a time. This is a great example of another core Unix philosophy: write simple programs that do one thing and one thing well, and write them with a consistent interface so that they can easily be used together. Now I admit, counting the number of modules this way required us to start up six processes. Luckily, process creation on Unix systems is relatively cheap by design. This had the intended consequence of creating an operating environment in which it made sense to build up complex commands from simpler ones, which in turn encouraged the design of simple programs that do one thing and one thing well. We could write a much more efficient program to do this task in C or another compiled language, but the point is, we didn’t have to. As you get more familiar with the simple commands you’ll find that there are many tasks like this that occur too infrequently to justify writing a dedicated program, but that can be pieced together quickly with a pipeline.

So what the heck do these two different topics, list comprehension and command pipelines, have in common? And why are we using Python at all? Well, Unix’s strength is that it provides a huge wealth of excellent tools and supports a large number of programming languages. It does everything an operating system can do to allow you, the developer, to pick the best tool for the job. As we mentioned before, when we’re developing a program the “best tool” usually means the one that will allow us to solve the problem in the fewest lines possible. Python’s syntax is much cleaner than that of C or C++, and its support for convenience features like list comprehension allows us to implement, in one easy-to-understand line, algorithms that might take several loops in a less expressive language.
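For comparison, here is a rough Python translation of the import-counting pipeline. The helper name `unique_imports` is my own, and like the shell version it assumes the `*.py` files of interest are in the current directory:

```python
import glob

def unique_imports(lines):
    # Mirrors: grep '^import' | cut -d ' ' -f 2 | sort | uniq
    return {line.split()[1] for line in lines
            if line.startswith('import ')}

# Gather module names over every *.py file in the current directory,
# then count them, like the final `wc -l`.
modules = set()
for path in glob.glob('*.py'):
    with open(path) as f:
        modules |= unique_imports(f)

print(len(modules))
```

A set does the `sort | uniq` deduplication for us; the point isn’t that one version is better, but that both environments let us compose the same small steps.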

This has been a rather long post; I hope you’re still with me. To summarize: don’t worry too much about memorizing every single command right away. That will come naturally as you use them more often (and a refresher is always just a quick call to `man` away). Instead, shift your thinking to a higher level of abstraction and always ask yourself “what tools do I have available to solve this problem?”, then try to pick the “best” one, whatever “best” means in the context you are in. Unix/Linux puts you, the user, and you, the developer, in the driver’s seat. It provides you with a wealth of knobs and buttons to press, but does little to tell you which ones it thinks you *should* press. This can be intimidating, especially coming from a Windows or OS X environment, which tends to make most of those choices for the user. That’s OK, and to be expected. With practice, you will learn to appreciate your newly discovered flexibility and will start having fun!

I want to know what you think! Share your thoughts on what we’ve gone over  in class, the assignments we’ve done, and the reading we’ve discussed.  How do you see it all fitting together?

Mary Poppins

I just finished up Ruby – Day 3 from Seven Languages in Seven Weeks. I can’t say I fully grok everything that I’ve done, but Ruby seemed fairly straightforward and in many ways similar to Python (though I have noticed Rubyists tend to be able to generate, at a moment’s notice, a long list of reasons why it’s better than Python).

I had originally wanted to learn Ruby because I’ve been trying to re-launch my website and blog using nanoc, which is written in Ruby. While strictly speaking it isn’t necessary to understand Ruby to use nanoc, any type of customization will require writing and manipulating Ruby code, and knowing me, I’ll want to do some customization. I feel I understand enough of the syntax, and a bit of the power features of the language, to get by understanding the bits of nanoc that I might be customizing. But I definitely have a long way to go, and a lot more experimenting to do, before I could say I’m comfortable with the language. I suppose that’s to be expected!

Maybe it’s due to my very preliminary understanding of it, but I feel kind of let down by all the hype. Perhaps it’s because Ruby is an OOP language, and so in many respects similar to Python, or even C++, in terms of how code is structured, but I didn’t get terribly excited about it as I was working with it. I’m really looking forward to the other languages in the book and to breaking away from OOP thinking. From the descriptions, it sounds like each of them encourages a different way of thinking when writing code and designing a program. Even though it’s last on the list, I’m going to start the Haskell section next, mainly because I’ve already dabbled in it a bit, and I’d like to write up an assignment using a functional programming language, so I should probably learn enough of one to actually write up a solution!

Comment on A better attempt by lilengineerthatcould

I’ve found that what works for me as far as synchronizing data across OSes (and across machines) is using “cloud” services whenever possible. In addition to the obvious contenders like gmail for mail, the Google Chrome browser has a nice sync feature that allows you to synchronize settings, bookmarks and even open tabs across devices (for all intents and purposes, two OSes dual booted on a single machine can be thought of as two separate devices). If you do want to use a local email client, instead of a webmail interface, be sure to tell your client to connect to the mail server via the IMAP protocol instead of POP. IMAP keeps the data on the server, making it possible to synchronize across multiple devices (though I don’t know if that applies to everything Outlook does).


Rule of Diversity: Distrust all claims for “one true way”.

I’ve been programming simulations and algorithms in C++ for several years now, but it’s only been in the last year or so that I’ve really begun to appreciate the advantages of diversifying one’s language repertoire. My conversion began after reading The Art of Unix Programming by Eric Raymond last year while exploring new material for ECE2524. In the book, Raymond lays out a list of design rules making up the Unix philosophy and explains how following these rules has produced clean, powerful, maintainable code, and how it has allowed Unix to so easily evolve, adapt, and flourish in the fast-paced world of technology from its roots in 1969, running on hardware that was under-powered even for the time.

The Rule of Diversity appears towards the end of the list, but it is interesting in that I feel it is one of the few rules that people, curricula, and corporations outside of the Unix community have yet to seriously embrace. I hope that people with a Computer Science background can contribute their own view, but in my experience with the limited software design instruction in my Engineering curriculum, and from talking to several other people, the focus has been strictly on Object Oriented Programming (OOP), usually using Java or C++.

As a result of this focus on OOP, programmers (including my past self) are encouraged to adopt a programming paradigm that may work well for some problems, but not for others. I’ve learned from first-hand experience that forcing an OOP framework on a problem that doesn’t intuitively lend itself to the features of the framework (data encapsulation, inheritance) leads to an impossible-to-maintain mess consisting of many layers of brittle “glue” code, and a splitting headache as soon as concurrent processing is thrown into the mix.

Discovering the Rule of Diversity was refreshing to me for many reasons, as I tend to reject dogma of any kind, but I hadn’t really been made aware of the alternatives when it came to programming. This process led me to learn Python, which I now use for the majority of my data visualization, and to become interested in Ruby for web development and in Haskell for learning about functional programming and how I might employ it to more elegantly implement the mathematical algorithms that are a large part of my field of study (Control Systems).

When a friend of mine recommended Seven Languages in Seven Weeks by Bruce A. Tate, I was intrigued by the idea of learning seven different programming languages, along with their strengths, weaknesses, histories, and accompanying programming philosophies. When I found out that two of the languages covered were Ruby and Haskell, I was sold.

I have decided to work my way through the book over the next 7 weeks, and write about my experience with each language.  So far I really enjoy the structure of the book, the motivations of the author, and in particular his method of associating each language with a unique fictional character:

  • Mary Poppins from Mary Poppins (1964) – (Ruby) because unlike other nannies of the time she “made the household more efficient by making it fun and coaxing every last bit of passion from her charges.” And she wasn’t afraid to use a little magic to accomplish her goals.
  • Ferris Bueller from Ferris Bueller’s Day Off (1986) – (Io) “He might give you the ride of your life, wreck your dad’s car, or both. Either way, you will not be bored.”
  • Raymond from Rain Man (1988) – (Prolog) “He’s a fountain of knowledge, if you can only frame your question in the right way.”
  • Edward Scissorhands from Edward Scissorhands (1990) – (Scala) “He was often awkward, was sometimes amazing, but always had a unique expression.”
  • Agent Smith from The Matrix (1999) – (Erlang) “You could call it efficient, even brutally so, but Erlang’s syntax lacks the beauty and simplicity of, say, a Ruby.” “Agent Smith … had an amazing ability to take any form and bend the rules of reality to be in many places at once. He was unavoidable.”
  • Yoda from Star Wars: Episode V – The Empire Strikes Back (1980) – (Clojure) “His communication style is often inverted and hard to understand”, “he is old, with wisdom that has been honed by time … and tried under fire.”
  • Spock from “Star Trek” (1966) – (Haskell) “…embracing logic and truth. His character has a single-minded purity that has endeared him to generations.”

Even though I’ve only begun working through the exercises in the Ruby section, I’ve already developed an intuition about the “personality” of each of these languages through Tate’s analogies. He has also chosen the perfect level of detail: not spending any time on the minutiae of syntax, or building up canonical examples, but focusing only on what makes each language unique and powerful, and expecting readers to explore on their own to fill in the gaps.

I’ve almost worked through the Ruby section, so expect a post about that soon. In the meantime, how do you feel about the Rule of Diversity? Have you been put off when teachers, mentors, or bosses treat a particular idea, framework, or concept as “The One True Way”?

What Makes Good Software Good?

On the first day of class (ECE2524: Introduction to Unix for Engineers) I asked participants the open-ended question “What makes good software good?” and asked them to answer both “for the developer” and “for the consumer”.

I generated a list of words and phrases for each sub-response and then normalized it based on my own intuition (e.g. I changed “simplicity” to “simple”, “easy to use” to “intuitive”, etc.). I then dumped the list into Wordle to generate these images:

Good Software for the Consumer


Good Software for the Developer


For a future in-class exercise I plan to ask participants to link the common themes that appear in these word clouds back to specific rules mentioned in the reading.