And Now You’re an Astronaut: Open Source Treks to The Final Frontier

There have been a couple of blog posts recently referencing NASA’s switch from Windows to Debian 6, a GNU/Linux distribution, as the OS running on the laptops aboard the International Space Station. It’s worth noting that Linux is no stranger to the ISS: it has been a part of ground control operations since the beginning.

The reasons for the space-side switch are quoted as:

…we needed an operating system that was stable and reliable — one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could.

This is satisfying to many Open Source/Linux fans in its own right: a collaborative open source project has once again proved itself more stable and reliable for the (relatively) extraordinary conditions of low Earth orbit than a product produced by a major software giant. Plus one for open source collaboration and peer networks!

But there’s another reason to be excited. And it’s a reason that would not necessarily apply to, say, Apple fanatics had NASA decided to switch to OS X instead of Debian. That reason has to do with the collaborative nature of the open source movement, codified in the many open source licenses under which the software is released. Linux and the GNU tools, which together make up a fully functional operating system, are released under the GNU General Public License. Unlike many licenses used for commercial software, the GPL ensures that software licensed under its terms remains free for users to use, modify, and redistribute. There are certainly strong criticisms and ongoing debate regarding some key aspects of the GPL, especially version 3; the point of contention mostly lies in what is popularly called the “viral” effect of the license: modified and derived works must also be released under the same license. The GPL might not be appropriate for every developer and every project, but it codifies the spirit of open source software in a way that is agreeable to many developers and users.

So what does this all mean in terms of NASA’s move? We already know that they chose GNU/Linux for its reliability and stability over the alternatives, but that doesn’t mean it’s completely bug free or will always work perfectly with every piece of hardware, which after all is another reason for the switch: no OS will be completely bug free or always work with all hardware, but at least Debian gives NASA the flexibility of making improvements themselves. And therein lies the reason for excitement. While there is no requirement that NASA redistribute their modified versions of the software, there is no reason to assume they won’t in most cases, and if they do, the modifications will be redistributed under the same license. It’s certainly realistic to expect they will direct a lot of attention to making the Linux kernel and the GNU tools packaged with Debian even more stable and more reliable, and those improvements will make their way back into the general distributions that we all use. This means better hardware support for all GNU/Linux users in the future!

And of course it works both ways. Any bug fixes you make and redistribute may make their way back to the ISS, transforming humanity’s thirst for exploring “the final frontier” into a truly collaborative and global endeavor.

What a great class, amirite!?

Ok, so some of you may not agree (yet), but if you know what’s good for you, you will soon enough! There is nothing better than a Unix class, and honest to goodness, there has not been a better version of this class than the one we are currently receiving! I took this class years ago and dropped it due to having too many credits and needing to relieve some stress. So what, you say? A legitimate response. Well, had I been in your position, learning everything we have already covered as a sophomore (or hell, at any age younger than mine), I would’ve been much better off in almost all of my later classes. I can say that for a fact. I love this class right now. I already know a lot of what we are learning, but that makes me excited for you all! I love seeing that the things I learned during my co-op experiences are being taught in classes! I love that so much because you all will be able to take more advantage of your co-op and internship experiences: instead of spending them learning things like git or GNU/Linux basics, you can learn all the specialties the company has to offer you. I cannot wait to see what you all do in the future. But hey, I will also be the guy keeping you on your toes, because I still know some things you don’t. If you want to know more, you now know where to find me.

Well, until next time everyone, stay awesome.

Mattie, out.

Current Tune (Song by Artist): “Wait (The Killabits Remix)” by Adventure Club && “Father Said (ft. Skrillex)” by 12th Planet
Current Preferred Language: Python
Current Fun: Making some fun tunes with Ableton Suite 8 :D and blogging with blogs.lt.vt.edu

Re: Dorm automation

I just recently saw Ben's post on Dorm Automation ideas and felt I should share something similar I've done. Last year, I lived in Barringer Hall, a horrible dorm lacking both air conditioning and a thermostat for the radiator. We had fans in the windows, but it became a huge pain to get out of bed to turn them on and off whenever I wanted to change the temperature. Using an Arduino, an ethernet shield, a WRT54G, and spare components, I set out to create an unnecessarily complex system for controlling my fan wirelessly and satisfying my laziness. As it turns out, this was extremely easy to do by hacking together a bunch of sample code, simple circuits, and bash scripts.

I installed OpenWrt on my router, a custom firmware designed to add lots of enterprise features to consumer hardware; it runs the Linux kernel, but not the full GNU userspace, due to a lack of flash memory. The router was already running an HTTP server, so I was able to just write a simple HTML page that included frames of external pages served by the Arduino. On the Arduino, I modified an example web server and had it also interface with an LM34 temperature sensor to display its output value, which was fairly easy to do. The hardware was equally simple: I only needed an NPN transistor to amplify the signal from a digital output pin and drive a solid state relay, which switched a 120 VAC extension cord. I ended up putting everything in a large enclosure to discourage questions from curious fire marshals. While the whole setup was one big kludge, it worked well, and I was the envy of my fellow engineering hallmates.

Unfortunately, I don't think I saved my final code, so I can't share it. I had planned to replace the whole setup with a more robust Python daemon, but I soon moved to an apartment with a real thermostat and disassembled it. In the future, I plan on replacing this with either something Python-based on a laptop that controls a parallel port, or a Cerebot board I have left over from 2534. My eventual goal is to have something more complex than Zack Anderson's setup.
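If I were to reconstruct it from memory, the shell side amounted to something like the sketch below. To be clear, this is not the lost original: the IP address and URL paths are placeholders made up for illustration.

#!/bin/sh
# fan.sh -- from-memory sketch of the fan-control client, not the original code.
# The modified Arduino web server toggled the relay pin based on the path
# requested; the IP and paths below are made up for illustration.
ARDUINO="192.168.1.50"

case "$1" in
    on)   wget -q -O - "http://$ARDUINO/on" > /dev/null ;;    # energize the relay
    off)  wget -q -O - "http://$ARDUINO/off" > /dev/null ;;   # release the relay
    temp) wget -q -O - "http://$ARDUINO/" ;;                  # page includes the LM34 reading
    *)    echo "usage: $0 {on|off|temp}" >&2; exit 1 ;;
esac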

Linuxifying my PC ‘scope, Part 2: Beginning

Today I started the process of creating a third-party driver for my USB oscilloscope, the Velleman PCSGU250. I began reading up on Linux device drivers in general (in Linux Device Drivers, 3rd Edition). It doesn’t seem all that difficult, to be honest. Of course, I’m only on chapter two, so we’ll see.

More importantly for this project, I figured out a method of actually determining what data gets sent to and from the oscilloscope during normal operation. At first, I thought the simplest thing would be to reboot into Windows and find some USB introspection tool that works there. So I did that, but I couldn’t actually find anything that seemed to work for examining USB packets. A bit of research revealed that it’s actually easier to sniff Windows USB packets in Linux than in Windows.

It’s pretty simple to set up. I have Windows running inside a KVM instance (I assume VirtualBox would work, too), with the USB device forwarded to the virtualized machine. It’s then possible to use what’s known as the “usbmon” kernel module to monitor the USB traffic on the port.

"But that’s a bit gross to wade through using the standard interfaces, isn’t it?" you ask. Yes, it is. But never fear! Turns out, libpcap, everyone’s favorite network packet analysis library, also supports USB packets! And as we all know, libpcap has an extremely user-friendly frontend known as Wireshark!

So here is a screenshot of my current setup:

You can download a couple of the captures I created, and read a little more about the project, here.
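For anyone who wants to reproduce the setup, the capture side boils down to just a few commands. This is a sketch assuming a libpcap built with USB capture support; the bus number will vary (run lsusb to find which bus the device is on, and note that usbmon0 captures all buses at once):

# usbmon exposes its data through debugfs
sudo mount -t debugfs none /sys/kernel/debug
sudo modprobe usbmon

# capture bus 1 to a pcap file
sudo tcpdump -i usbmon1 -w scope.pcap

Wireshark also lists the usbmonX interfaces directly, so you can watch the traffic live rather than writing out a capture file first.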

I hope that, over this break, I’ll actually be able to get a driver written for this device. In doing so, I expect to learn how to write Linux device drivers, and hopefully also save myself the money I would otherwise spend on a new DSO or a better PC scope that works with Linux.

Kerberos

Lately, I've been playing with Kerberos, an interesting protocol designed to solve a number of problems with mutual authentication. In larger networks, it's very convenient to have a central authentication database so users can use the same credentials across many machines. This would ordinarily be a simple problem to solve; however, workstations and other services can't be trusted. Since one rogue workstation can compromise everyone's credentials, you need a system that verifies both the authentication server and the client, which is where Kerberos comes in. Kerberos uses a "ticket-granting ticket" system in which users authenticate with a password to a centralized server, which gives them a token that can be used to prove their identity to any "Kerberized" service. This is extremely convenient for single sign-on applications, where it's a pain to enter a password for each service; additionally, users can manage a single password for all applications in a Kerberos realm without any security risks.

One thing I've found useful is Kerberized SSH access, which lots of large institutions (e.g. university departments) happen to offer. Ordinarily, you have to install an SSH key on every machine you'd like to use, or remember a password for each of them; under best practices, users have encrypted keys that require a password at each use. Kerberos can maintain a more secure environment while generally being more convenient, since tokens last for 24 hours and can't easily be stolen the way SSH keys can. Also, if a credential needs to be revoked, Kerberos can destroy all tokens at once, which is beneficial if you forget which servers your key is on.

If you happen to have Kerberos credentials, it's generally fairly simple to set up with SSH. I only needed to include these lines in my $HOME/.ssh/config:
Host *
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
Getting a Kerberos token tends to only require running kinit user@REALM, like so:
matt@badwolf> kinit user@ECE.VT.EDU
Password for user@ECE.VT.EDU:
matt@badwolf> klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: user@ECE.VT.EDU
  
Valid starting     Expires            Service principal
12/20/12 17:30:36  12/21/12 17:30:32 krbtgt/ECE.VT.EDU@ECE.VT.EDU
and then you can ssh without a password. If you don't want to enter your password to authenticate again, run kinit -R before your tokens expire. I've been meaning to daemonize this for convenience, but haven't had a need to lately. As a shameless plug, VTLUUG now offers free Kerberized shell accounts for those who come to meetings.
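For the daemonizing, a loop as simple as this would probably do. It's an untested sketch, and note that kinit -R can only renew tickets that were acquired as renewable in the first place (e.g. with kinit -r 7d):

#!/bin/sh
# Renew the Kerberos TGT every 8 hours until renewal fails.
while kinit -R; do
    sleep 28800    # 8 hours, comfortably inside the 24-hour ticket lifetime
done
echo "ticket renewal failed; run kinit again" >&2

(The krenew utility from the kstart package does the same job properly, with real daemonization.)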

Create an Assignment Assignment

I think this assignment was a great idea: it lets students have input into what they think the course should cover. For my assignment, I chose to have the reader write a bash script. Scripting is an important tool that Linux provides. It lets you execute a series of commands without typing each one in, and conditional statements can change what gets executed.

In my assignment, I had the reader create a bash script that checks for changes on a website. This would be particularly useful for watching for some sort of update. For instance, if you want to see whether VT has canceled class and don’t want to keep checking the website, you could use this script. The assignment I created specifies that the bash script do the following in an infinite loop:

  1. Download an HTML web page
  2. Wait some amount of time
  3. Download the same HTML web page
  4. Display the differences between the two files

In the finished script, I ended up using:

  1. wget
  2. cat
  3. sed
  4. sleep
  5. diff

While working on the final script, I found it easiest to remove all of the HTML tags with:

sed -e :a -e 's/<[^>]*>//g;/</N;//ba'

Then I used diff and piped its output to another sed command to get the formatting I wanted.
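Putting it all together, a minimal solution might look like the sketch below. The URL and interval are placeholders, and the final formatting pass through sed is left as a comment:

#!/bin/bash
# Watch a web page and print whatever changed since the last check.
URL="http://www.vt.edu"    # placeholder; use whatever page you care about
INTERVAL=300               # seconds to wait between downloads

strip_tags() {
    # Remove HTML tags, joining lines so tags that span lines are handled.
    sed -e :a -e 's/<[^>]*>//g;/</N;//ba'
}

wget -q -O - "$URL" | strip_tags > old.txt
while true; do
    sleep "$INTERVAL"
    wget -q -O - "$URL" | strip_tags > new.txt
    diff old.txt new.txt    # pipe through sed here to reformat the output
    mv new.txt old.txt
done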

More than anything, completing the assignment forced me to gain more in-depth experience with sed.

Final Project Near Completion

Our final project has reached completion. Everything works; the only things left to do are to play the game for hours on end trying to produce errors, and to catch spelling mistakes and fix spacing. Since the last post, a great deal of thought went into the fighting style of the game. We ended up implementing a Pokémon-style system, where you and the zombie take turns hitting each other. The player has a plethora of weapons that are obtained through navigating the game. Ammo is limited, and an infinite-use knife is provided at the beginning of the game. The fights also have random components, like weapon accuracy and the chance of running into a zombie.

The save and load game functions are also worth mentioning. Both make extensive use of Python’s pickle module, which handles object serialization: it can dump variables to files and load them back. To see more about this module, you can check it out here.

In all, this project has made extensive use of Python’s dictionaries to store all the needed data; even our list of available actions is stored in a dictionary. Needless to say, this project has given me greater insight into dictionaries, even more so than Homework 4, which dealt with maintaining an inventory of parts.

LUUG Meetings

Throughout the semester, we had an assignment to attend a Linux and Unix Users Group meeting for credit, so I decided to go check out the club. However, I was kind of overwhelmed by the information being discussed at the meeting I attended. The environment made it feel like everyone in the room had to be an experienced user of Linux or UNIX.

I definitely want to give the club the benefit of the doubt; I might have walked into an officers' meeting, or a session where veteran Linux and Unix users come to speak.

I feel like the club should have separate meetings for beginner and expert users, just so new users like myself don't get overwhelmed with new information. Or the club could set up workshops for installing Linux or for working on new projects.

Maybe I got the wrong impression of the club, but first impressions are tough to forget.

Passwords cracked

An article located here:

http://securityledger.com/new-25-gpu-monster-devours-passwords-in-seconds/

details the creation of a super password-cracking machine. For passwords with weak encryption, the machine can break them in around six minutes through brute force. It can try approximately 348 billion NTLM passwords per second. From reading the article, it seems they could crack any Windows XP password within six hours. Good thing we’re all running Linux now. But wait: further down in the article, they explain that they can also brute-force approximately 77 million MD5crypt passwords per second. Looks like Linux isn’t so safe either.
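The six-hour figure checks out as rough arithmetic, assuming eight-character passwords drawn from the 95 printable ASCII characters: that’s 95^8 ≈ 6.6 × 10^15 candidates, and at 3.48 × 10^11 guesses per second the whole space falls in about 19,000 seconds, or a little over five hours.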

In the article, they mention the two open source technologies used to create this monster: VCL (Virtual OpenCL) and HashCat (a password-cracking program). The process uses word lists or dictionaries to seed its guesses. Having just been to a cybersecurity meeting on this topic made the article much easier to read. You all should come out, and watch your passwords!

The Tides of Change?

As Linus Torvalds has mentioned in several video interviews, probably the main reason Linux has lagged behind in the desktop market is that it doesn’t come pre-installed on desktop hardware, and the average computer user just isn’t going to put forth the effort to install and configure a different operating system* than the one that came with their new machine. Recently Dell caused a bit of excitement with the release of the Ubuntu developer edition of their XPS 13 laptop. To be fair, this is not the first machine Dell has offered with Linux pre-installed, but it does seem to be the first they’ve pushed to the mainstream (or in this case, developer) community; in the past you really had to make an effort to find the Ubuntu option on their ordering form. Dell is also not the only desktop vendor to offer systems with Linux pre-loaded (indeed, many of the others exclusively offer Linux machines), but it is probably the brand with the most name recognition among the general audience. Could this be the beginning of the end of the Microsoft monopoly on the desktop OS market? I am optimistic!

*Be wary of blog posts and forum comments that recount stories of installing Linux, being frustrated with the difficulty of getting all the necessary drivers working, and using that as an argument that the OS isn’t “ready” for prime time. If you have ever installed Windows on a fresh new machine, you will be well aware that it can be just as frustrating. Windows doesn’t “just work” on the machines you buy because it is a superior OS (it isn’t); it works because system distributors like Dell take the time to make sure the necessary drivers for the particular hardware in the machine are all included.