About lilengineerthatcould

Nth-year Ph.D. student in Electrical Engineering, officially studying Control Systems but in reality doing more software simulation than anything else, who thinks a lot about the education system and how to make it better.

Move to the Dark Side

I have officially moved my blog away from blogs.lt.vt.edu to my own website: hazyblue.me, which will eventually host not only my blog, but also my teaching philosophy, CV and other professional-related tidbits. I hope that everyone will follow me over to the dark side as I continue to write about education, technology and talking cows. I would especially like to hear feedback on my latest post (inspired by the likes of Janet Murray and Alfred Whitehead), but since I haven't set up a commenting system yet, please respond via your own blog, a tweet, or an email directly to me!

Making Space

How to go about creating a space that facilitates learning? As we learned from Morningstar and Farmer, there is a danger of over-designing the space and often the users don’t end up using the space in the way the designer intended. So we shouldn’t over-design. But there should probably be some kind of structure, right? Or why create the space at all? – my brain

These are some of the thoughts that were going through my head as I set out to plan my final project. Throughout the semester a common focus of mine has been the lack of a space in my own field (Electrical & Computer Engineering) to facilitate making connections between the work that goes on in the department, both teaching and research, and the outside world.

I also became interested in many of the ideas Ivan Illich presented in chapter 6 of "Deschooling Society". Specifically, how would we go about creating a network to facilitate skill exchange and peer matching? Once we have a peer-to-peer network of learners, what role does an "elder" have, and how does one become an elder?

I began by looking at some existing peer networks and online communities that I have had either direct or indirect involvement in, to see how they addressed questions of structure, both in the space itself and in any structure imposed between peers that creates some sort of hierarchy, or the notion of an "elder".

1 The Linux Kernel

It has been several decades since Linus Torvalds’ original announcement of his new operating system:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
   Newsgroups: comp.os.minix
   Subject: What would you like to see most in minix?
   Summary: small poll for my new operating system
   Date: 25 Aug 91 20:57:08 GMT
   Organization: University of Helsinki

    Hello everybody out there using minix -

    I'm doing a (free) operating system (just a hobby, won't be big and
    professional like gnu) for 386(486) AT clones.  This has been brewing
    since april, and is starting to get ready.  I'd like any feedback on 
    things people like/dislike in minix, as my OS resembles it somewhat
    (same physical layout of the file-system (due to practical reasons)
    among other things). 

since that time the Linux kernel project has grown to be the largest collaborative endeavor in the history of computing. According to one measure, there were 1,316 developers involved with the version 3.2 release, each contributing an average of 10,935 lines of code to the project. The clip found on the Linux Foundation's website summarizes how this tremendous feat is possible:

There is a fairly simple and effective hierarchy in place: individual developers submit patches to senior kernel developers, who in turn review and sign off on patches before sending them onward to Linus Torvalds for final approval. This is the most dictatorship-like model of any of the networks I looked at, but I think it makes a lot of sense. The stability and usefulness of the Linux kernel depends on all 15 million lines of code working well with one another; a more ad-hoc "allow everyone to contribute, fix errors later" method such as the one embodied by Wikipedia probably wouldn't be effective for this project. It's worth noting that the rather strict structure doesn't prevent people from maintaining and contributing to forks of the kernel other than the one maintained by Torvalds. In fact, up until recently, Google maintained its own fork of the kernel for its Android operating system.

2 github.com

The version control tool git was created by the Linux development community in response to changes in the relationship between the open source community and the company responsible for BitKeeper, the version control system used up until 2005. A version control system, and git in particular, gives developers a powerful tool with which to track changes to software projects, and it facilitates collaboration by automating, to the extent possible, the workflow required to review changes made by contributors and merge them into one version. But to become a contributor, or find open source projects of interest, you need to know where to look. GitHub provides a hosting service for projects using the git revision control system, as well as some social-networking features to help users connect with developers and projects that interest them. Users create profiles, much like they would on other social networking sites, but rather than posts about current gastronomic adventures, GitHub generates feeds of updates and changes made to various software projects.

There isn't really a notion of a uniform social hierarchy, as everyone is the maintainer of their own projects and may act as a contributor to any other project. Individual projects may be "trending" or "featured" depending on activity.

3 stackoverflow.com

Still on the geek theme, stackoverflow is an invaluable resource to many software developers, novice or advanced. Organized into a question and answer format, anyone can post a question or an answer and the site provides tools to improve the quality of questions and up-vote helpful answers. There is a notion of currency, called “reputation”, that members earn through participating on the site. Gaining reputation can unlock certain privileges that are not available to all users. With enough reputation, individuals can be elevated to moderator status, a role I would associate with Illich’s notion of an “elder”.

4 couchsurfing.org

Up until this point, the sites I've surveyed have been what we would generally refer to as "virtual" spaces. The communities and interactions exist primarily in cyberspace. Couchsurfing is a bit different. At its core it is a network of travelers who open their homes to other travelers to provide a uniquely personal and rewarding experience not available from hotels or other travel industry offerings. It is focused on helping people explore the world by connecting with other people in a particular geographic area, remote or local, so many of the interactions take place in the "real" world. However, a thriving virtual community is also an integral part of the couchsurfing experience, in the form of discussion forums organized into groups on free-form topics ranging from geographic areas, to helping each other learn new languages, to everything in between.

One of the first questions asked by couchsurfing skeptics is "is it safe?" There seems to be a common fear among those who do not couchsurf that most strangers are creeps who would sooner steal your wallet than let you stay for free on their couch. While most of those who join couchsurfing do so because they naturally have a more optimistic view of humanity, safety is a high priority, and a clear set of community guidelines is published to help ensure everyone has a safe, enjoyable experience.

Most important, though, is the system of references that enables members to rate the experiences they have had with other members. References themselves do not directly lead to any increased privileges in the community: members are free to use the information as they see fit. Some opt to only host or otherwise connect with people who have received a lot of positive references, while others will accept hosts or guests with few or no references. I have never come across a profile with negative references. I suspect that this is not because there are no negative experiences (in fact I have heard of a couple from witnesses), but that people with negative references don't last long in the community. While there are many people who would help new members who do not have any references yet, I think few would choose to welcome someone who has violated the spirit of couchsurfing in some way.

Separate but related to the reference system is the vouching system. This is a more formalized approach to building trust networks and is a more integral part of the structure of the site in that being vouched for can grant certain privileges, namely, the ability to vouch for others. Unlike references, vouching is intended to ONLY apply to people who have met face-to-face, though the enforcement of that rule is left to the community itself.

5 illichvillich.net

The greatest challenge I have faced in the creation of a space for learning has been to identify which aspects of the previously mentioned successful virtual communities should be integrated, and how. At its core, the inspiration came from my understanding of the role English coffee houses played in the 17th and 18th centuries. While the idea of gathering around a particular beverage was not unknown at the time, the precursor to coffee was ale, and along with the switch from a depressant to a stimulant came a marked change in the intellectual culture surrounding these popular gathering spots. People from all walks of life were able to mingle and share ideas, and the space became an intellectual breeding ground separate from, but connected with, the universities of the time. I can only imagine that without any set curriculum the topics of discussion varied greatly with the interests of the participants and the local events of the time.

It goes without saying then (and yet, here I am, saying it), that topics must be defined and controlled by the members. But I don’t want to create yet another forum site.

Of the four networks defined by Illich to enable his vision of a deschooled society:

  • Reference Services to Educational Objects
  • Skill Exchanges
  • Peer-Matching
  • Reference Services to Educators-at-Large

I was most interested in building a Skill Exchange and Peer-Matching network. I feel that if I can create a space that is successful in that goal, the two "Reference Services" networks should follow organically; the primary challenge there seems to be collecting the information, and there are already well-defined implementations of "reference services".

On the face of it, implementing a skill exchange and peer-matching network is just a matter of defining a data structure that can associate a set of skills with a particular individual, and then an algorithm that will match people who have mutual skill-learning interests. That's the easy part. The challenge, which was brought up during the final presentation day, is that most people are probably not going to want to spend any amount of time entering the individual skills and interests they have and those they want to learn. Ideally, all a user would need to provide are links to existing blogging and micro-blogging feeds, in which case we would need an algorithm to parse the content already being created by each user and determine skill sets and interests from it. This is the challenge, and a complete linguistic analysis is well beyond my current knowledge of algorithm design. What would be relatively straightforward is grabbing tags from existing structured feeds that provide that information. It would still depend on self-reporting, but ideally self-reporting that is already being done anyway.
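To make the "easy part" concrete, here is a minimal sketch in Python (all names here, like Member and mutual_matches, are made up for illustration; this is not an actual illichvillich.net implementation): a small data structure associating skills with a person, and a brute-force matcher that pairs members with complementary teach/learn interests.

    from dataclasses import dataclass, field

    @dataclass
    class Member:
        """A participant in the skill exchange (hypothetical structure)."""
        name: str
        can_teach: set = field(default_factory=set)       # skills offered
        wants_to_learn: set = field(default_factory=set)  # skills sought

    def mutual_matches(members):
        """Yield (a, b, a_teaches, b_teaches) for every pair of members
        where each can teach something the other wants to learn."""
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                a_teaches = a.can_teach & b.wants_to_learn
                b_teaches = b.can_teach & a.wants_to_learn
                if a_teaches and b_teaches:
                    yield a, b, a_teaches, b_teaches

    # The tag sets could come from self-reported lists or from tags scraped
    # out of members' existing blog feeds.
    alice = Member("alice", can_teach={"python", "control theory"},
                   wants_to_learn={"spanish"})
    bob = Member("bob", can_teach={"spanish"},
                 wants_to_learn={"control theory"})

    for a, b, ab, ba in mutual_matches([alice, bob]):
        print(f"{a.name} <-> {b.name}: offers {ab}, receives {ba}")

The hard part described above, inferring can_teach and wants_to_learn from unstructured feeds rather than asking users to type them in, is exactly what this sketch punts on.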

The real challenge has been to figure out what more to do. What I have mentioned so far would be useful, but not particularly unique. In fact, couchsurfing already has a notion of "Learn, Teach, Share" built into its profiles, though the system doesn't really facilitate searching for people based on that information. The more I have talked with Illich enthusiasts, the more it becomes clear that the real potential lies in creating more of a platform for exploring ideas, rather than just a repository of information. The last brainstorming session I had quickly turned from the details of setting up a skill exchange network to the much more difficult questions of:

  • how is a unit of information stored?
  • who decides what a unit of information is?
  • how are individual units of information connected with one another?
  • how does a user effectively navigate a space of interconnected units of information?
  • how does a user track their personal path through a network of information nodes?
  • how does a user share their path through a network of information nodes?
  • who writes the information nodes?
  • should nodes have structural support for quizzes/practice exercises to help learners decide if they have mastered a particular topic well enough to move on to another?

And the implementation details involved with all of those are, of course, somewhat overwhelming at this point.

I plan to have another brainstorming session tonight with a different group of hackers and see where these ideas lead. In the meantime, I will continue getting a working implementation of a basic skill-matching network up at illichvillich.net, just as soon as I'm finished posting final grades.

And Now You're an Astronaut: Open Source Treks to The Final Frontier

There have been a couple of blog posts recently referencing the switch NASA made from Windows to Debian 6, a GNU/Linux distribution, as the OS running on the laptops aboard the International Space Station. It's worth noting that Linux is no stranger to the ISS, as it has been a part of ground control operations since the beginning.

The reasons for the space-side switch are quoted as

…we needed an operating system that was stable and reliable — one that would give us in-house control. So if we needed to patch, adjust, or adapt, we could.

This is satisfying to many Open Source/Linux fans in its own right: a collaborative open source project has once again proved itself more stable and reliable for the (relatively) extraordinary conditions of low Earth orbit than a product produced by a major software giant. Plus one for open source collaboration and peer networks!

But there's another reason to be excited. And it's a reason that would not necessarily apply (mostly) to, say, Apple fanatics had NASA decided to switch to OS X instead of Debian. That reason has to do with the collaborative nature of the open source movement, codified in the many open source licenses under which the software is released. Linux and the GNU tools, which together make up a fully functional operating system, are released under the GNU General Public License. Unlike many licenses used for commercial software, the GPL ensures that software licensed under its terms remains free for users to use, modify and redistribute. There are certainly some strong criticisms and ongoing debate regarding key aspects of the GPL, especially version 3; the point of contention mostly lies in what is popularly called the "viral" effect of the license: that modified and derived work must also be released under the same license. The GPL might not be appropriate for every developer and every project, but it codifies the spirit of open source software in a way that is agreeable to many developers and users.

So what does this all mean in terms of NASA's move? We already know that they chose GNU/Linux for its reliability and stability over the alternatives, but that doesn't mean it's completely bug free or that it will always work perfectly with every piece of hardware. That, after all, is another reason for the switch: no OS will be completely bug free or always work with all hardware, but at least Debian gives NASA the flexibility to make improvements themselves. And therein lies the reason for excitement. While there is no requirement that NASA redistribute their own modified versions of the software, there is no reason to assume they wouldn't in most cases, and if they do, the modifications will be redistributed under the same license. It's certainly realistic to expect they will be directing a lot of attention to making the Linux kernel and the GNU tools packaged with Debian even more stable and more reliable, and those improvements will make their way back into the general distributions that we all use. This means better hardware support for all GNU/Linux users in the future!

And of course it works both ways. Any bug fixes you make and redistribute may make their way back to the ISS, transforming humanity’s thirst for exploring “the final frontier” into a truly collaborative and global endeavor.

Something Funny about School (Part 1)

In the last "adult" class of the semester the reading du jour was Scott McCloud's "Time Frames", a comic about comics: specifically, how the passage of time, including motion, is depicted in the medium. The discussion that started as we talked about the gap between frames of a comic got me thinking…

...a dangerous pastime, yes I know

[comic panels 2 and 3]


Sometime in the not-so-distant past…
[comic panels: Heraclitus (1–3), the sea, Jack and Rose, the Titanic, the iceberg (1–2), the brachistochrone (1–2), charts (1–2)]

Not making this up: The Medium is the Message

I've been meaning to post something about the interesting interactions between the Unix commands fortune and cowsay. This is not that post. Long story short: fortune prints a random tidbit to the terminal, and cowsay takes an input string and draws an ASCII art cow with a speech bubble containing the text you sent to the command. In addition, there are a number of different "cows" that can be drawn, so pairing up random fortunes with random cows can be a parataxis gold mine. Here's the command I have in my bash login script that does just that:

fortune -s | cowsay -f $( find /usr/share/cows -name '*.cow' | shuf | head -n 1 )

More on that in the “real” post, but for now, I couldn’t pass this up which popped up when I opened a new terminal window just now:

 _________________________________ 
/ "The medium is the message." -- \
\ Marshall McLuhan                /
 --------------------------------- 
       \   \_______
 v__v   \  \   O   )
 (oo)      ||----w |
 (__)      ||     ||  \/\

ponder that for a bit.

As Scholars are Wont to do: Poetry of the brachistochrone problem

Time is an important dimension to consider when tracing the path of a media-represented concept. This is a theme that has stuck with me since reading Brenda Laurel’s “The Six Elements and Causal Relations Among Them” and coming across this brilliant bit of prose:

As scholars are wont to do, I will blame the vagaries of translation, figurative language, and mutations introduced by centuries of interpretation for this apparent lapse and proceed to advocate my own view. – Brenda Laurel, “The Six Elements and Causal Relations Among Them”

Which, it should be noted, already inspired a previous post.

The particular idea that Laurel was referring to was one first explored by Aristotle over 2000 years prior, and I was struck with the eerie feeling of being transported into a conversation spanning millennia as well as minds. Like Michelangelo chipping away at a block of marble to reveal the sculpture hidden within, poets and philosophers carve away at the layers of representation in an attempt to reveal the essence of a concept encased within.

It occurred to me that the same process is occurring in the sciences as well, although usually the bricks of representation [1] are mathematical symbols rather than linguistic ones. In a moment of serendipity, while I was musing over the conversation of this morning's class, I opened a book on Nonlinear Geometric Control Theory that, a few weeks back, I had checked out of the library on a whim. If the title means little to you, rest assured, it does to me as well. This is not a topic I am too familiar with, but for a reason I cannot fully explain, ever since I was introduced very briefly to differential forms in a real analysis class I have suspected that the language of differential geometry had the potential for elegant representations of optimal control problems. Lo and behold, I opened this book and immediately saw a paper in which the authors, Sussmann and Willems, explored several representations of the brachistochrone problem, concluding with a differential-geometric approach that they claimed as the most elegant thus far.

The brachistochrone problem

In 1696, Johann Bernoulli asked a question that has become a classical problem used as a motivating example for the calculus of variations and that rears its head again in optimal control [2]:

Given two points A and B in a vertical plane, what is the curve traced out by a point acted on only by gravity, which starts at A and reaches B in the shortest time? – The brachistochrone problem

The brachistochrone (shortest time) problem

The problem itself is older than Bernoulli: Galileo had conducted his own exploration in 1638 and incorrectly deduced the solution to be the arc of a circle [3]. Bernoulli's revitalization of the question is what led to the first correct answer, that the solution is a cycloid.
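For concreteness, here is the standard calculus-of-variations formulation of the problem (the textbook version, not the differential-geometric one Sussmann and Willems ultimately favor). Measuring \(y\) downward from A and releasing the bead from rest, conservation of energy gives the speed \(v = \sqrt{2gy}\), so the descent time to be minimized is

\[ T[y] = \int_0^{x_B} \sqrt{\frac{1 + y'(x)^2}{2\,g\,y(x)}}\,dx, \]

and the minimizing curve turns out to be the cycloid

\[ x(\theta) = R(\theta - \sin\theta), \qquad y(\theta) = R(1 - \cos\theta), \]

with the radius \(R\) and the parameter range chosen so that the curve passes through B.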

Sussmann and Willems summarize the various solutions to the problem as follows:

  1. Johann Bernoulli’s own solution based on an analogy with geometrical optics,
  2. the solution based on the classical calculus of variations,
  3. the optimal control method,
  4. and, finally, the differential-geometric approach

– Sussmann & Willems

[place holder for a more in-depth summary...ran out of time tonight]

They demonstrate how each successive method refines the solution space and eliminates unnecessary assumptions to approach what could be considered the essence of the problem itself. They end with the differential-geometric approach and the claim that it is thus far the best at elegantly capturing the problem, but it hardly seems like this will be the final word on such a well-traveled challenge.

So it seems the path the poet takes in exploring the nature of life is not all that dissimilar from the path the mathematician, scientist or engineer takes; only the tools differ, and even then, the difference is often overstated. They are all just different colors of bricks [1].

Footnotes and References

1 David, write about your brick analogy so I have something to link to!

2 worthy of an entirely separate post: through my digging around for more details about the brachistochrone problem I discovered a paper discussing it as a Large Context Problem, an approach to education that is of extreme interest to me and that I now have a name for!

3 The brachistochrone problem

Sussmann, Hector J. and Willems, Jan C. The brachistochrone problem and modern control theory

Nyquist frequency of a concept?

We get stories much faster than we can make sense of them, informed by cellphone pictures and eyewitnesses found on social networks and dubious official sources like police scanner streams. Real life moves much slower than these technologies. There’s a gap between facts and comprehension, between finding some pictures online and making sense of how they fit into a story. What ends up filling that gap is speculation. -Farhad Manjoo, Breaking News Is Broken

This past week in ECE3704 (Continuous and Discrete Systems) we have been exploring what happens to the information in a continuous-time signal when it is sampled, and again what happens when discrete samples are reconstructed into a continuous-time signal. Here is an example:

https://blogs.lt.vt.edu/shebang/files/2013/04/thumb.signal.png

The solid blue line is the continuous-time signal \(x(t)=e^{-t}\) and the black dots are samples of the signal taken at intervals of 0.5 seconds (the sampling period). This process of sampling a signal is a necessity of living in a (mostly) continuous-time world while processing data with a (theoretically) discrete-time device such as a digital computer. Once we have our information nicely digitized we can poke, prod and manipulate it using the wealth of digital tools at our disposal before converting it back to a continuous-time representation that we then observe, commonly in the form of visual or audible signals.

The question arises: what effect does the act of sampling and reconstruction have on the final output that we see and hear? How faithful is the input/output relationship between our original and reconstructed signals, using our digital technology, compared to the ideal relationship? An illustrative example would be the combination of wireless and wired networks that make up the communication channel used to transmit the data associated with a cell phone call. Ideally, the receiver would hear an exact transcript of the audible information as produced by the caller, or at least as exact as it would have been if the two were face to face. In reality the best we can usually hope for is an approximate reconstruction that is "good enough". Here is the reconstruction of our previously sampled signal, overlaid on the original for comparison.

[figure: thumb.signal_con.png]
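As a side note, this kind of experiment is easy to reproduce numerically. Here is a minimal sketch (my own illustration in Python/NumPy, not the code used to generate the figures above): sample \(x(t)=e^{-t}\) every 0.5 seconds, rebuild it with two naive reconstructions, and see how far each one strays from the original.

    import numpy as np

    T = 0.5                          # sampling period (s)
    t = np.linspace(0, 5, 1001)      # dense "continuous" time axis
    x = np.exp(-t)                   # original signal x(t) = e^{-t}

    n = np.arange(0, 5 + T, T)       # sample instants nT
    x_n = np.exp(-n)                 # samples x(nT)

    # Two simple reconstructions from the samples:
    x_zoh = x_n[np.searchsorted(n, t, side="right") - 1]  # zero-order hold
    x_lin = np.interp(t, n, x_n)                          # linear interpolation

    for name, xr in [("zero-order hold", x_zoh), ("linear interp", x_lin)]:
        print(f"{name}: max |error| = {np.max(np.abs(x - xr)):.4f}")

Even the ideal (sinc) reconstruction assumed by the sampling theorem would not recover \(e^{-t}\) exactly; the frequency-domain view below shows why.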

Clearly the reconstructed signal is not a faithful duplicate of the original, but why is this the case, and what could we do to make it better? We gain some insight by taking a Fourier transform of the sampled signal to generate a frequency spectrum which we can then compare with the frequency spectrum of the original.

[figure: thumb.magX.png]

The ability to view the signal from this vantage point makes some features of the sampling process easy to see that would not otherwise be obvious from the time-domain representation. Namely, we note that the frequency spectrum of the continuous-time signal can potentially live on the entire infinite frequency axis,

[figure: thumb.magXstar.png]

while the sampled signal is restricted to a finite frequency interval, the interval between plus and minus one half the sampling frequency, between the dashed red lines (it turns out that the frequency spectrum of every real-valued signal has a negative component that mirrors the positive spectrum. Who knew?).

The dashed red lines are drawn at plus and minus 2*pi radians/sec; the information starts repeating after this, as shown by the green curve. Note that 2*pi rad/sec is one half the sampling frequency of 4*pi rad/sec. Viewing the information in this form allows us to intuit why we might not be able to uniquely reconstruct all sampled signals perfectly: the act of sampling restricts the domain on which we can encode information to a finite interval. So we can conclude that sampled versions of continuous-time signals that make use of the entire infinite frequency axis will never contain all the information of the original signal, and for signals that are naturally bandlimited we will need to choose our sampling frequency in such a way that the finite frequency interval of the discrete-time signal is large enough to contain all the information in the original. This leads to the Nyquist sampling theorem, which states that if the sampling frequency is greater than twice the bandlimit of the original signal, then the signal can be uniquely reconstructed from its samples.
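A quick way to see the theorem's condition in action (a toy example of my own, not taken from the course materials): a 3 Hz cosine needs a sampling frequency above 6 Hz, so sampling it at only 4 Hz produces samples that are indistinguishable from those of a 1 Hz cosine (the alias).

    import numpy as np

    fs = 4.0                          # sampling frequency (Hz), below the 6 Hz Nyquist rate
    n = np.arange(0, 2, 1 / fs)       # two seconds worth of sample instants

    x3 = np.cos(2 * np.pi * 3 * n)    # samples of a 3 Hz cosine
    x1 = np.cos(2 * np.pi * 1 * n)    # samples of a 1 Hz cosine (the alias of 3 Hz at fs = 4 Hz)

    print(np.allclose(x3, x1))        # True: the two are identical at the sample instants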

In a recent post, Adam commented that we (humans, though this most likely applies just as well to any non-humans able to read and comprehend this) engage in a sampling and reconstruction process every time we communicate a thought. Concepts and ideas live in the continuous domain (at least, so it seems; not being an expert in neuroscience, perhaps one could make a sound argument that thoughts are in fact discrete, but for today's purposes I think it would not be egregiously inaccurate to compare them to continuous-time signals), and yet there are only so many words available to us when communicating those thoughts. What's more, we can't be sure that another sentient being will hear the words we are using and reconstruct our original thought perfectly. In fact, it's likely that this imperfect reconstruction of communicated thought results in a great deal of innovation and creativity and "thinking outside the box", so it's certainly not always a bad thing, just a thing. But it's a thing we don't really have any tools to quantitatively analyze. How much of the original information was lost or distorted by the conversion into language or another medium? How far from the original thought is the reconstructed thought (assuming we can even define a metric space for concepts)?

It would seem that some thoughts, like signals, have bandlimited information content, while others may not. The feeling expressed by the phrase “I am thirsty” is fairly well understood (even if we don’t really understand what the essence of “I” is). There are some variations: “I am very thirsty”, “I am parched”, etc. but I’m going to go out on a limb and say that that particular thought can be accurately communicated in a finite number of words (generally about 3). I’m not sure I could make that claim about some others, like the concept of “I”. Are there more parallels between sampling theory and communication through a medium? It would seem that like signals, some ideas can be sampled and reconstructed accurately, while others can not. Are there any tools available that parallel Fourier analysis for signals that could yield a different view of the information contained in a raw idea or concept? Does it even make sense to talk about such a tool?

ToDo: Make ECE2524 Obsolete

Why would I want to eliminate the course that I've been teaching for the past four semesters, into which I have put so many hours updating content, creating new assignments, and writing (and re-writing each semester… another topic altogether) a set of scripts to facilitate reviewing stacks of programming assignments, and with which I have generally had a great time?

Well, because I don't think it should be a separate course to begin with. As many have noted, and I have agreed, ECE2524 is in many respects a kind of "catch-all" course for all those really important topics and tools (version control, anyone?) that just don't get covered anywhere else. It is also officially intended (though not rigorously enforced in the form of a prereq) to be an introduction to more advanced software engineering courses, so it has the general feel of a programming course.

I think programming (and *nix OS usage and philosophy) is too important to relegate to a 2-credit course and treat separately from the rest of the engineering curriculum, an idea that solidified after reading an excerpt from Mindstorms by Seymour Papert.

I began to see how children who had learned to program computers could use very concrete computer models to think about thinking and to learn about learning and in doing so, enhance their powers as psychologists and as epistemologists.

Papert is a strong advocate of introducing computer programming to children at an early age and using it as a tool to learn other disciplines:

The metaphor of computer as mathematics-speaking entity puts the learner in a qualitatively new kind of relationship to an important domain of knowledge. Even the best of educational television is limited to offering quantitative improvements in the kinds of learning that existed without it… By contrast, when a child learns to program, the process of learning is transformed. It becomes more active and self-directed. In particular, the knowledge is acquired for a recognizable personal purpose.

It goes without saying that a solid understanding of math is crucial for any of the STEM fields, but computers and programming can also encourage engagement with other fields as well, though that is not the focus of this post.

Along with being a useful skill to have, programming teaches a systematic way of thinking about a problem and, crucially, shifts the model of learning from one that embodies a binary "got it"/"got it wrong" state to one that encourages the question "how do I fix it?". As Papert notes, and I can personally attest, when writing a program you never get it right the first time. Becoming a good programmer means becoming an expert at tracking down and fixing bugs.

If this way of looking at intellectual products were generalized to how the larger culture thinks about knowledge and its acquisition, we all might be less intimidated by our fears of “being wrong.”

Some strong arguments for the symbiosis of programming and learning valuable thinking skills at an early age. But the benefits don't disappear at the college level, especially in a field such as engineering, in which programming is a valuable skill in its own right (there are several required classes on the subject, so you know it must be important; slight sarcasm, but it's true, regardless of how cynical we agree to be about the way classes are structured and the curriculum is built for us). If programming can help us engage with learning mathematics, and as a side effect get us thinking about how we think and shift our view of learning to a more constructive one, then can't we get at least the same positive effects if we apply it to more advanced concepts and ideas? It doesn't hurt that a good chunk of engineering is mathematics anyway.

The wheels really started turning after the first day of guest-lecturing for Signals & Systems. Here's a course that is a lot of math, but critically foundational for learning how to learn about how the world works. That may seem a little embellished, especially to those not familiar with the field (Signals & Systems crash course: a system is anything that takes an input signal and produces an output signal, e.g. a car (input is gas/brake, output is speed), a heart beat (input is the electrical signal transmitted along nerves, output is muscle contraction or blood flow), the planet (so many systems, but treating atmospheric concentrations of CO2 and other gases as an input and the average global temperature as an output would be one example of a system we would be interested in studying)). Signals & Systems provides a set of tools for exploring the input/output relationships of… anything.

So why is it taught from a set of slides?

What better way to really engage with and understand the theory than to USE it? Now, most educational budgets wouldn't be able to cover the costs if everyone wanted to learn the input/output behavior of their own personal communications satellite, but the beauty of Signals & Systems, and the mathematical representations it embodies, is that everything can be simulated on a computer. From the velocity of a car, to the blood flow caused by a beating heart, to the motion of the planets and beyond.

I envision a Signals & Systems course that is mostly programming. People will argue that the programming aspect of the material is just the "practical implementation", and that while that's important, the theory is critical. Yes, the theory is what helps us develop a generalized insight into different ways of representing different types of systems, and it is what allows us to do a good deal of design in a simulated environment with greatly reduced risk, especially when, say, designing new flight controls for a commercial jet.

But I think the theory can be taught alongside the programming for a much richer experience than is obtained by following a set of slides. You want to understand how the Laplace transform works? What better way than to implement it on a computer? I guarantee you, if you have to write a program that calculates the Laplace transform of an arbitrary input signal, by the time you're done with the debugging you're going to have a pretty good understanding of what's going on, not to mention a slew of other important experiences (how do you solve an integral on a computer anyway?).
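To make that last parenthetical concrete, here is one naive answer, a sketch I am adding purely for illustration (it is not course material): approximate \(X(s) = \int_0^{\infty} x(t)e^{-st}\,dt\) by truncating the upper limit and summing small rectangles, then sanity-check against a transform pair you already know.

    import numpy as np

    def laplace_numeric(x, s, t_max=50.0, dt=1e-3):
        """Crude numerical Laplace transform: truncate the integral at t_max
        and approximate it with a Riemann sum of width dt."""
        t = np.arange(0.0, t_max, dt)
        return float(np.sum(x(t) * np.exp(-s * t)) * dt)

    # Known pair: x(t) = e^{-t}  <->  X(s) = 1 / (s + 1)
    for s in [0.5, 1.0, 2.0]:
        approx = laplace_numeric(lambda t: np.exp(-t), s)
        print(f"s = {s}:  numeric = {approx:.5f},  exact = {1 / (s + 1):.5f}")

Getting even this far forces you to confront truncation and step-size error, which is exactly the kind of understanding the debugging process buys you.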

Talking about the differences between continuous-time systems and discrete-time systems is taken to a whole new level when you start trying to simulate a continuous-time system on a computer, which is very much a discrete-time system. How do you even do that? Is it sufficient to just use a really, really small time step?
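One concrete way to poke at that question (again, a toy sketch of my own, not course code): simulate the continuous-time system \(\dot{x} = -x\), whose exact solution from \(x(0)=1\) is \(e^{-t}\), with the simplest possible discrete-time approximation (forward Euler) and watch the error shrink as the step size does.

    import numpy as np

    def euler(f, x0, t_end, dt):
        """Forward Euler simulation of dx/dt = f(x): step the state forward
        in increments of dt until t_end."""
        x = x0
        for _ in range(int(round(t_end / dt))):
            x = x + dt * f(x)
        return x

    t_end = 5.0
    exact = np.exp(-t_end)        # exact value of x(5) for dx/dt = -x, x(0) = 1

    for dt in [0.5, 0.1, 0.01, 0.001]:
        approx = euler(lambda x: -x, 1.0, t_end, dt)
        print(f"dt = {dt:6.3f}   x(5) ~ {approx:.6f}   error = {abs(approx - exact):.2e}")

So a "really really small time step" does help here, but it is not the whole story: stiffer systems and longer horizons push you toward better integration schemes, which is precisely the kind of question a simulation-first course gets to wrestle with.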

So yes, I think the best case scenario would be one in which ECE2524: Intro to Unix for Engineers is obsolete [1]. Not because the topics we cover are unimportant; quite the contrary, they are so important that they should be providing a framework for learning engineering.

Footnotes:

1 I've focused primarily on the programming aspect of ECE2524 here, but those of you who know me and have taken the course with me know that the Unix philosophy is a big part of it as well. Integrating the programming aspects into other coursework would of course not address that. I'm sure, with a little thought, we could all come up with a clever way of introducing the *nix philosophy, and more generally the whole concept of thinking about a philosophy when thinking about engineering, and what that even means, into every other course. Because, well, it should be an integral part of everything else we learn.

Both sides of auto-grading argument miss the point

A recent story in the New York Times covers a software program by the nonprofit EdX that will soon be available for free to any institution that wants to use it. Using sophisticated machine learning algorithms to train its artificial intelligence, the software grades essays and short-response questions and provides nearly instant feedback. Naturally, there are strong supporters of the new software, touting it for "freeing professors for other tasks" (like what?). And just as naturally, there are strong critics, who have formed a group called Professors Against Machine Scoring Essays in High-Stakes Assessment. From the group's petition:

Let’s face the realities of automatic essay scoring. Computers cannot “read.” They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.

While criticism is certainly warranted, I find the quote to be somewhat bullish. Can these people really claim that they understand how they are able to read and measure the essentials of effective written communication well enough to look at a computer and say with confidence, "that cannot do what I am doing, and here's why"? It may well be that current AI programs do not have the ability to comprehend written communication to the degree necessary to assign grades, but to argue that the software shouldn't be used because "computers cannot 'read'", as if that were a self-evident fact, is just poor communication.

Now to be fair, I disagree with the supporters of the software as well.

“There is a huge value in learning with instant feedback,” Dr. Agarwal said. “Students are telling us they learn much better with instant feedback.”

Ok, well, not that part; I agree with that part in principle. But what kind of feedback? Supposedly the software can generate a grade and also comment on whether or not the essay was "on topic". So a student could get instant feedback, which is great, and then edit and modify, which is great, and resubmit, which is also great… and then what? What would they be learning?

I promise to be highly skeptical of any answer to that question that isn’t “how to write an essay that receives high marks from an automatic grading AI”.

All this talk about feedback. What about feedback for the professor? I find reading through 60 essays just as tedious and time-consuming as the next out-of-place grad student in a department that doesn't value teaching, but I also recognize that reading those essays is a valuable way for me to gauge how I'm doing. Are the concepts that I think are important showing up? Are there any major communication issues? What about individuals: are some struggling, and what can I do to help? How will I learn my students' personalities and how they might affect their personal engagement with the material? How will I learn to be a better educator?

Granted, even though 60 feels overwhelming, it's nowhere near 200 or more. I can't even imagine trying to read through that many assignments myself; I'm confident that if I were forced to, I would not emerge with my sanity intact. This problem does not go unaddressed:

With increasingly large classes, it is impossible for most teachers to give students meaningful feedback on writing assignments, he said. Plus, he noted, critics of the technology have tended to come from the nation’s best universities, where the level of pedagogy is much better than at most schools.

“Often they come from very prestigious institutions where, in fact, they do a much better job of providing feedback than a machine ever could,” Dr. Shermis said. “There seems to be a lack of appreciation of what is actually going on in the real world.”

An "A" for recognizing the problem. But the proposed solution is nothing more than a patch. In fact, it's worse, because it is a tool that will enable the continued ballooning of class sizes. And at what expense? Why don't you rethink your solution and have it on my desk in the morning. I can't promise instant feedback, but maybe, just maybe, the feedback provided will be the start of moving in a direction that actually addresses the underlying problems, rather than just using technology to hide them.

Cheating by the Rules

Is it still cheating if the rules are made up?

Successful and fortunate crime is called virtue.

– Seneca

During our discussion about Lucasfilm’s Habitat earlier in the week we talked a lot about how the lessons learned by the designers for effectively building a virtual world mirrored a lot of what we knew about building a virtual world in the real world.

Woah, what? Don’t I mean mirrored by our real actions in the real world? Or the real things we have built in the real world?

Not really: the more I thought about it, the more I noticed similarities between the virtual environment that Morningstar and Farmer built and our own world, commonly called "the real world".

Keeping “Reality” Consistent

As we have discussed before in class, it seems that most people understand on some level that their own core values differ, sometimes significantly, from what our society visibly places value on. We talked about what it means to truly experience life, and many of us painted images of being in nature and absorbing all the sights, sounds, and sensations that were available. We know what we need to sustain life (sufficient food and shelter) and we know what we need to be happy (close, meaningful relationships with those around us). And yet look at the world we have created for ourselves. Today, we essentially live in what Morningstar and Farmer called an "experiential level": the level in which the rules that we follow are the rules that were constructed to facilitate the illusion. It is somewhat removed from the "infrastructure level", and it seems that as time goes on there is less and less "leakage" between the two levels. Morningstar and Farmer would be proud.

To be fair, they were writing in the context of a game designed to let a player (temporarily) allow themselves to be overcome by the illusion for entertainment. If that's the goal, then yes, a leak-free relationship between the "infrastructure level" and the "experiential level" would seem necessary. But have we inadvertently set up the same type of structure in our "real" world?

It is both interesting and suggestive that we often use the same word to describe the rules we use to govern ourselves as we do for the rules we have deduced govern the universe. The laws of physics are not ones that are often showcased in our courts of law, and yet the concept of a "law" seems to be somehow applicable to both. Is it surprising, then, that we often think a particular action is impossible because "it's against the law"?

There was a time when many protested that humans were not meant to fly, for it went against the laws of nature. Gravity, and a dense body structure, kept us firmly rooted on the ground; who were we to argue with the laws of nature? And yet, we figured out a way to cheat.

But it's not really cheating if we're playing by the rules. We just learned the rules well enough to discover a loophole. Of course, I am somewhat intentionally misconstruing the situation. The "law" of gravity never said "humans may not fly" (and for now let's ignore the pesky question of whether or not we are flying, or our machines are and we're just along for the ride). The point is, we are continuously refining our understanding of the "laws" of nature, but the laws themselves, the underlying equations that govern the universe, are not being modified by our increased understanding.

Our own laws and abstractions are of course much more mutable, but it makes sense that we wouldn't treat them as such. After all, laws would quickly lose their meaning if we were re-writing them willy-nilly. But I wonder if sometimes we are so immersed in our own virtual reality that we forget that it is virtual.

The recent series of financial "crises" comes to mind. During the whole debacle, every time someone talked about the impending doom that would be upon us if we didn't act (or if we did, or if we acted incorrectly…), I wanted to shake them and say, "you do know we're making this all up, don't you?"

It’s interesting that people will express surprise, skepticism, or disbelief when they encounter a gamer who has exchanged virtual goods for real world money to purchase and consume actual physical food. “People actually will pay you for something that’s not even real?”

Why are they so shocked? People on Wall Street have known this for decades.

[image: steroids.png]