Move to the Dark Side

I have officially moved my blog away from blogs.lt.vt.edu to my own website: hazyblue.me, which will eventually host not only my blog but also my teaching philosophy, CV, and other professional tidbits. I hope everyone will follow me over to the dark side as I continue to write about education, technology, and talking cows. I would especially like to hear feedback on my latest post (inspired by the likes of Janet Murray and Alfred Whitehead), but since I haven't set up a commenting system yet, please respond via your own blog, a tweet, or an email directly to me!

ToDo: Make ECE2524 Obsolete

Why would I want to eliminate the course I've been teaching for the past four semesters, the course into which I've put so many hours updating content, creating new assignments, and writing (and re-writing each semester… another topic altogether) a set of scripts to facilitate reviewing stacks of programming assignments, and with which I've generally had a great time?

Well, because I don't think it should be a separate course to begin with. As many have noted, and I have agreed, ECE2524 is in many respects a kind of "catch-all" course for all those really important topics and tools (version control, anyone?) that just don't get covered anywhere else. It is also officially an introduction to more advanced software engineering courses (though it is not rigorously enforced as a prerequisite), so it has the general feel of a programming course.

I think programming (and *nix OS usage and philosophy) is too important to relegate to a two-credit course and treat separately from the rest of the engineering curriculum, an idea that solidified for me after reading an excerpt from Mindstorms by Seymour Papert.

I began to see how children who had learned to program computers could use very concrete computer models to think about thinking and to learn about learning and in doing so, enhance their powers as psychologists and as epistemologists.

Papert is a strong advocate for introducing computer programming to children at an early age and using it as a tool for learning other disciplines:

The metaphor of computer as mathematics-speaking entity puts the learner in a qualitatively new kind of relationship to an important domain of knowledge. Even the best of educational television is limited to offering quantitative improvements in the kinds of learning that existed without it… By contrast, when a child learns to program, the process of learning is transformed. It becomes more active and self-directed. In particular, the knowledge is acquired for a recognizable personal purpose.

It goes without saying that a solid understanding of math is crucial for any of the STEM fields, but computers and programming can encourage engagement with other fields as well, though that is not the focus of this post.

Along with being a useful skill to have, programming teaches a systematic way of thinking about a problem, and crucially shifts the model of learning from one that embodies a “got it” and “got it wrong” binary state to one that encourages the question “how do I fix it?”. As Papert notes, and I can personally attest, when writing a program you never get it right the first time. Becoming a good programmer means becoming an expert at tracking down and fixing bugs.

If this way of looking at intellectual products were generalized to how the larger culture thinks about knowledge and its acquisition, we all might be less intimidated by our fears of “being wrong.”

These are strong arguments for the symbiosis of programming and valuable thinking skills at an early age. But the benefits don't disappear at the college level, especially in a field like engineering, where programming is a valuable skill in its own right (there are several required classes on the subject, so you know it must be important; slight sarcasm, but it's true, regardless of how cynical we agree to be about the way classes are structured and the curriculum is built for us). If programming can help us engage with learning mathematics, and as a side effect get us thinking about how we think and shift our view of learning to a more constructive one, then can't we get at least the same positive effects if we apply it to more advanced concepts and ideas? It doesn't hurt that a good chunk of engineering is mathematics anyway.

The wheels really started turning after the first day of guest-lecturing for Signals & Systems. Here's a course that is a lot of math, but critically foundational for learning how to learn about how the world works. That may seem a little embellished, especially to those not familiar with the field, so here is a Signals & Systems crash course: a system is anything that takes an input signal and produces an output signal, e.g. a car (input is gas/brake, output is speed), a heartbeat (input is the electrical signal transmitted along nerves, output is muscle contraction or blood flow), or the planet (so many systems, but treating atmospheric concentrations of CO2 and other gases as the input and the average global temperature as the output would be one example of a system we would be interested in studying). Signals & Systems provides a set of tools for exploring the input/output relationships of… anything.

So why is it taught from a set of slides?

What better way to really engage with and understand the theory than to USE it? Now, most educational budgets couldn't cover the costs if everyone wanted to learn the input/output behavior of their own personal communications satellite, but the beauty of Signals & Systems, and of the mathematical representations it embodies, is that everything can be simulated on a computer: from the velocity of a car, to the blood flow driven by a beating heart, to the motion of the planets and beyond.

I envision a Signals & Systems course that is mostly programming. People will argue that the programming is just the "practical implementation", and that while that's important, the theory is what's critical. Yes, the theory is what gives us generalized insight into different ways of representing different types of systems, and it is what allows us to do a good deal of design in a simulated environment with greatly reduced risk, especially when, say, designing new flight controls for a commercial jet.

But I think the theory can be taught alongside the programming for a much richer experience than is obtained by following a set of slides. You want to understand how the Laplace transform works? What better way than to implement it on a computer? I guarantee you, if you have to write a program that calculates the Laplace transform of an arbitrary input signal, by the time you're done debugging you'll have a pretty good understanding of what's going on, not to mention a slew of other important experiences (how do you solve an integral on a computer, anyway?).
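To make that last question concrete, here is a minimal sketch of the kind of thing I have in mind, in Python; the truncation at t_max and the trapezoidal rule are my own arbitrary choices, not anything prescribed by the course:

```python
# A crude numerical Laplace transform: F(s) = integral from 0 to infinity
# of f(t) * exp(-s*t) dt, truncated at t_max and approximated with the
# trapezoidal rule. Intuition-building code, not a production integrator.

import math

def laplace(f, s, t_max=50.0, dt=1e-3):
    total = 0.0
    t = 0.0
    prev = f(0.0) * math.exp(-s * 0.0)
    while t < t_max:
        t += dt
        cur = f(t) * math.exp(-s * t)
        total += 0.5 * (prev + cur) * dt  # one trapezoidal slice
        prev = cur
    return total

# Sanity check: the transform of f(t) = exp(-a*t) should be 1 / (s + a).
a, s = 2.0, 3.0
print(laplace(lambda t: math.exp(-a * t), s))  # roughly 0.2
print(1.0 / (s + a))                           # exactly 0.2
```

Even a toy like this forces you to confront truncation, step size, and floating-point error, which is exactly the kind of understanding slides alone don't deliver.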

Talking about the differences between continuous-time and discrete-time systems is taken to a whole new level when you start trying to simulate a continuous-time system on a computer, which is very much a discrete-time system. How do you even do that? Is it sufficient to just use a really, really small time step?
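And here is a minimal sketch of why that last question is more subtle than it looks (again Python; the example system and the step sizes are ones I made up for illustration):

```python
# Forward-Euler simulation of the continuous-time system dx/dt = -5*x,
# x(0) = 1, whose exact solution is x(t) = exp(-5*t). The only knob is
# the step size dt: shrink it and the discrete simulation approaches the
# continuous answer; make it too large and the simulation blows up.

import math

def simulate(dt, t_end=1.0):
    x, t = 1.0, 0.0
    while t < t_end:
        x += dt * (-5.0 * x)  # Euler step: x[k+1] = x[k] + dt * f(x[k])
        t += dt
    return x

exact = math.exp(-5.0)
for dt in (0.5, 0.1, 0.01, 0.001):
    print(f"dt={dt:<6} simulated={simulate(dt): .5f}  exact={exact:.5f}")
```

Run it and you'll see the dt=0.5 case diverge entirely, which is a far more memorable lesson about sampling and stability than a bullet point on a slide.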

So yes, I think the best-case scenario would be one in which ECE2524: Intro to Unix for Engineers is obsolete[1]. Not because the topics we cover are unimportant; quite the contrary, they are so important that they should be providing a framework for learning engineering.

Footnotes:

[1] I've focused primarily on the programming aspect of ECE2524 here, but those of you who know me and have taken the course with me know that the Unix philosophy is a big part of it as well. Integrating the programming aspects into other coursework would of course not address that. I'm sure, with a little thought, we can all think up a clever way of introducing the *nix philosophy, and more generally the whole concept of thinking about a philosophy when thinking about engineering (and what that even means), into every other course. Because, well, it should be an integral part of everything else we learn.

Both sides of auto-grading argument miss the point

A recent story in the New York Times covers a software program from the nonprofit EdX that will soon be available for free to any institution that wants to use it. Using sophisticated machine learning algorithms to train its artificial intelligence, the software will grade essays and short-response questions and provide nearly instant feedback. Naturally there are strong supporters of the new software, touting it for "freeing professors for other tasks" (like what?). And just as naturally there are strong critics, who have formed a group called Professors Against Machine Scoring Essays in High-Stakes Assessment. From the group's petition:

Let’s face the realities of automatic essay scoring. Computers cannot “read.” They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others.

While criticism is certainly warranted, I find the quote to be somewhat bullish. Can these people really claim that they understand how they are able to read and measure the essentials of effective written communication well enough that they can look at a computer and say with confidence, "that cannot do what I am doing, and here's why"? It very well may be that current AI programs do not have the ability to comprehend written communication to the degree necessary to assign grades, but to argue that the software shouldn't be used because "computers cannot 'read'", as if that were a self-evident fact, is just poor communication.

Now to be fair, I disagree with the supporters of the software as well.

“There is a huge value in learning with instant feedback,” Dr. Agarwal said. “Students are telling us they learn much better with instant feedback.”

Ok, well, not that part; I agree with that part in principle. But what kind of feedback? Supposedly the software can generate a grade and also comment on whether or not the essay was "on topic". So a student could get instant feedback, which is great, and then edit and modify, which is great, and resubmit, which is also great… and then what? What would they be learning?

I promise to be highly skeptical of any answer to that question that isn’t “how to write an essay that receives high marks from an automatic grading AI”.

All this talk about feedback. What about feedback for the professor? I find reading through 60 essays just as tedious and time-consuming as the next out-of-place grad student in a department that doesn't value teaching, but I also recognize that reading those essays is a valuable way for me to gauge how I'm doing. Are the concepts I think are important showing up? Are there any major communication issues? What about individuals: are some struggling, and what can I do to help? How will I learn my students' personalities and how they might affect their personal engagement with the material? How will I learn to be a better educator?

Granted, even though 60 feels overwhelming, it's nowhere near 200 or more. I can't even imagine trying to read through that many assignments myself. I'm confident that if I were forced to, I would not emerge with my sanity intact. This problem does not go unaddressed.

With increasingly large classes, it is impossible for most teachers to give students meaningful feedback on writing assignments, he said. Plus, he noted, critics of the technology have tended to come from the nation’s best universities, where the level of pedagogy is much better than at most schools.

“Often they come from very prestigious institutions where, in fact, they do a much better job of providing feedback than a machine ever could,” Dr. Shermis said. “There seems to be a lack of appreciation of what is actually going on in the real world.”

An "A" for recognizing the problem. But the proposed solution is nothing more than a patch. In fact, it's worse, because it is a tool that will enable the continued ballooning of class sizes. And at what expense? Why don't you rethink your solution and have it on my desk in the morning. I can't promise instant feedback, but maybe, just maybe, the feedback provided will be the start of moving in a direction that actually addresses the underlying problems, rather than just using technology to hide them.

How will we build a Third System of education?

I have recently been reading about what Mike Gancarz, in Linux and the Unix Philosophy, calls "The Three Systems of Man". This is, to my understanding, a fairly well-documented and often-observed concept in software design, possibly first referenced by Frederick Brooks in The Mythical Man-Month when he coined "the second-system effect". Gancarz takes the concept further, generalizing it to any system built by humans.

Man has the capacity to build only three systems. No matter how hard he may try, no matter how many hours, months, or years for which he may struggle, he eventually realizes that he is incapable of anything more. He simply cannot build a fourth. To believe otherwise is self-delusion.

The First System

Fueled by need and constricted by deadlines, a First System is born out of a creative spark. It's quick, often dirty, but it gets the job done well. Importantly, it inspires others with the possibilities it opens up. The "what if"s elicited by a First System lead to…

The Second System

Encouraged and inspired by the success of the First System, more people want to get on board, offer their own contributions, and add features they deem necessary. Committees are formed to organize and delegate. Everyone offers their expertise, and everyone believes they have expertise, even when they don't. The Second System has a marketing team devoted to selling its many features to eagerly awaiting customers, and to appeal to the widest possible customer base nearly any feature that is thought up gets added. In reality, most users end up using only a small fraction of the available features of The Second System; the rest just get in the way. Despite enjoying commercial success, The Second System is usually the worst of the three. By trying to appease everyone (and, more often than not, by not really understanding anyone), the committees in charge have created a mediocre experience. The unnecessary features add so much complexity that bugs are many and fixes take a considerable amount of effort. After some time, some users (and developers) start to recognize The Second System for what it is: bloatware.

The Third System

The Third System is built by people who have been burned by the Second System

Eventually enough people grow frustrated with the inefficiencies and bloat of The Second System that they rebel against it. They set out to create a new system that retains the essential features and lessons learned from the First and Second Systems but leaves out the crud that accumulated in the Second System. The construction of a Third System comes about either as a result of observed need or as an act of rebellion against the Second System. Third Systems challenge the status quo set by Second Systems, and as such there is a natural tendency for those invested in The Second System to criticize, distrust, and fear The Third System and those who advocate for it.

The Interesting History of Unix

Progression from First to Second to Third System always happens in that order, but sometimes a Third System can reset back to a First, as is the case with Unix. While Gancarz argues that current commercial Unix is a Second System, the original Unix created by a handful of people at Bell Labs was a Third System. It grew out of the Multics project, which was the Second System solution spun from the excitement surrounding the Compatible Time-Sharing System (CTSS), arguably the first timesharing system ever deployed. Multics suffered so much from second-system syndrome that it collapsed under its own weight.

Linux is both a Third and a Second System: while it shares many Second System-like properties with commercial Unix, it is under active development by people who came aboard as rebels against Unix and who put every effort into eliminating the Second System cruft associated with its commercial cousin.

Is our current Educational Complex a Second System?

I see many signs of the second-system effect in our current educational system. It is designed and controlled by committee, constructed to meet the needs of a large audience while failing to meet the individual needs of many (most?). Solutions to visible problems are also determined by committee, and patches to those solutions serve only to cover up symptoms. Addressing the underlying causes would require asking some very difficult questions about the nature of the system itself, something that those invested in it are not rushing to do.

Building a Third System

What would a Linux-esque approach to education look like? What are the bits we would like to keep? What are the ugliest pieces that should be discarded first? And how will we weave it all together into a functional, useful system?

Structure, Language and Art

In a recent post, tylera5 commented that the last time he wrote poetry was in high school, and that he wasn't expecting to have to write a poem for a programming course. I got the idea for a poetry assignment from a friend of mine who teaches a biological science course. She found that the challenge of condensing a technical topic into a 17-syllable haiku really forces one to think critically about the subject and filter through all the information to shake out the key concept. And poems about tech topics are just fun to read!

I think the benefit is even greater for a programming course. As tylera5 mentioned, both poems had a structure, and he had to think a bit about how to fit his thoughts into the structure dictated by the form, whether the 5/7/5 syllable structure of a haiku or the AABBA rhyming scheme of a limerick.

Poetry is the expression of ideas and thoughts through structured language (and the structure can play a larger or lesser role depending on the poet and the type of poetry). Programming is also the expression of ideas and thoughts through structured language. The domain of ideas is often more restricted (though not necessarily; this article and book could be the subject of a full post in its own right) and adherence to structure is more strict, but there is an art to both forms of expression.

Are there artistic and expressive tools in other STEM topics as well?

What Makes Good Software Good?

On the first day of class (ECE2524: Introduction to Unix for Engineers) I asked participants the open-ended question "What makes good software good?" and asked them to answer both "for the developer" and "for the consumer".

I generated a list of words and phrases for each sub-response and then normalized it based on my own intuition (e.g. I changed “simplicity” to “simple”, “easy to use” to “intuitive”, etc.). I then dumped the list into Wordle to generate these images:

Word cloud: Good Software for the Consumer

Word cloud: Good Software for the Developer
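For the curious, the "normalization" step was nothing fancier than the sketch below; the alias table and responses here are invented for illustration, the real lists came from the in-class survey, and the word:weight output format is (if I remember right) what Wordle's advanced input page accepts:

```python
# Collapse near-synonyms onto a single word, then count frequencies.
# Aliases and responses below are made-up examples, not the real data.

from collections import Counter

ALIASES = {
    "simplicity": "simple",
    "easy to use": "intuitive",
    "user friendly": "intuitive",
    "fast": "responsive",
}

responses = ["simplicity", "easy to use", "reliable", "fast",
             "simple", "reliable"]

normalized = [ALIASES.get(r.strip().lower(), r.strip().lower())
              for r in responses]

# One "word:weight" pair per line, ready to paste into a word-cloud tool.
for word, count in Counter(normalized).most_common():
    print(f"{word}:{count}")
```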

For a future in-class exercise I plan to ask participants to link the common themes that appear in these word clouds back to specific rules mentioned in the reading.

A Matter of Standards

Last night on my bike ride home from PFP class, I mentally prepared a "to-do list" of things to get done in the couple of hours I'd have before getting too tired to be productive. In a classic example of the story of my life, all that mental preparation went out the window when I finally arrived home, checked my email (probably mistake #1: checking email wasn't on the original to-do list), and read a message from a student in the class I'm teaching, ECE2524: Introduction to Unix for Engineers.

On the face of it, the question seemed simple: "How do I display a certain character on the screen?" Furthermore, they noted that when they compiled their program on Windows it worked fine and displayed the character they wanted, a block symbol: ▊, but when compiling and running on Linux the character displayed as a question mark ('?').

Now, before you get turned off by words like "compile" and "Linux", let me assure you, this all has a point, and it relates to a discussion we had in PFP about "standards for the Ph.D." Plus, it resulted in one of my favorite methods of procrastination: exploring things we take for granted and discovering why we do things the way we do.

After some googling around I came across this excellent post, from which I pulled many of the examples that I use here.

The problem was one of standards, but before we can talk about that we need to know a little bit about the history of how characters are stored and represented on a computer. Even if you aren't a computer engineer you probably know that computers don't work with letters at all; they work with numbers, and they work with numbers represented in base 2, or binary, where '10' represents 2, '11' represents 3, '100' is 4, and so on. And if you didn't know some or any of that, that's perfectly ok, because you don't actually need to know how a computer stores and manipulates information in order to use one any more. But back in the early days of computing, you did.

Also important for the story: back in the early days of computing the kind of information people needed to represent was much more limited. Pictures and graphics of any kind were far beyond the capabilities of the hardware used to represent information; in fact, early computer terminals were just glorified typewriters, only capable of representing the letters of the classical Latin alphabet, a-z and A-Z, the numbers 0-9 and, because much of the early development was done in the United States, the punctuation used in the English language. To represent these letters with numbers a code had to be developed: a one-to-one relationship between a number and a letter. The code that came into widespread use was called the American Standard Code for Information Interchange, or ASCII.

ASCII chart

This was a nice code for the time: with a total of 128 characters, any one character could be represented with 7 binary bits (2^7 = 128). So, for instance, 100 0001 in binary, which is 65 in good ol' base 10, represents upper case 'A', while 110 0001, or 97, represents lower case 'a'. For technical reasons it is convenient to store binary data in chunks of bits totaling a power of 2. Seven is not a power of two, but 8 is, and so early computers stored and used information in chunks of 8 bits (today's modern processors use data in chunks of 32 or 64 bits).
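If you have a Python prompt handy you can poke at this letter-to-number relationship directly (a quick illustration, nothing course-specific):

```python
# The ASCII relationship between characters and numbers, via ord()/chr().
print(ord('A'), bin(ord('A')))  # 65  0b1000001   (7 bits)
print(ord('a'), bin(ord('a')))  # 97  0b1100001
print(chr(65), chr(97))         # A a
```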

Well, this was all fine and good: we could represent all the printed characters we needed, along with a set of "control" characters used for other purposes related to transmitting data from one location to another. But soon 128 characters started to feel limiting. For one thing, even in English it is sometimes useful to print accented characters, such as the é in résumé. People noticed that ASCII only used 7 bits, but recognized that information was stored in groups of 8 bits, so there was a whole extra bit that could be used. People got creative and created extended ASCII, which assigned symbols to the integer range 128-255, thereby making complete use of all 8 bits while taking care not to change the meaning of the lower 128 codes. So, for instance, 130 was now used to represent é.

The problem was that even 256 characters is not enough to represent the richness of all the human languages around the world, and so as computer use became more prevalent in other parts of the world, the upper 128 codes were used to represent different sets of symbols. For instance, computers sold in Israel used 130 to represent the Hebrew letter gimel (ג) instead of the accented é. At first everyone was happy: people could represent all or most of the symbols needed for their native language (ignoring for the moment Chinese and Japanese, which have thousands of different symbols, with no hope of fitting in an 8-bit code).
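Python still ships codecs for many of these old code pages, so (assuming I have the right tables: cp437 for the original IBM PC character set, cp862 for DOS Hebrew) you can replay the confusion directly:

```python
# One byte, two code pages, two different characters.
byte = bytes([130])          # 0x82, one of the "upper 128" values
print(byte.decode('cp437'))  # é  -- what the American sender meant
print(byte.decode('cp862'))  # ג  -- what showed up on the Israeli machine
```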

Then the unthinkable happened. The Internet, and more to the point email, changed the landscape of character representation, because all of a sudden people were sending and receiving information to and from different parts of the world. So now, when an American sent their résumé to a colleague in Israel, it showed up as a rגsumג. Whoops!

But what to do? At this point there were hundreds of different "code pages" used to represent a set of 256 characters with 8 bits. While the lower 128 codes remained mostly consistent between code pages, the upper 128 were a bit of a free-for-all. It became clear that a new standard was needed for representing characters on computers, one that could be used on any computer to represent any printed character of any human language, including those that could not easily be represented with only 256 characters.

The solution is called Unicode, and it is a fundamentally different way of thinking about character representation. In ASCII, and in all the code pages developed after it, the relationship between a character and how that character was stored in computer memory was exact (even if different people didn't agree on what that relationship was). In ASCII, an upper case 'A' was stored as 0100 0001, and if you could look at the individual bits physically stored in memory, that is what you would see, end of story. Unicode instead relates letters to an abstract concept called a "code point": a Unicode A is represented as U+0041. A code point does not tell you anything about how a letter is stored as 1s and 0s; instead U+0041 just means the concept or idea of "upper case A", likewise U+00E9 means "lower case accented e" (é), and U+05D2 means "the Hebrew letter gimel" (ג). You can find the Unicode representation of any supported character on the Unicode website, or, for quick reference, at a variety of online charts, like this one.

But remember, the Unicode representations are associated with the concept of the letter, not with how it is stored on a computer. The relationship between a Unicode value and a storage value is determined by the encoding scheme, the most common being UTF-8. A neat property of the UTF-8 encoding is that it is backwards compatible with the 128 original ASCII characters, so if those are the only characters you are using, they'll show up just fine in older software that doesn't know anything about Unicode and assumes everything is ASCII.
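A quick way to see the difference between a code point and its stored bytes (again in Python, which conveniently exposes both):

```python
# ord() gives the abstract Unicode code point; .encode() gives the bytes
# that UTF-8 actually writes to disk or sends over the network.
for ch in ('A', 'é', 'ג'):
    print(ch, hex(ord(ch)), ch.encode('utf-8'))

# A  0x41   b'A'         -- plain ASCII survives unchanged
# é  0xe9   b'\xc3\xa9'  -- two bytes in UTF-8
# ג  0x5d2  b'\xd7\x92'  -- also two bytes
```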

I know I’m risking losing my point at this point, but one last thing.  Right click on this webpage and click “View Page Source”.  Near the top of the page you should see something that looks like

<meta charset="UTF-8" />

or

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

This is the line that tells your web browser what encoding scheme is used for the characters on this web page. "But wait," you might say, "isn't that a self-reference problem? To write out 'charset=UTF-8' you need to pick an encoding first, so how can you tell the web browser what encoding you're using without assuming it already knows?" Well, luckily, all the characters needed to write out the first few header lines, including "charset=UTF-8", happen to be contained in the lower 128 characters of the original ASCII specification, which is the same as UTF-8 for that small range. So web browsers can safely assume UTF-8 until they read a line like <meta charset="UTF-16" />, at which point they will reload the page and switch to the specified encoding scheme.

Ok. So where the heck was I going with this? Well, for one thing, the history of character representation is quite interesting in its own right: it highlights various aspects of the history of computing and sheds light on something we all take for granted now, namely that I can open a web page on any computer and be reasonably sure that the symbols used to represent the characters displayed are what the author intended.

But it also highlights the importance of forming good standards, because without them it is difficult to communicate across boundaries. Standards don't need to specify the details of implementation (how a character is stored in computer memory), but at the very least, to be useful and flexible, they need to specify a common map between a specific concept (the letter 'A' in the Latin alphabet) and some agreed-upon label (U+0041).

Currently, we don't really have a standardized way of talking about a Ph.D. What is a "qualifier exam"? "Prelims"? A "proposal"? Any of these could mean something different depending on your department and discipline. While trying to standardize details such as "how many publications" or "how many years" or "what kind of work" across disciplines would be difficult at best, and nonsensical in many cases, we could start standardizing the language we use to talk about the parts of the Ph.D. process that are similar across fields.

And incidentally, this is why I still haven’t finished grading the stack of homeworks I told myself I’d finish last night.

And for what it's worth, the answer to the student's question was to use the Unicode representation of the ▊ symbol, which is standardized, rather than the extended-ASCII representation, which is not a standard way to represent that symbol.
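In Python that looks something like the snippet below (hedging slightly on the exact code point, which I believe is U+258A, "LEFT THREE QUARTERS BLOCK"); on a terminal using a UTF-8 locale, the default on modern Linux, it displays the same glyph everywhere:

```python
# Print the block character by its Unicode code point rather than by an
# extended-ASCII byte value that means different things on different systems.
print('\u258a')           # ▊
print('\u258a'.encode())  # b'\xe2\x96\x8a', the UTF-8 bytes actually sent
```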

 

Industrialized Learning: Knowledge != Information

To comment on Dan's post titled Industrialized Learning??: I agree that the industrialization of the search for knowledge is a scary thing indeed, and in many respects the structure of our education system has suggested a trend in that direction (for more on that, watch this excellent video). However, I don't think that is necessarily Google's goal. As Carr mentioned, Google's mission is "to organize the world's information and make it universally accessible and useful." Organizing information, and creating tools to systematically archive and find information, is not the same as industrializing the search for knowledge. In fact, I would argue that the search for knowledge benefits if all the world's existing information is organized in a systematic way that makes it easy for anyone to access. Libraries have had a similar goal long before Google came around.

Now certainly, the way our information is organized and the way we search for it does change the way we think about information; I think that was one of the key points in Carr's article. However, changing the way we think isn't in and of itself a bad thing, and it is in fact a continuous process that has been a reality since we were able to stand upright, freeing our hands for tasks other than mobility.

To be clear, I am not suggesting we give Google (or any other organization that deals with our information) carte blanche when it comes to the handling of the world's information. However, it is important to keep in mind that while information is a component of knowledge, information alone does not define knowledge. The fear that the standardization and systematization of tasks will turn humans into mindless automatons is certainly something to think about, and there is plenty of evidence from the Industrial Revolution that that is indeed a risk. However, the same economic forces that favor standardizing and systematizing a task also favor replacing a human automaton with a robot designed to complete that task. Of course, we now face a new problem as a result: how do we employ all the people whose jobs have been replaced by robots? We definitely need to have that discussion, but I don't think the answer is to fight the systematization of tasks and put people back in those jobs, but rather to focus on what we are still better at than any algorithm: creative thinking, imagination, and the search for knowledge.

The Freedom to be “Technologically Elite”

Also in response to Kim's recent post: I think the conversation about access and the issue of digital inclusion is a very important one to have, and we need to continue to be aware of how the tools and technologies we use may include or exclude people. I would like to talk a bit about the concept of the "technological elite" that Kim brought up. It's important to be aware that it is possible for an elite minority to control the tools the majority comes to depend on, but that is not currently the case, and in fact I would argue that it will only become a reality if the majority allows it to happen. As Jon Udell mentioned in his conversation, the Internet itself has always been, and continues to be, an inherently distributed technology. No single organization, corporate or governmental, owns or controls it. There have been attempts, and there will continue to be attempts, to restrict freedoms and tighten control, like the recent SOPA/PIPA legislation, but it is our responsibility to continue to be aware of those attempts and fight them.

Many popular software tools in use, including WordPress, which powers this blog, are free and open source. This means that anyone can take a look at the source code to learn what is going on behind the scenes, and in many cases modify and improve the tool for their own or public use. The language WordPress is written in, PHP, is not only open source, but there is a plethora of free tutorials online for anyone interested in learning how to program in it. The database used by WordPress to store and retrieve content, MySQL, is currently open source, though the project itself was originally proprietary (another relational database management system, PostgreSQL, has been open source for its entire lifetime and in many cases can be used as a drop-in replacement for MySQL). The majority of servers powering the Internet run some version of the Linux operating system, itself freely available and open source.

Each of these projects, at the various layers that build up to form the tools we use, is generally well documented, with enough information freely available to allow anyone who wants to become an expert in its use and design. Now of course, not everyone will become an expert, and the experts on any one project are not necessarily experts on any other. But specialization has allowed us to advance as a society in a way that would not be possible without it.

And because I love food:

When many of us began specializing in fields that did not involve agriculture and food production, we became dependent on those who did for our very survival. Yet I can't remember the last time I heard anyone call farmers members of the "Agricultural Elite". As with the Internet tools I've mentioned, any of us has the agency to become an expert in farming if we so choose.