I know I seem to harp on this subject. However, it is something that has been on my mind pretty much all semester.
At the beginning of the semester I viewed online interactions as bad, and here’s why: I lose some inhibitions. In person, I hate confrontation . . . so when I need to confront someone, or if I just need to say something not complimentary, I hedge. And hedge. And hedge. And I try to word it in the nicest way possible, but then it probably doesn’t come out clearly. But online, with that face-to-face interaction out of the way, I don’t tend to hedge (at least, not as much). This has gotten me in trouble at times, when I’ve dashed off a nasty comment on a Facebook post, or snarkily responded to someone in some other version of an online conversation.
But those interactions aside, I’m actually beginning to see the benefit of online conversation for someone with a personality like mine. I’m more assertive online. I’m also more honest with friends who send me writing samples or grant proposals for editing and/or review (though I believe that, for the most part, I am kindly so). I noticed part of this phenomenon back in 2008, when I was reviewing proposals for a federal anti-drug grant program. It was a blind review – I didn’t know who they were, and they didn’t know who I was. So I was honest. Fair, but honest. I pointed out the flaws in their proposals, praised what deserved to be praised, and gave recommendations to the program director for who should be awarded the grants. At the time I was simply surprised at my own professionalism. But I never really fully put it together (until recently) that this anonymity gave me boldness that I don’t otherwise have.
So, the anonymity online gives me confidence…online. I have been trying to think of how to translate that into “real life.” (Why hello, there, Turkle! Didn’t see you there…) At first, I placed the blame on others, annoyed by the fact that we often don’t respect each other’s area of expertise. And this is true. However, I can’t blame it all on them; I have to accept the fact that my own hesitancy (and, indeed, lack of confidence) is undermining my own authority. Online, I can say, “This is my experience. It has taught me ___. And from that, I can tell you ___.” And if I word it correctly, it will gain me respect, and my ideas will be “heard.” In person, those words (and my ethos) can be negated by my lack of confidence. (Hmmm…sounds like the canon of delivery?)
I haven’t yet figured out how to translate this online experience into the rest of my life, but I can at least say that I’m viewing online interactions a little more positively now. Because if I don’t let myself get carried away, these online interactions might just help build me a spine.
Those who keep up with theological debates have probably heard about John Piper’s recent claim that reading commentaries written by women is okay because, basically, she isn’t standing in front of you. For those unfamiliar with the context in which he is making this statement, allow me to fill you in a bit . . .
Though there are people all over the continuum on the role of women in the church, they tend to sort into two primary categories: those who think women are permitted to teach in the church, and those who believe it is “unbiblical.” Like I said, there are people all over the continuum on this, so I don’t mean to create a false binary; however, the “culture wars” within Christianity tend to set up those two options as the sides from which to choose.
John Piper – a well known pastor and author – falls into the camp of those who believe women should not teach in the church. In a recent podcast, however, he states that it is permissible to read women’s commentaries on the Bible as long as the man doesn’t “feel” that he is under the authority of the woman who wrote the commentary. The woman “teaching” in this context is okay because she is not directly in front of the reader (“she’s not looking at me…”). This “phenomenon” of writing, he claims, “takes away the dimension of her female personhood.”
This is a rather problematic view.
Several bloggers/writers/commentators have already chimed in on this, so I won’t rehash a lot of those issues. Rachel Pietka – a graduate student I would very much like to get to know – discusses these ideas in light of the history of the woman’s body, referring to Lindal Buchanan’s book Regendering Delivery, which discusses how women’s thoughts were historically acceptable long before their bodies were. For more discussion about that, I recommend reading her article. In this post, I want to talk about the implications of this idea for online writing.
I am actually wondering if the internet is, in some way, a gender equalizer. Though you can often detect someone’s gender by her user name, avatar, or profile picture, the fact remains that her body is not in front of you. A portion or picture of her body, perhaps…but that bodily presence is still removed. Is it possible that some people with gender biases (because, I can admit, it might go the other way as well) might suspend some of those biases if the body is not present?
I have had a few interactions with men where the in-person interaction was quite contentious, but interactions via email were perfectly polite. I hadn’t really thought about those interactions as different in those two realms until I read Pietka’s post. This could be coincidence, of course, but the vast difference between these two types of interactions makes me wonder: is it this “removal of female personhood” that makes the difference?
What do you think? Have you noticed anything along these lines with your own online interactions?
Attention, Digital Self classmates!
Here is a helpful infographic for who uses social media, and which platforms are more appealing to different demographics, in case any of you could use this data for your projects. This might also be helpful for our conversations surrounding Sherry Turkle’s book this evening.
One thing I do find missing from this graphic, and from Tufekci’s article sort of refuting Turkle’s claims, is a discussion of how social media is used by people in poverty. This graphic doesn’t include anyone with an income below $30,000 per year. Tufekci touched on it briefly when she talked about how reliance on geographic proximity is lessened with the availability of online connections, but only to briefly mention that those who are not connected might be at a disadvantage. With all of the progress made with technology, is it possible that we are widening the gap between the “haves” and the “have-nots”?
Ricky Gervais was on The Daily Show last week, and he discussed his YouTube channel, as well as Twitter. He argues that YouTube is the next step, as TV is dead. Also, he has some funny (and slightly insulting) commentary regarding “idiots” on Twitter (starting at 4:30).
It’s not a very long interview, and I think it’s worth the watch and some pondering…
Even with a WordPress theme, I feel like I’m in over my head.
As you can see, there’s just way too much SPACE. This is a tiny little clip, but still…it’s just really spread out. I thought it looked weird, so I tried a few other themes and settled (for now) on Pinboard:
This, too, isn’t exactly my first choice for a web design…but I like it better. I’m considering changing the background to a different color so it doesn’t seem so…I don’t know, stark?
The problem, it seems, is lack of visuals. I’m not a visual design person, and my work doesn’t lend itself well to visuals. So I would have to use somewhat unrelated visuals, I suppose. It seems really boring without something. But what? What can I use that’s not created by my work?
I’m open to suggestions, y’all. Kinda feeling stuck here.
Also: I can’t figure out how to add any little widgets or whatever they’re called. Hopefully that problem will be solved in class tonight.
Y’all, building a website is HARD.
I find the phenomenon of gendering technology extremely interesting. The best example I can think of is the iPhone’s Siri. A phone is of course an inanimate object, but I believe many of us think of our phone, especially when using Siri, as female. When someone asks me if I know how to get somewhere or if I need directions, I’ll sometimes say something like, “If I don’t know, Siri will know; she’ll get me there.” Siri makes our phones not only gendered, but also more like a person. The voice doesn’t belong to a static robot.
But I should clarify that Siri is female in the United States, Australia, and Germany, but male in the U.K. and France. I find the cultural research done in each country to determine which voice would be most appropriate extremely interesting. Previously, I had simply assumed that all iPhones came with a female Siri. This article from Tuaw gives some possible explanations for why our phones in the United States are female.
Part of what they discuss is that previous research has shown that people simply prefer female voices to male ones. Apparently this is something that starts in our mother’s womb, as Sande reports in the article. It can also be traced back to the fact that many Americans are used to getting telephone assistance from women, because back in the day telephone operators were traditionally female.
I can’t help but question this residual tradition still being perpetuated in our technology: women positioned as the source of assistance. Jobs such as telephone operator and secretary, although they marked the beginning of women moving from the private sphere to the public sphere, still placed them in roles of aiding men. So my question is this: how far have we really come in American culture if we subconsciously still prefer to hear a woman’s voice in the role of assistant?
Perhaps I am reading too much into it. But there is a reason that Apple chose to put a male voice in some countries and a female voice in others, and I cannot help but ask why. It seems that eventually, just like GPS voice directions (which also default to a female voice), we will be able to change Siri’s voice to suit our own gender and accent biases. I am not saying that being able to change a robot’s voice from female to male will culturally change anything. What I am saying is that we should look at our cultural preferences and biases as a way to read between the lines of our own culture.
I’m in that age group that got caught in the middle of all the “tech stuff” and the internet boom. It seems like about half of us kept up with it avidly – these folks learned how to code and are extremely tech savvy. The other half – the half I’m in – either ignored most of it or pretty much keep up with whatever they absolutely have to in order to get by, or to do their job. So here I am again, trying to keep up, by learning how to build my own website. Now, it’s not all that much of a great movement into tech savvy-ness, as I’m using a WordPress theme for my website, and so I’m not entering in a bunch of code in order to build something from scratch. But still – it’s a step, and it’s kinda scary and intimidating.
So, I thought I’d sorta chronicle my experience doing this, as a novice attempting to try a new sort of technology for the first time. Here’s what I have done so far:
(1) Purchased a domain name. Our professor for our Digital Self class, Quinn Warnick, told us of a cheap deal for doing this, so I thought – what the heck? I really should have one by the time I go on the job market, so why not start building it now, and keep adding to it throughout the years?
(2) Installed a WordPress theme. Not sure if I will stick with it, but it’s something to start with, and it’s clean-looking.
(3) Installed a Twitter plugin. Have no idea what to do with it yet, though. Can’t figure that out.
(4) Started building some pages. So far I have a bio, and my CV – which I cut and pasted, and so I have to do some tweaking.
Emotions on this so far: mixed. I keep coming up against things that are confusing, but then other parts are rather intuitive. It’s a mixed bag. One thing I am: surprised. When editing the CV, I actually found it easier to do in HTML. We’ve been learning a bit of this in class, and I was actually able to sort of understand it and fix a few formatting problems that way instead of with the “visual” box. I know, I’m as shocked as you are.
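For anyone curious what that looks like, here is a small sketch of the kind of HTML fix I mean. The entry details below are made up for illustration; the point is that a cut-and-pasted CV often arrives as a tangle of leftover styling tags, and the Text (HTML) view lets you rebuild it as simple, clean markup:

```html
<!-- Pasted text often comes through wrapped in stray spans and divs. -->
<!-- Rebuilding the entry as a plain heading and list fixes the spacing. -->
<h3>Education</h3>
<ul>
  <li><strong>Ph.D. (in progress)</strong>, Rhetoric and Writing</li>
  <li><strong>M.A.</strong>, English</li>
</ul>
```

In the WordPress editor, switching from the Visual tab to the Text tab shows this markup directly, which is where these little repairs happen.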
More adventures in website building to come!
My spouse, Andy, used to always have a lot of old music-oriented magazines lying around, because he worked for Guitar Center back in Ohio. The magazines were primarily focused on recording, live audio, djing, and the like. Most of them didn’t make the move to Virginia, but a few weeks ago I picked up one that did, an issue of UK-based Computer Music from last April (2012), and read “Get Real!”, an article about “programming your own totally convincing ‘live’ drum tracks.” The article includes screen shots of midi controls and descriptions of the drum fundamentals as played by a human drummer, suggestions for how to make the drum tracks sound realistically dynamic, etc. The article ends with step-by-step instructions for six different “drum rudiments.”
Pull-out quote from the article: “No drummer can hit more than two drums with their sticks at any one time.” [sic? Is that "their" okay in British English? Personally, I don't mind it--English is a ridiculously clunky language when it comes to grammar.]
So, I’ve been thinking about this article lately when I hear electronic (dance) music, and I’ve realized that organic or realistic sounding drum tracks are part of what I hear in music that I tend to think is good. It was a bit of a revelation, because I have trouble explaining sometimes specifically how I discern what I think is good EDM from crap. I am trying to decide now if that’s always the case for me, or if I do like some music that isn’t realistic.
This raises a question, and prompts one more comment.
First, is it problematic to expect a digital medium to represent or emulate an organic one? Does this desire extend to other technologies?
Second, I’m interested in learning how to create music; I’ve been learning how to edit music and do a little bit of remixing for the last couple of years, which I mentioned in a previous post about my workflow. Ultimately, I’m realizing, my most sincere interests in the digital lie in the realm of multimedia–I want to design, create, record, & edit audio, visual, and multimodal work–music, podcast, video.
I started a project the other day with one tiny piece, using the most basic technologies at my disposal: I used PhotoBooth on my MacBook (really basic, I know) to film myself messing around with a song at the very beginning stages of choreographing it, before I had even decided how to edit the song. I was wearing headphones, so the video has no sound. Then I used iMovie to edit a short video of my favorite part, attempting to sync up the audio to my dancing. It’s a very early part of a work-in-progress: the sound isn’t really synced, and I can see my body at an early stage of the work. But I wanted to have a continuing record of my progress. The composing process.
A few weeks ago, I read the introduction to Howard Rheingold’s The Virtual Community and compiled some interesting quotes in a draft blog. Today, I revisit those quotes and comment on them.
“The odds are always good that big power and big money will find a way to control access to virtual communities; big power and big money always found ways to control new communications media when they emerged in the past. The Net is still out of control in fundamental ways, but it might not stay that way for long.”
So true. These days, it is clear that “big power and big money” are trying to seize control of something that they cannot understand, and have been doing so since Rheingold first experimented with WELL. In particular, attempts to control how ideas are shared online seem desperate and impotent. As Mr. Universe says, “You can’t stop the signal.” As much of a technophobe as I seem to be, I am in the camp of folks who think that technologies grow and change organically through use (hey, like genres and other tools, right?).
“Although spatial imagery and a sense of place help convey the experience of dwelling in a virtual community, biological imagery is often more appropriate to describe the way cyberculture changes. In terms of the way the whole system is propagating and evolving, think of cyberspace as a social petri dish, the Net as the agar medium, and virtual communities, in all their diversity, as the colonies of microorganisms that grow in petri dishes. Each of the small colonies of microorganisms–the communities on the Net–is a social experiment that nobody planned but that is happening nevertheless.”
Hey, didn’t I just say something about “growing and changing organically”?
“Panopticon was the name for an ultimately effective prison, seriously proposed in eighteenth-century Britain by Jeremy Bentham. A combination of architecture and optics makes it possible in Bentham’s scheme for a single guard to see every prisoner, and for no prisoner to see anything else; the effect is that all prisoners act as if they were under surveillance at all times. Contemporary social critic Michel Foucault, in Discipline and Punish, claimed that the machinery of the worldwide communications network constitutes a kind of camouflaged Panopticon; citizens of the world brought into their homes, along with each other, the prying ears of the state. The cables that bring information into our homes today are technically capable of bringing information out of our homes, instantly transmitted to interested others. Tomorrow’s version of Panoptic machinery could make very effective use of the same communications infrastructure that enables one-room schoolhouses in Montana to communicate with MIT professors, and enables citizens to disseminate news and organize resistance to totalitarian rule. With so much of our intimate data and more and more of our private behavior moving into cyberspace, the potential for totalitarian abuse of that information web is significant and the cautions of the critics are worth a careful hearing.”
Right, so this is one of my other concerns about big power and big money: this Orwellian sense that we’re all being convinced to watch each other and suspect each other, monitoring, not for our own safety, but truly on behalf of something big and sinister. (My tendencies toward reading conspiracy theory through the believing lens showing much here?)
“We can’t do this solely as dispassionate observers, although there is certainly a strong need for the detached assessment of social science. Community is a matter of emotions as well as a thing of reason and data. Some of the most important learning will always have to be done by jumping into one corner or another of cyberspace, living there, and getting up to your elbows in the problems that virtual communities face.”
That’s interesting–although I would lean toward learning about virtual communities through ethnographic research rather than research that looks like social psychology. I’m just not sold on the idea that we learn so much through numbers and “detached assessment.” (Sorry.)
“Technical bridges are connecting the grassroots part of the network with the military-industrial parts of the network. The programmers who built the Net in the first place, the scholars who have been using it to exchange knowledge, the scientists who have been using it for research, are being joined by all those hobbyists with their bedroom and garage BBSs.”
It’s really interesting to see the ways in which these sometimes conflicting efforts are reflected in the Internet we know today.
Perhaps it’s the romantic English major in me, maybe it’s just my own personal nostalgia, but the idea of the extinct paper book scares me. This fear is only heightened by the fact that I am getting more and more used to digital books, and finding them, in some cases, more helpful. For my academic purposes, having things digital makes it just so much easier to look things up if I can’t remember exactly where I read something. The thing is, that wasn’t even a problem I knew I had until the technology came along. In the history of technologies, it seems that many times we don’t even realize that we need something, or that something could be easier, until something comes along that does it, or makes our lives simpler. But I have to ask: at what cost?
As with everything else in life, there are gains and losses that come with technology. Those gains and losses will differ from person to person. For me, as I have stated, my gains are simplicity of research, but also portability and access. However, I feel the loss of an art. And not just the art of the book itself – the cover art, the typography, the putting together of an artifact by a whole process of craft – but the loss of the art that a person can create with it. What makes art, art, is the ability for people to find experience within it. What makes a book come alive for me as a piece of art is not only the content written within it, but the experience of the book itself. Or not just what is intangible, but the tangibleness of a book, what it does to your senses: the smell of the ink, the smell of old pages, the smell of new pages, the way your body curves around a book in reaction to its shape, length, and weight, being able to actually touch the pages. All of these things create ownership of a book; they make it yours. You have given yourself over to it, and it has given itself over to you.
That’s why I maintain that ebooks are fine, sometimes; like fast food, they are easier to grab when we want something quick. However, in order to maintain a healthy literacy diet – like the recommendation that half of your grains should be whole – I say that half of your books should be printed. Let’s keep the tradition alive. I don’t want to live in a world where my grandchildren don’t know what the hell a book is.
And one last point, nothing decorates a house more beautifully than filled bookshelves.
Just got done reading this great post about how to deal with distractions. The post mostly revolves around one professor’s course that deals with handling the multiple online distractions we have. Interestingly, as if to prove the discussion point here, while I was reading this post and then trying to link up to my blog to post about it, I went through a bunch of different emails, and jumped around between the five different tabs I have open on my browser. <sigh>
Most of us have heard by now the argument that the internet (and other related things such as smart phones, iPads, etc.) can destroy our concentration. (If you are unfamiliar with this conversation, check out Nicholas Carr’s article “Is Google Making Us Stupid?“) Interestingly, I hadn’t heard the argument on the other side of this debate until reading this article. Regardless, though, this professor (David M. Levy of the University of Washington) is implementing techniques with his class – such as meditation and concentrated bursts of activity – to aid concentration. “So many of those debates fail to even acknowledge or realize that we can educate ourselves, even in the digital era, to be more attentive,” he says. “What’s crucial is education.”
If you’re feeling distracted and/or fragmented by all of these technological interventions, I highly recommend reading this article. Whichever side of the debate you fall on, Dr. Levy provides some really helpful tips for improving concentration – whether or not it has been destroyed by Google.
This coming week will be a little sad for my family, as it will be the first time in 5 or 6 years that we have not celebrated Passover with our close-knit community in Toledo, Ohio. Each year, either we would participate in a large seder at a local church, or we would do our “lighter” version – adapted from the 30 Minute Seder. (FYI: It’s not really 30 minutes. It’s closer to an hour – but that’s still WAY shorter than most.)
We have been debating this year whether we should attend the local seder (which will likely be the three-hour version, and at which we know no one), do our own mini-version at home, or just skip it this year. As I was searching around a bit for thoughts on the subject, I saw this report on crowd-sourced haggadot. This NPR segment reports on this website, which has gathered over 2,000 haggadot from different families. You can choose a full haggadah from the contributions, or sort of mash-up different pieces contributed by others.
Kind of a neat example of crowd-sourcing. Also, if you’re interested at all in Jewish traditions or Passover in particular, there are some really interesting contributions people have made, including blessings for the various parts of the meal that are focused on one thing or another (social justice, for example).
Take a look!
Digital Self classmates -
Interesting movie coming out that revolves around many of the issues we have discussed in class:
It comes out in April. Class trip? ;-P
This is an interesting look at MOOCs… Taking these thoughts to heart, I have to agree, and question whether it is a good idea at all to make a composition MOOC. It seems like one of those subjects that would not work well.
What do you think?
The biggest lesson of graduate school for me? You gotta come to terms with how much you don’t know.
You gotta get to a Zen sorta place where that knowledge is a given: there’s way more in the world, in your field, than you even know to ask about. So have another beer and relax, ok?
Or, uh. Try to.
To that end:
I spent this past Thursday at a day-long symposium at Drexel University called Life Online: The Ethics and Methods of Conducting Research in a Digital Age.
Yeah, it was spring break this week. And yeah, I spent it learnin’. Though I look at it as gathering arrows for my dissertation quiver. Because sooner rather than later, I’m gonna have to start doing research rather than just talking about it [ahem], so I say: load ’em up.
A few arrows I came away with:
Of immediate interest for wee young researchers like me is this chart from the Association of Internet Researchers (AoIR). Said chart sketches the different types of data that a researcher might collect online, the venues in which that data might be collected, and the concomitant ethical questions that a researcher might then consider. Interested parties may also find AoIR’s comprehensive Ethical Decision-Making and Internet Research (2012) useful in generating a set of vocabulary for talking about and planning online research projects.
For me, the most useful part of the day was Mary L. Gray’s presentation on IRBs and the difficulty some have in dealing with what she calls “ethnographically-engaged” digital media research.
Dude, Dr. Gray was nine kinds of awesome: amazing research, super-smart as hell, and a great speaker. She was talking about Institutional Review Boards (IRBs), for gods’ sake–snore–but she had the whole room with her from go.
You better believe it.
Somehow, Dr. Gray was cut off for time when other speakers were not and we lost a good 20 minutes of her talk. That was—unfortunate. Especially since some later speakers had time left over. Ah well.
For me, here were the key takeaways from her presentation.
General concepts/questions re: digital research:
- Websites are both texts AND sites; digital media are both a tool AND a location.
- Online research regenerates the question: what constitutes a public space?
- There are no unembodied moments online; the body is always present.
- “The notion of privacy is a privilege,” which—
—holy crap!! One of those things that sounds so obvious and yet, damn.
Central questions re: ethics of online research:
1. Ethical dilemmas are an index of methodological flux/growth in fields of inquiry. Such dilemmas can be generative and productive and we shouldn’t shy away from engaging with them directly.
2. Ethics in online research are ad hoc and (re)constructed: they evolve over time, over the life of a project, and researchers must attend to this evolution.
3. Online researchers should talk through the ethics of a particular project with a trusted colleague, peer, or professor.
AMEN! Especially when your advisor’s own research is generating similar questions.
4. Gaining IRB approval doesn’t signal the resolution of ethical issues around a project. Indeed, Gray argued that the setup of many IRB forms and procedures can obscure, rather than shed light on, ethical questions that can spring up around digital research.
5. Those who study worlds online should not let the computer screen become the sole terministic screen through which they study a given population or community. Gray emphasized the importance of talking to the people whose activities you see online; there’s much that’s lost without pushing into the broader context within which the user’s digital engagement sits.
This last one really got to me, especially because Gray was pretty damn convincing on this point. But such in-world examinations work directly against both my own instincts (eek! people!) and my sense of the “norm” in rhetorically-inclined digital research. Goddamn it. Because of course, the resistant aura that in-world engagement holds in this context is like catnip to me, man.
All in all, I came out with more questions and angst than answers, and that, for me? Is the sign of a day well spent.
I thought this was a really clever example of how a teacher can creatively use technology in the classroom. Enjoy!
One of our assignments in the Digital Self course I’m taking is to analyze our own online identity. You know, the professional one.
And though I made a big deal yesterday of just how cool I am with being read in different ways–in opening myself up to online interpretation–it was a wee bit scary, trolling through my profiles on various sites, kinda like–
And the whole time, I’m thinking: Hmmm–how might someone interpret what they see here, out of context and with no other knowledge of me?
The whole exercise reminded me of this askbox meme on tumblr:
It’s one of those memes I like to ask others, but to which I never have a good answer when someone asks me the same.
I’d like to think I have no shame. Once you’ve said the phrase “riding the gay incest train” in an academic presentation, there aren’t many places left to hide. But I’m curious as to what a Potential Hiring Person would say if all they knew of me was what they read here or saw over on tumblr.
Ain’t gonna stop me from reblogging pictures of the Overlord’s belt buckles, you know. But it’s hard now–er, um–thanks in part to this class, not to wonder.
There’s something about the idea of performativity, about the capacity to reenact different versions of one’s self depending upon the demands (and opportunities) presented by a given situation, that freaks people out sometimes, because–
That is, I think many people believe that they possess a “true” self, an inner rock of being that is distinctly, unequivocally their own.
But to me, the notion of a One True Anything–much less a One True Self–is frankly terrifying.
Maybe it’s the Gorgias lover in me [yes], or the postmodernist [yup], or the wanderlust, but for me, everything is situational.
It’s like Zora Neale Hurston says in her autobiography, Dust Tracks on a Road: “Nothing that God ever made is the same thing to more than one person. That is natural. There is no single face in nature, because every eye that looks upon it, see it from its own angle” (45).
The cynical version of this would be it’s all relative, but that’s not quite what I mean.
I’d say: it’s all kairotic.
Toss these questions online, these musings over identity and performance, and whoa.
Who am I online? Given that there are different versions of me running around on tumblr, on twitter, on AO3, here on this blog: what control do I have over the answer to that question?
Well, as Corinne Weisgerber notes, the answer may be: not a damn lot.
Interpersonal communication research tells us that on social networking sites the people in our network actually co-construct our identities.
Weisgerber uses the metaphor of the multiverse to illustrate this effect: in essence, we’re read repeatedly and from an enormous variety of perspectives by those whom we encounter online. Each of these readers constructs a slightly different version of our “online self” based both on information about us they’ve collected [and we've shared] and on the particular screen through which we’re read.
Thus, who we are online is not one being but rather a constantly evolving set of selves whose outlines we may sketch through the information that we provide but whose features and characteristics are ultimately created by each of our readers.
It’s like this: online, I’m Batman.
And I’m Batman from Earth 2 (who gets to marry Selina):
The Batman of Zur-En-Arrh, as re-conceived by Grant Morrison, is a man for whom all of Batman’s other selves are real. He’s the crazed, embodied Bat whose head is drowning in meta and experiences and identities that his body, his brain, can’t hope to contain.
So I am all of these selves, all at once. Along with many more versions of myself that live in the minds of my readers, as it were, that I’ll never meet, much less name.
I find this vaguely comforting, this kind of being and not-being.
As a writer, I’m always quick to dismiss the notion of authorial intent: to my mind, once I write something and put it into the world, it’s the reader’s to play with, not mine to control. I can give it all my attention in the crafting, the furious typing, the inevitable swearing. But once it’s published, out there in the ether for anyone to see, it’s not mine anymore.
Certainly, the stakes are higher in academia than in fan fiction: interpretations of my Professional Online Selves may help or hinder my ability to feed myself, for example. But the multiverse metaphor is freeing for me because it underscores the limits of control I have over this particular form of text, of Online Me(s). It doesn’t give me a free pass to openly not give a shit about what I post or who I speak to or what I choose to say–though that’s tempting, believe me–but it does take the pressure off a bit.
Because let’s face it: I think of myself as the Batman of Zur-En-Arrh, but most folks have no idea who he is. And I’m ok with that.
As sort of a follow-up to my post about Hate Online, I decided to investigate an article provided by one of my professors (Digital Self classmates – it is on the sidebar of our class site). In this blog post, Barry Ritholtz details the reasons he is turning off blog comments (and links to why some others have as well). My two favorites:
1.) Were I to shut down my comments, it would be for a reason I have not seen enumerated elsewhere: The intellectually disingenuous rhetorical sleight of hand that has become a substitute for legitimate debate. (I love this sentence so much)
2.) Therein lay the problem: A small group of trolls somehow confuse these sites for a town square. It is not. This blog is not a forum where I am obligated to give equal time to every crackpot conspiracy theorist, birther or intellectually lazy wanker out there. To be blunt, I don’t give a flying fuck at a rolling donut about these jackhole’s opinions. These folk need to rapidly disabuse themselves from believing other people’s blog’s are an open invitation for whatever ignorance or ill thought out nonsense they are peddling.
Now, of course, this is obviously a post written at a high point of frustration, so the language is a bit…extreme at points. And I am not a regular reader of this guy’s blog, so I don’t really know if his claims to reasoned discourse are fair and accurate. But the man has a point. It’s not for nothin’ that my friends and I have a mantra of “never scroll down.” All you have to do is click on the comments section of pretty much any YouTube video to learn that. But I wonder if shutting down comments is the answer. Yeah, I hate it when someone comes trollin’…but isn’t part of the point of a blog the interactive nature of it?
My blog doesn’t have a broad readership, so I haven’t really had to deal with this (though I have deleted a Facebook status or two because I couldn’t believe how my “friends” were commenting on it). But I wonder, if I had this issue – would I block comments? Delete the trolls? Ignore it all and move on?
For the undergraduate business communication course I taught last spring, I required my students to do “THE job correspondence project,” which, from what I have seen, is a standard: find a job posting (I strongly encouraged them to apply for jobs or internships that they really needed); revise the cover letter and resume and tailor them to the job; write a short rhetorical analysis of their documents in light of their audience.
I threw a couple of wrenches into the works, though: We read and discussed scholarly articles on resume rhetoric (I found these two, by Randall L. Popken, of particular interest), and I required a trip to Career Services for conferences. Each student wrote a short report on the visit, in which s/he observed and analyzed the discourse used to talk about the resume and cover letter. In their reports, of course, the students noted the going metaphor from these conversations, which identified them as commodities: “sell yourself,” “marketable skills,” etc. By extension, many of the folks my students met with advised that they control their online identities or “brands.”
Many (most) of my students did not find this language problematic. While I agree that it might not be in one’s best interest to allow one’s online identity to run free and willy-nilly, especially while “on the market” (there’s another one), I do find the commodity/brand trope itself uncomfortable, even offensive.
I recently read an old class blog post by a professor at St. Edward’s University who agrees with me (or, at least she did then). Here’s an excerpt:
I argue that being too concerned with branding restricts the self. Just take a look at U.S. leaders who conflate themselves to the ideology of the party even when it’s clear their own beliefs are far more diverse and subtle. This has led us to distrust elected officials as we see them as merely parroting talking points. Now compare that to a person like Steve Jobs. Jobs refused to be branded. He was not Apple. He was not Next, or Pixar. He was a unique self, full of contradictions and that’s what humanized him. That’s why we saw the outpouring of support when news of his death spread across the Internet. (Corinne Weisgerber, from “Negotiating Multiple Identities on the Social Web”, Nov. 16, 2011)
I appreciate, and agree with, Weisgerber’s assessment of the concept of self-branding. I like that she uses Steve Jobs as an example. I particularly like that, shortly after the paragraph I quote above, she uses the Heisenberg principle and the idea of multiverses to explain her thoughts, too. (I’ve snagged her links. Really, if you’re reading this post, and you’re not someone who already read hers, you should go read it.)
In her own, broader context, Weisgerber is focused on the multiple identities that we perform both away from the screen and online. What should be clear from the passage I have quoted is that these identities are different aspects of the same complex person (that’s where particles and verses come in). The concept of branding reduces us to one “sellable” identity. As I’ve noted, the commodification aspect presents a problem to me that is layered onto the problem of dimension discussed by Weisgerber.
Interestingly, one concern my students raised was with how Naomi Klein describes (corporate) branding. They were surprised at the research that goes into branding for specific “types” of people, and even found it creepy that their behaviors were aggregated to identify them as likely to consume specific beverages, wear specific brands, etc. . . .
I am starting to wonder if developing a personal online brand simply makes one easier to identify as a consistent “type” receptive to others’ branding practices by aggregators of the data from all of our online identities and their practices. If that’s the case, I am even more determined to remain a human being rather than a branded commodity.