Leonard J. Currie Slides

Leonard J. Currie was an architect and educator who studied with Walter Gropius and established the College of Architecture and Urban Studies at Virginia Tech. Visual Resources in the VT Art + Architecture Library is scanning his slides and presenting them to the VT community via Artstor’s Shared Shelf. Currently on view are the two houses he designed in Blacksburg (1961 and 1981) for his family and an older house on the Northwest Side of Chicago that he renovated and rehabilitated (1965).

His first house in Blacksburg, familiarly known as the “Pagoda House,” is a modern design influenced by the work of Frank Lloyd Wright and the Bauhaus masters. He designed his family’s second house in Blacksburg, made of brick, after living in the brick house in Chicago. It is more secluded and settled than the “Pagoda House,” which is perched on a rise with prominent glass on all four sides.


Cambridge is Strange: #UXLibs Reflections

[Photo: Anthropologists in Cambridge]

Cambridge is strange. One of the brilliant things about hosting the User Experience in Libraries Conference (UXLibs) last week at St Catharine’s College was the unique location’s ability to make everyone immediately feel like an anthropologist newly arrived at a fieldsite.  A sign announced that the college was closed to the public; we were all (with a couple of exceptions) strangers here.  Anthropologists often emphasize taking on the role of a professional outsider: a stranger who is a novice in the local culture and must ask questions of everything in order to understand.  This comes more naturally in an unfamiliar place, but takes conscious practice at home.  Figuring out the norms of St Cats and Cambridge (e.g. Why can’t we walk on that grass?; Who/What exactly is a college fellow?; Punting doesn’t involve a ball?) underscored the ethnographic importance of understanding the particularities of a locality and place.  This theme was central throughout the conference, and Cambridge as a space and place showed through in all of the impressive work and recommendations of the UX project teams.

This should probably come as no surprise; a key tension within ethnographic methods is working out which findings are specific to a fieldsite and which are generalizable.  Everywhere is strange and complicated, but often in similar ways.

As Donna Lanclos pointed out in her keynote, as ethnographic practitioners we need to engage both in ethnography, which focuses on describing and explaining the particular, and in ethnology, which compares and synthesizes.  Over the last few years we’ve made significant strides in library ethnography, but library ethnology is still relatively undeveloped.  One of the things I hope will come out of gatherings such as UXLibs is the creation of international communities and networks of practitioners that can start thinking through comparative and theoretical questions about libraries, user experience, and information in higher education.

Building capacity and infrastructure is part of working toward these goals, and this will require support from library administrations.  Embracing an ethnographic approach can be challenging, often precisely because of the strengths of the method:  it seeks to destabilize by questioning fundamental assumptions about the way things work.

Since many UXLibs delegates are probably heading back to the office today, I thought I’d end this post with some talking points for administrators which may be useful in starting these conversations:

  • Ethnography is not expensive, but it’s also not free.  Some money is required for things like incentives for research participants, transcription, and perhaps some specialized software and hardware.  More importantly, though, ethnography requires time, and maybe quite a lot of time.  This means shifting resources so that people don’t have to take on ethnographic projects over and above their other duties.
  • Qualitative and quantitative approaches are not antagonistic.  Each approach marshals different types of evidence and data.  Both are valid, and both can be representative.
  • Ethnography is a long-term commitment.  It is a practice and a mindset as much as a toolkit of methods.  Ask questions, gather evidence, make changes, repeat.  I like to think in terms of continuous incremental improvement rather than “failing more quickly.”
  • Ethnographic projects need room to ask difficult questions.  Allow everything to be on the table, make crazy-sounding assertions, and say things that might seem heretical to librarianship.

If all this sounds scary, that’s OK.  In fact, this is part of the process.  Remember you’re dealing with a discipline that takes dumping someone in a foreign country and leaving her there alone for a year or two as its mythologized rite of passage.  If it isn’t a little bit scary, you probably aren’t challenging yourself enough.

That’s my jetlagged two cents (two pence?). UXLibs has already had several other excellent summaries from Andy Priestner, Shelly Gullikson, and Ned Potter, which I encourage you to read as well.

Ethical Behavior this Week

The collection and analysis of user-level library data was a central theme at the 2014 ARL Library Assessment Conference (LAC).  This is perhaps not surprising; the recent ACRL Value of Academic Libraries Report emphasized the need for libraries to systematically collect and incorporate large-scale data into their assessment activities, and many libraries (including my own) are actively seeking ways to use data about library services and collections in institutional efforts to better understand and measure student learning and engagement.  Each of the three LAC keynote presentations discussed and demonstrated how library data might be utilized in various approaches to educational analytics in ways that were at once exciting and frightening (unfortunately at the time of writing these presentations are not yet online).  These presentations provoked numerous Twitter discussions, particularly surrounding issues of privacy and the ethics of collecting “big data” about our users.

My fortune cookie on the way home from the ARL Library Assessment Conference underscores the importance of data ethics.

During this discussion, I commented that I felt that there was a somewhat laissez-faire attitude towards user privacy in the LAC presentations, especially with regard to the creation of large systematic datasets.  Some of my colleagues suggested that I was being uncharitable, pointing out that just because privacy protections weren’t discussed during a presentation didn’t necessarily mean that they weren’t in place.  Some of this tension is certainly a result of what Barbara Fister identifies as the two models of research presented to librarians, “the scholar’s way and the corporate way,” and their very different approaches to human subjects research (this tension was also readily apparent in presenters’ choices of models and metaphors throughout the conference).  Nevertheless, it seemed clear to me that our libraries’ technical capabilities have outpaced the sophistication of our ethical conversation surrounding this type of data collection.

In the final keynote, David Kay, a consultant from UK-based Sero Consulting, went so far as to recommend that libraries collect and indefinitely retain transaction-level usage data from throughout their various systems at the level of individual users.  Just to be clear on what this might look like: a dataset containing every search, every resource used, every item record viewed, every article browsed or downloaded, and every book checked out by every library user.  This data would also be collected in such a way that it could be linked to other institutional data such as demographic information, financial aid, measures of student and faculty success (engagement, retention, GPA, grant funding, publication records), and really anything else that a university might track at an individual level.  The creation of this type of dataset is already within our libraries’ technical capacities, and quite a bit of it probably already exists within our server logs. In fact, I’ve had serious discussions about how we might collect this kind of data at my own library (although we have not yet done so).

Before creating these types of datasets, we have an ethical obligation to evaluate their potential risks and benefits to the people whose information they contain.

There are numerous potential benefits to mining large library datasets.  To name a few: providing better services to our students, faculty, and public users; making efficient use of our funds; improving management of our collections; demonstrating the role of libraries in student learning and faculty research; and identifying students at risk of dropping out or in need of additional help.  All of these are important and laudable goals, and I do think that much of the push towards collecting more and more fine-grained educational data is motivated by a desire to help students (although we should also be very aware of the financial and ideological interests at play).

Despite these potential benefits, difficulties soon arise when we begin to evaluate the potential risks of these types of data.  While they might initially be created for benign purposes, as I have argued elsewhere, one problem with creating and storing these data (especially if they are stored in perpetuity) is that it is extremely difficult to evaluate what they might be used for, or more critically, what they might be misused for.  At least initially, much of the analysis and insight listed above would require collecting identifiers that allow the data to be associated with individuals.  Imagine a dataset that contains every topic you ever searched as an undergraduate or graduate student.  Would you want this dataset to be available for data mining and analysis?  Would you trust your university never to sell it or otherwise disclose it?  What if the university decided to provide it to potential employers as evidence of a student’s “preparedness?”  What if it became part of political proceedings?  Or a civil or criminal suit?

Once a dataset exists it is subject to subpoena by law enforcement (IRB consent forms usually warn about legally required disclosure).  There is no researcher-subject privilege, and we should not assume that our universities will be willing or able to resist a subpoena.  Recent events surrounding the Boston College Belfast Project, in which oral history interviews that were intended to remain secret until after a research participant’s death were subpoenaed as part of a British Government murder investigation, have amply demonstrated the potential risks of archived datasets.  Given the FBI and other investigative agencies’ previous interest in library data, it is not difficult to imagine myriad scenarios in which transaction data might be requested.  In my opinion, these types of large datasets therefore represent a significant risk for unintended use and disclosure.

Removing identifying information from stored datasets is often used as a strategy to mitigate or eliminate risks from unintended disclosure.  Unfortunately, even de-identifying a dataset by destroying the links between individually identifying information and other data is possibly insufficient to protect research subjects’ privacy.  Re-identification of research subjects from information contained in datasets is becoming easier, and it would be easier still in the case of a library user dataset, since the source population would be known (i.e. the students enrolled at a university during particular dates).  For this reason, a dataset containing even rudimentary demographic data about students (such as major, graduation year, sex, ethnicity, etc.) might be impossible to de-identify (I’ve discussed this further in a report on data stewardship practices; see p. 94).
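
To make the re-identification risk concrete, here is a minimal sketch (with entirely synthetic records, not data from any real system) of the basic check behind this argument: count how many records share each combination of demographic attributes. Any record whose combination is unique can potentially be matched back to a named student using an outside source such as an enrollment directory; the field names below are purely illustrative.

```python
# Illustrative only: synthetic "de-identified" records with direct identifiers
# already removed. Each tuple is (major, graduation_year, sex, ethnicity).
from collections import Counter

records = [
    ("Physics", 2015, "F", "Hispanic"),
    ("English", 2015, "M", "White"),
    ("English", 2015, "M", "White"),
    ("Architecture", 2016, "F", "Asian"),
    ("Music", 2014, "M", "Black"),
]

combo_counts = Counter(records)

# A record is re-identifiable in principle if no other record shares its
# combination of attributes (a "k = 1" group, in k-anonymity terms).
unique_records = [r for r in records if combo_counts[r] == 1]
print(f"{len(unique_records)} of {len(records)} records are unique combinations")
# Prints: 3 of 5 records are unique combinations
```

In a bounded, known source population like a single university’s enrollment, even this handful of attributes leaves most records in groups of one, which is why simply stripping names and ID numbers is not enough.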

Because of these unknown risks, obtaining consent is another major problem with collecting these types of data.  Like most libraries, my library obtains passive consent for our routine user data collection via our library privacy policy.  This covers data collection for the mission of the library, but doesn’t specify whether linking to other institutional data falls under this mission (incidentally, our policy also explicitly forbids linking user accounts with specific items and records).  There is also no effective opt-out procedure for this policy, except for users to refrain from using the library systems that require a log-in.  While this is the form of consent most web services use for their data collection (via their Terms of Service agreements), it would almost certainly be viewed as coercive by an IRB.

I don’t presume to have immediate answers to all these issues, but in the interest of moving the conversation forward, I will suggest a few possible guidelines for consideration:

  1. Data should be aggregated at a level that balances analytical specificity with user privacy.  For example, electronic usage data might be collected at the resource level rather than the item level, or circulation data at the LC classification level (see the sketch after this list).
  2. Transaction-level data that identifies both user and item should be avoided unless required for a specific and limited purpose.  Systematic collection of this data should not be conducted.  If this data is required, additional measures such as local encryption of the files should be used to protect individuals’ privacy.
  3. Datasets containing user demographic data should not be retained indefinitely and should be destroyed after a reasonable period following the completion of data analysis.
  4. Consent procedures should be reviewed before data collection, and procedures to provide opt-out and/or explicit consent should be developed when necessary.
  5. Libraries should hold our vendors to the same data ethics standards that we adhere to.  Moreover, we should not purchase or otherwise use data that does not meet our ethical standards.
  6. The ALA should consider adding statements on research ethics and data ethics to its Code of Ethics.
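
Guideline 1 can be implemented at the point of collection, so that identifying detail never reaches the stored dataset. Below is a minimal sketch of one way to do this; the log format, field names, and call numbers are hypothetical rather than taken from any actual ILS export. Item-level circulation events are collapsed into LC class counts, and the user identifiers are discarded immediately.

```python
# A sketch of guideline 1: aggregate item-level circulation events up to the
# LC classification level so that no stored row links a user to a specific item.
from collections import Counter

raw_events = [
    ("u001", "QA76.73 .P98"),  # (user_id, lc_call_number) -- hypothetical rows
    ("u002", "QA273 .R67"),
    ("u003", "PS3545 .H16"),
]

def lc_class(call_number: str) -> str:
    """Return the leading letters of an LC call number (its class, e.g. 'QA')."""
    letters = ""
    for ch in call_number:
        if not ch.isalpha():
            break
        letters += ch
    return letters

# Aggregate immediately and keep only the counts; user IDs are never stored.
class_counts = Counter(lc_class(call_number) for _, call_number in raw_events)
print(class_counts)  # Counter({'QA': 2, 'PS': 1})
```

The same pattern applies to electronic usage data: aggregate to the resource level in the collection pipeline itself, rather than storing item-level rows and trusting that they will be scrubbed later.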

Unlike Google and other web services, we have a business model that does not require us to turn our users into commodities.  We should continue to hold ourselves to a higher standard.

 

Coding Library Cognitive Maps

After Donna Lanclos’s recent post on using my library cognitive mapping method, a few people asked me to briefly write up my approach to coding the drawings.

I developed the cognitive mapping exercise based on the sketch maps protocol used by Kevin Lynch in The Image of the City, which was introduced to me by an urban planner I met during my fieldwork on the Polish-German border.  Incidentally, I thought I was the first person to apply this method to libraries, until I ran across Mark Horan’s 1999 article, “What Students See: Sketch Maps as Tools for Assessing Knowledge of Libraries,” which used the same urban planning source materials to develop a very similar approach.

The examples I discuss below are drawn from the ERIAL Project, and the exact instructions I gave students were as follows:

“You will be given 6 minutes to draw from memory a map of the [NAME] Library. Every two minutes you will be asked to change the color of your pen in the following order: 1. Blue, 2. Green, 3. Red. After the six minutes is complete, please label the features on your map. Please try to be as complete as possible, and don’t worry about the quality of the drawing!”

This method assumes that the things people most associate with their “mental map” of the library will appear as elements in the drawing, and that the most important things (or strongest associations) will appear earlier.  By changing the pen colors, this approach therefore captures a temporal dimension alongside the spatial one.

The mapping activity was conducted away from the library building itself, both to obtain a diverse cross-section of students (e.g. students who do not regularly use the library) and to obtain a picture of how students conceptualize the library’s spaces that was not influenced by any direct visual references.

We used this protocol at four of the five ERIAL project libraries, but for simplicity, I’ll just use examples from one library.  The floor plan of this particular library looks like this:

 

[Floor plan of the library]

Students were allowed an open interpretation of the instructions, which resulted in a wide range of approaches.  For example:

[Two examples of student-drawn maps]

 

Coding these images basically involves counting the elements drawn in order to construct two indexes: an identification index, which is the number of times that an element is drawn divided by the total number of individuals participating (i.e. the percentage of the time the element occurs), and a representativeness index, which is the number of times an element is drawn divided by the number of times that category of element is drawn (e.g. the number of times a study room on the first floor is drawn divided by the number of times all study rooms are drawn) (see Colette Cauvin’s “Cognitive and cartographic representations: towards a comprehensive approach” for additional discussion).  I also constructed a temporal index for each element by coding the three colors in order (1 = Blue, 2 = Green, 3 = Red) and calculating the mean value for each element.  (You could do more complicated things by combining the indexes if you are mathematically inclined; however, I’ve found that these three address most questions.)

You can set up a spreadsheet in Excel to do this coding, or utilize the visual coding built into a QDA software package.  This process can be time-consuming, as every element must be coded.  You also need to decide which categories you will use (e.g. “chairs,” “computers,” “rooms,” etc.).  The presence or absence of every element needs to be coded for every drawing, so if you find a new element in a later drawing, you need to go back and code for it in all the previous drawings (this is akin to coding against a closed codebook).
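
To make the arithmetic concrete, here is a minimal sketch of the three index calculations. This is my own illustration rather than the ERIAL Project’s actual code, and the element names and input format are invented for the example: each drawing is coded as a list of (element, category, time) tuples, where time is the pen color in order (1 = blue, 2 = green, 3 = red).

```python
# Computes the identification, representativeness, and temporal indexes
# described above. Input format and element names are hypothetical.
from collections import defaultdict

drawings = {
    "student_01": [("circulation desk", "desks", 1), ("1F study room", "rooms", 2)],
    "student_02": [("circulation desk", "desks", 1), ("2F study room", "rooms", 1),
                   ("1F study room", "rooms", 3)],
    "student_03": [("circulation desk", "desks", 2)],
}

n_participants = len(drawings)
drawings_containing = defaultdict(int)  # drawings in which each element appears
element_counts = defaultdict(int)       # total times each element is drawn
category_counts = defaultdict(int)      # total times each category is drawn
time_codes = defaultdict(list)          # color codes (1-3) for each occurrence
category_of = {}                        # element -> category lookup

for coded_elements in drawings.values():
    # Count each element at most once per drawing for the identification index.
    for element in {e for e, _, _ in coded_elements}:
        drawings_containing[element] += 1
    for element, category, time in coded_elements:
        element_counts[element] += 1
        category_counts[category] += 1
        time_codes[element].append(time)
        category_of[element] = category

for element in sorted(element_counts):
    identification = drawings_containing[element] / n_participants
    representativeness = element_counts[element] / category_counts[category_of[element]]
    temporal = sum(time_codes[element]) / len(time_codes[element])
    print(f"{element}: identification={identification:.2f}, "
          f"representativeness={representativeness:.2f}, temporal={temporal:.2f}")
```

In this toy data, the circulation desk appears in all three drawings (identification = 1.00) and is usually drawn early (temporal = 1.33), while the second-floor study room appears only once (identification = 0.33).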

This is all fairly straightforward, except that there can be a lot of ambiguity in the drawings, and you will have to decide on rules for when something “counts” (this is why having students label things helps).

For example, in the drawing below, the computer stations (circled in orange) are clearly labeled, so these might be coded as element = “first floor information commons computers,” category = “computers,” time=1.

[Detail of a map with labeled computer stations]

In contrast, the following drawing has unlabeled squares and rectangles (circled in orange) where there are tables and periodicals shelving.  In this case, the coder must decide what the element represents.  Since the squares are in the correct position, we coded these as tables, and since the rectangles are the correct shape and in the correct position we coded these as periodical shelves.  This can obviously become complicated, and you will need to decide what rules work for your particular context.

[Map with unlabeled squares and rectangles]

Some high-identification elements were the reference and circulation desks, computer workstations, and study rooms, while low-identification elements were librarian offices, journals, and new books areas.  Importantly, much high-traffic library real estate was taken up by low-identification elements.  In this way, the blank spaces of the drawings can also be especially informative.  For example, in the following drawing almost the entire left side of the library is blank space.

[Map with the left side of the library left blank]

This area is where current periodicals are shelved.

While I think this method is extremely informative for researchers, I would also recommend using caution in interpreting the results.  The assumptions about the association between an individual’s conception of the library and the drawn representation can be questioned, and there are also a variety of potential sampling bias issues in the way sketch map methods are usually collected (e.g. problems stemming from convenience samples).  I therefore recommend utilizing this approach in conjunction with additional interviewing methods that can corroborate and add context to the findings.

 

Funded for Another Year!

We are excited to announce that Virginia Tech Libraries’ Instruction Learning Community has been funded for another year!  We just received the news that we have been awarded an Instructional Enhancement Grant from the Center for Instructional Development and Educational Research in the amount of $2,000.

We will send more information about the Spring 2013 Instruction Learning Community after the Fall 2012 Community wraps up.

Learning Community Meeting, 11/26/12

Today marked the third meeting of the Fall 2012 Instruction Learning Community; at this point, we’ve read seven of the ten chapters in College Libraries and Student Culture: What We Now Know.  The group decided that if we had to pick one theme or idea that represented the contents of the book, it would be that accessibility is key to any library’s outreach, reference, instruction, and other services.  This may mean building meaningful relationships with students, or simply making the building more comfortable and familiar for the students.  While the group identified a few areas where the schools in the study differed from our campus (Virginia Tech), we are still pulling important ideas and questions from the text.

After briefly reviewing each section during our monthly meetings, our group has fallen into a routine of brainstorming applications for our library and new research ideas that build on the ones discussed in the text.  The ideas and questions that we came up with this time around include:

Applications for Virginia Tech Libraries

  • Use the Ask a Librarian logo throughout the library, including at service points and reference desks, on librarians’ doors, and on the ends of shelves.  This will help students identify and locate where they can ask for help, ultimately encouraging them to do so more often.  Bruce O. will be working with Lori to begin printing these signs so that we can start using them ASAP.
  • Develop better directional information, something similar to mall kiosks
  • Add pictures of the librarians and their subject areas to the electronic bulletin boards that are now throughout the library
  • Investigate first generation college students on campus and consider new ways to reach out to them (e.g., College Librarians sending a letter or email)
  • Identify and reach out to other special groups (e.g., returning students/nontraditional students) on campus that may need extra encouragement and friendliness in the library

New Research Ideas

  • Research the different regions (particularly of Virginia) that students are coming from.  Compare socioeconomic factors, high school size, whether or not the high school had a school librarian, etc.  This research question would include gathering information from the University, researching school systems, and developing an ethnographic component that would include talking with students
  • Investigate special groups on campus (see above)

Our next meeting will be Monday, December 10, and we will discuss the rest of the book (Chapters 8-10), in addition to talking about how we will implement the ideas that we’ve brainstormed throughout the semester.

 

Students and content

The Chronicle of Higher Education had an interesting article this morning, “Teaching what you don’t know.”  James Lang made interesting points about how hard it is to teach content you don’t know.  Lang refers to Therese Huston’s book Teaching What You Don’t Know.  The book talks about instructors doing a better job explaining difficult concepts to students because they first had to break these concepts down so they themselves could understand them.  Sometimes as content experts we are so familiar with the concepts that we forget students don’t have the same kind of understanding we do.  Librarians can be terribly guilty of that.  To us the terms “reference” and “circulation” have great meaning, and understanding print indexes is second nature.  I mean, who doesn’t love a good romp through a print index followed by a scavenger hunt through the library for the needed article?  Not our students!

I had a conversation with a fellow librarian when I first came to Virginia Tech.  He was amazed to discover students had little experience with LC classification before they came here, which explained why students asked for the non-fiction or fiction areas of the library.  I think it is important to go back to the beginning, look at those basic concepts, and discover what part of the student experience we take for granted that maybe they don’t have.


LIBRARY update: Open Access Week for ‘extended campus’ users

This word just in about the Open Access Week events in the Virginia Tech Libraries that I listed in last week’s notice to my departments.

Both lectures on Monday, Oct. 15, will be recorded and archived in the VTechWorks repository:

  • Network enabled research: The challenge for institutions, Monday, Oct 15, 5:30-6:30pm, Graduate Life Center auditorium
  • Positioning Virginia Tech in the OA landscape, Monday, Oct 15, 6:45-7:45pm, GLC Auditorium

The technology mavens are working out whether we can stream these live.

They are also working on ways that these Faculty Development Institute sessions could be made available online:

  • Introduction to VTechWorks (FDI session — requires registration), Monday, Oct 15, 3-4:45pm, Torg 3080.
  • Introduction to Open Access (FDI session — requires registration), Tuesday, Oct 16, 10-11:45am, Torg 3060.

LIBRARY update: Open Access Week events; database changes; news in Newman Library

As part of our exploration of new models of scholarly communication, the Virginia Tech Libraries and the Graduate School will host a series of free presentations and workshops Oct 15-19, the sixth annual global Open Access Week, to raise awareness of “OA” and options Virginia Tech scholars have for providing the widest possible access to their research and scholarship.

Cameron Neylon, Director of Advocacy for the Public Library of Science, will serve as our OA Week keynote speaker and this year’s Distinguished Innovator in Residence.
His public lecture, “Network enabled research: The challenge for institutions,” is scheduled for Monday, Oct 15, 5:30-6:30pm, in the Graduate Life Center auditorium.

Other events:

  • Positioning Virginia Tech in the OA landscape, Monday, Oct 15, 6:45-7:45pm, GLC Auditorium
  • Introduction to VTechWorks (FDI session — requires registration), Monday, Oct 15, 3-4:45pm, Torg 3080 (following our OA-relevant FDI workshop on research data management plans)
  • Introduction to Open Access (FDI session — requires registration), Tuesday, Oct 16, 10-11:45am, Torg 3060
  • Faculty panel, Open access: Opening the doors to scholarship for all, Wednesday, Oct 17, 5:30-6:30pm, Torg 3080
  • Graduate student panel, Open access: Opening the doors to scholarship for all, Wednesday, Oct 17, 6:45-7:45pm, Torg 3080
  • Knowledge Drive to register members of the university in the VTechWorks institutional repository: Monday-Friday, October 15-19, 11am-1pm, in the lobbies of the Fralin Life Sciences Institute and the Graduate Life Center, and in Newman Library’s fourth-floor Port Research Commons.

I’m still awaiting word from the organizers about how the programs might be made available to grad students and faculty in the National Capital Region and other extended-campus locations.

At its simplest, OA provides an additional means for making your scholarly outputs — including work in nontraditional media — available to larger audiences than traditional academic journals can provide.

In instances where journal aggregators like Ebsco make titles affordable to us at the cost of delayed (“embargoed”) access, OA makes your work available while it’s freshest. Similarly, when publishers yank journals from aggregators, as Taylor & Francis did with hundreds of titles once in Ebsco databases in late summer — without either party deigning to advise librarians — OA offers a kind of insurance policy.

OA can be as simple as putting versions of your publications in VTechWorks, our digital institutional repository — which some government funders here and abroad now mandate — where we will curate your work for you, preserve it across technological changes, and provide consistent exposure to public search engines. This is the “green” OA model.

Moreover, if you wish to publish in journals with author-pays (“gold”) OA policies, we are now partners with the office of the provost and office of research in an OA publication subvention fund.

The Open Access movement in scholarly communications arose as an alternative publication model to dependence on commercial journal publishers (Elsevier, Wiley, Springer, Taylor & Francis [Routledge], Sage) and some scholarly societies (notably the American Chemical Society) whose subscription prices have long grown faster than libraries can justify. (See postscript, below.)  Most discussion in libraries and among policymakers about alternatives to subscription pricing has been grounded in sci-tech publishing, which is of course the field in which publication costs can be incorporated into grant funding.

Use the Open Access Week events to let us know how your scholarship could be affected. Don’t let the STEM-centered approach to OA box you in, whether in campus policies or your own publication choices. Look at the American Historical Association’s statement about the journal business from last month, invoking other humanities and social science societies.

And beware of predatory OA publishers, which may be little more than vanity presses. Jeffrey Beall, a librarian at the University of Colorado-Denver, produces a watch list of problematic OA publishers.

Database news

The ProQuest Congressional database is our primary resource for federal legislative history, congressional hearings, reports to Congress, reports by congressional committees, and the like.  It has shifted to a new interface, completing its migration from being a LexisNexis product.  Links and bookmarks you may have for the old version may not work.
I find the new version more useful overall than any Lexis interface, though it doesn’t match the new interface ProQuest imposed on most of its other databases over the summer.
Don’t overlook the congressional information included in our CQPress Electronic Library (including CQ Weekly and CQ Almanac for 1945-2011), Congressional Research Service Reports, and HeinOnline databases.

Trial access to Book Review Index Online Plus will run through October 22. We have several online book review products already, as well as a print subscription to Book Review Index. If there’s much interest in this product we can pursue switching the print to online.

In exploring our new (and intriguing) Sage Research Methods database subscription, I discovered that Sage offers free access to its content through October. We already subscribe to many, though not all, Sage journal packages, as well as some of the ebooks on the Sage Knowledge platform.  The promotion requires individual registration for each product: <http://www.sagepub.com/freetrial2012/>.

When you use a trial, whether we set it up or the vendor makes it available to you, please share your thoughts with us about how well it could serve the research and teaching missions of our university.

Library updates

As with other units across campus, the library’s latest strategic plan goes to the provost soon, aligned with the university long-range plan approved by the BOV in the spring.  It offers a broad map of the changes you are already seeing in the library’s services, collections, and physical environments.

Software available in campus labs (“CILS”) will be accessible on Newman Library workstations on Oct 15, when the campus Learning Technologies unit takes over management of the public computers throughout the building. Library branches will probably be included later.
This change will not affect computers in the new Port Research Commons, which is intended to be a place to explore both high-priced commercial applications (including GIS and CAD as well as statistical and digital-humanities applications) and their open-source alternatives, or the computers in either library classroom.

New self-checkout machines and book returns are installed — rather inconspicuously — near the cafe and Bridge entrances to Newman Library.
The signage is sparse but the process is straightforward: use the handheld scanner on your ID card and the library barcode, and print your receipt. Help phones that connect directly to the circulation desk are installed at each self-check station.

The entrance to the first floor by the main elevators is being rebuilt.

Events hosted by the Virginia Tech Libraries:

Postscript: Journal economics

If you’re interested in data about the costs of journal subscriptions:

Pragmatists vs idealists and why students don’t ask for help

Well, it’s fall, the students are back, and the group is working its way through a new book. This semester we are reading Duke and Asher’s “College Libraries and Student Culture.”  I thought the discussion about why students go to college was interesting.  Interesting in that the faculty wanted them to be there for the joy of learning.  As faculty, I can understand and relate to that.  As a parent, I think the purpose is to get a job!!!!  I remember the discussions with my oldest, who wanted to major in clarinet performance.  Few musicians have been able to make performance their main gig.  We had many arguments; I would help her get the performance degree, but she needed to find a way to pay the bills.  Hence the music education double major and a very ticked off daughter.  But now, 7 years later, she is a successful middle school band director who loves her students and enjoys her job… and pays her own bills!  Pragmatism vs idealism in real life.

One of my big questions that I hope to answer: why don’t students ask for more help?  I am hoping to get some insight into this from the book.  Yesterday morning I was scanning my emails and came across a link to this article: Open Thread Wednesday: Encouraging Students to Use Office Hours.  A professor had the same question.  Loved his suggestions, and the comments also had some ideas.  I took one important idea away from this article: the concept of getting the student to view you as a collaborator rather than an instructor.  Lots of new things to ponder… now I need to read the chapters in the book.
