Thoughts on ranking, evaluating, and liking

From this week’s GEDI F18 blog prompts on assessment, I was most interested in Peter Elbow’s essay on forms of judgment: ranking, evaluating, and liking. I highlighted several of Elbow’s passages and found myself writing in the margins “YES! I can relate to that!” What resonated with me the most, however, was his comment that ranking “is inaccurate or unreliable; it gives no substantive feedback; and it is harmful to the atmosphere for teaching and learning.” His example of grading unreliability – a single paper will receive the “full range of grades” from readers (based on research conducted in 1912!) – is extremely convincing.

Elbow also points out one of my major frustrations with grades – students tend to focus on the grade and ignore the feedback/evaluations. He says “I’m trying to get students to listen better to my evaluations—by uncoupling them from a grade.” I would argue that in some cases “better” should be deleted from that sentence, implying that students should simply listen to evaluations. I had a wonderful PhD adviser who told me to always include positive feedback on all of my students’ assignments. Before my students could submit their homework through a digital platform, I would often receive encouraging comments from them about my feedback. My comments made them feel good about their work, and they looked forward to picking up their assignments. Last semester, however, I TA’d a lab where all of the assignments were submitted through Canvas. My issue with the digital platform is that a student can see their grade without opening their assignment. As a result, students may never actually read the evaluations, especially if they are satisfied with their grade, and thus won’t benefit from the feedback.

I love that Elbow breaks down his grading into only two classifications, “Honors” and “Unsatisfactory.” From my limited teaching experience, I wholeheartedly agree that it’s easy to identify the good and the bad, but ranking the students in the middle is ambiguous and time-consuming, and I never feel good about doing it.

I also want to comment on Elbow’s statement that “many “A” students also end up doubting their true ability and feeling like frauds – because they have sold out on their own judgement and simply given teachers whatever yields an A.” I would add that in other situations some “A” students might feel the quality of their work was unworthy of that A because they know they did not put enough time, effort, or thought into an assignment – another problem with the grading/ranking form of assessment. In this situation, feedback is essential (e.g., maybe the quality of the writing was poor, but the student hit all of the major points of the assignment).

Elbow’s essay focuses on writing, so I wonder how his ideas on evaluating and liking could be applied to the sciences. Would scientists ever embrace these ideas? Or do they prefer multiple-choice assessments solely because they’re easy to grade, and shy away from assignments that require more rigorous evaluation? While I was reading, I tried to come up with other forms of evaluating students in the sciences. My default was still to consider tests of knowledge, but without the stress of a grade, e.g., end-of-the-week, ungraded or bonus “quizzes” that are reviewed in class, or the replacement of mid-term exams with a semester-long project. I also pondered using participation “points” to encourage liking, e.g., having students contribute an article related to that week’s topic to the course’s weekly folder. Does anyone have another idea or comment on evaluating and liking in the sciences?

9 Replies to “Thoughts on ranking, evaluating, and liking”

  1. Thank you, Kristine, for sharing your stories. I agree with you on using qualitative assessment instead of numbers or letters. Yet sometimes qualitative assessment can be as harmful as numbers or letters. We should try to write positive, motivating feedback that can help students do better.
    I remember one of my friends who is doing a PhD at VT in the computer engineering department. In this field, it’s very hard to publish in conferences – it’s even harder than publishing in journals! Every time my friend submits a paper, it gets rejected with negative feedback, but his advisor was wise and kept asking him to read not just the negative but also the positive feedback. His advisor would go over the positive feedback with him to show that his work had a lot of good things, even if it needed to improve a little bit.

  2. I totally empathize with your experience of students not reading your comments. When I started grading weekly assignments in Canvas, it was often the lowest performers I spent the most time providing feedback to, because I wanted to help them improve their grades. However, I noticed no improvement over the semester from those students…

    Also, I take “a single paper will receive the “full range of grades” from readers” as an indictment of the rubric, not of grades. If the rubric is clear, then readers’ grades would cluster more tightly. Not only would students write to the rubric, but graders would understand how to grade.

    You ask whether the sciences can get rid of quantitative “knowledge-testing” evaluations… and it’s a bit mixed. There is some information that students in certain careers must know, and therefore should be evaluated on. Other courses, or sections of courses, could evaluate differently – having group work/discussions that are essentially pass/fail, or projects that are about applying ideas previously presented in class (these are my favorite).
    However, a lot of intro science courses at large universities are huge. Grading anything other than a scantron is far harder; even just entering pass/fail evaluations for hundreds of students takes time.

  3. I relate to what you said about students not reading your feedback, and I’ll admit I’m also guilty of this. I have quite a few issues with Canvas, and this is a big one. I did have one teacher in high school and one in college who would not give you back a paper or grade without a personal meeting where you went over the feedback together. I have also found that the iterative process of sequential submissions, where students can be guided to better analysis, often works better for me – maybe because I still want the grade, but also because I actually improve.

    • Hey Ishi,
      I think the personal meeting requirement is great! Also, thanks for admitting that you don’t look for feedback. I’m guilty of this too. Thanks for your comment. I look forward to discussing this more in class on Wednesday.

  4. One thing I kept thinking of with this reading was the difference between graduate and undergraduate classes. I can relate to the student experience/actions you bring up (not reading feedback on good grades, etc) in terms of my past experiences from back when I was an undergraduate, but all of my graduate courses have been essentially pass-fail, even if there are letter grades assigned in the end. They’re based on creating a product that actually gives insight into how research is done and gives perspective on where current research in an area is heading.
    I don’t think that implementing these types of expectations would be unreasonable for undergraduates. It may even address some of the issues we’ve discussed in previous weeks of this class related to teaching a swiftly changing subject.

    • Hey Dana,
      You bring up two great points here: pass-fail could be used in undergraduate courses, and product-based courses could be used to address subjects that are swiftly changing. I’m sure the push-back from undergraduate instructors would be that their classes are typically larger than graduate-level courses. I look forward to discussing this further in class on Wednesday. Thanks for your comment!

  5. I think you are bringing up essentially the difference between quantitative and qualitative feedback for students. I am a self-proclaimed qualitative researcher, and while numbers often have their place, you could argue that they have ruined education. As you noted, words, especially positive ones, resonate with people. However, if students see the grade and are happy with it, they typically ignore the written comments–you know, the ones I took so long to craft for them! Perhaps if we moved to an entirely qualitative approach to grading, students would learn to ask more questions, get more feedback, and become more detail-oriented overall. These things could change education.

    • Kristen & Kathleen!
      Interesting thread, y’all. I agree with you both–I’m also a believer in qualitative methods for assessment because I think they give us greater flexibility when trying to understand what our students have picked up from us in class, projects, and other types of assignments. I can relate to the frustration of carefully crafting feedback for students–many of whom never see the comments or make improvements based on them. That can be so frustrating.

      Looking back on my early years as a TA in Landscape Architecture, when I was asked to “grade” drawings and sketches, I felt so utterly uncomfortable trying to assess the student work when there was so much variety in ability and intent. If I knew then what I do now, I would have done more to reassure and guide students. Instead, I wielded a red pen with no mercy for anyone. (With no rubric, I thought this was the only fair way to go.) Looking back, I’m not proud of what I did. I was the “mean TA that graded hard” because I didn’t know there were other ways of being in that position. I hope I didn’t kill the love of drawing for any budding artists. The thought makes me sit straight up in bed at night. :( Never again.

      • Hey Kathleen and Sara,
        Agreed, qualitative assessment is the way to go if we want students to improve, learn, and feel better about their progress. Kathleen, from my experience, and perhaps yours as well, it’s not only the students who are happy with their grades who aren’t reading the feedback; it’s also the students who are complacent with their grades, e.g., the students who are used to getting C’s. I wonder how the quality of the C students’ work would improve if they weren’t graded until the very end of the semester? Sara, I can relate to your early years as a TA. I’d say most TAs aren’t encouraged to be positive when it comes to grading. The first time I TA’d, as a senior undergraduate student, I scribbled in red pencil all over my students’ reports. Then I TA’d for my graduate school adviser, and she told me I could only grade in PURPLE pen. That in itself made a huge difference. Even if I was writing “not quite 0,” the students felt better about it. Later, I had a similar experience TA’ing a Physical Chemistry lab where the students have to write full lab reports. Report after report, I would cover some students’ papers with my purple-pen comments and suggestions. At the end of the semester, some of my students commented that I was a hard grader and that they had to work harder on their reports than the students in other sections. However, they were happy about it because they actually learned how to improve their report writing. For me, that felt like an accomplishment. Thanks for your comments!
