From this week’s GEDI F18 blog prompts on assessment, I was most interested in Peter Elbow’s essay on forms of judgment: ranking, evaluating, and liking. I highlighted several of Elbow’s passages and found myself writing in the margins, “YES! I can relate to that!” What resonated with me the most, however, was his comment that ranking “is inaccurate or unreliable; it gives no substantive feedback; and it is harmful to the atmosphere for teaching and learning.” His example of grading unreliability – that a single paper will receive the “full range of grades” from different readers (based on research conducted in 1912!) – is extremely convincing.
Elbow also points out one of my major frustrations with grades – students tend to focus on the grade and ignore the feedback and evaluations. He says, “I’m trying to get students to listen better to my evaluations—by uncoupling them from a grade.” I would argue that in some cases “better” should be deleted from that sentence; students should simply listen to evaluations. I had a wonderful PhD adviser who told me to always include positive feedback on all of my students’ assignments. Before my students began submitting their homework through a digital platform, I would often receive encouraging comments from them about my feedback. My comments made them feel good about their work, and they looked forward to picking up their assignments. Last semester, however, I TA’d a lab where all of the assignments were submitted through Canvas. My issue with the digital platform is that students can see their grade without opening the assignment itself. Therefore, students may never actually read the evaluations, especially if they are satisfied with their grade, and thus won’t benefit from the feedback.
I love that Elbow breaks down his grading into only two classifications, “Honors” and “Unsatisfactory.” From my limited teaching experience, I wholeheartedly agree that it’s easy to identify the good and the bad, but ranking the students in the middle is ambiguous and time-consuming, and I never feel good about doing it.
I also want to comment on Elbow’s statement that “many ‘A’ students also end up doubting their true ability and feeling like frauds – because they have sold out on their own judgement and simply given teachers whatever yields an A.” I would add that in other situations, some “A” students might feel that the quality of their work was unworthy of that A because they know they did not put enough time, effort, or thought into an assignment – another issue with the grading/ranking form of assessment. In this situation, feedback is essential (e.g., maybe the quality of the writing was poor, but the student hit all of the major points of the assignment).
Elbow’s essay focuses on writing, so I wonder how his ideas on evaluating and liking could be applied to the sciences. Would scientists ever embrace these ideas? Or do they prefer multiple-choice assessments solely because they’re easy to grade, shying away from assignments that require more rigorous evaluation? While I was reading, I tried to come up with other forms of evaluating students in the sciences. My default was still to consider tests of knowledge, just without the stress of a grade: e.g., ungraded or bonus end-of-the-week “quizzes” that are reviewed in class, or the replacement of mid-term exams with a semester-long project. I also pondered using participation “points” to encourage liking, e.g., having students contribute an article related to that week’s topic to the course’s weekly folder. Does anyone have another idea or comment on evaluating and liking in the sciences?