Dec 4 2016
Evaluating the Evaluation of Educators
I have been teaching at Virginia Tech since the spring semester of 2015. The teaching bug bit me, and it bit me hard. I thoroughly enjoy teaching, however aggravated I get with some students. I do what I can to reach out to students who might be struggling, but the student has to meet me halfway, and unfortunately some do not accept my offer of assistance. These students, who typically earn lower grades in my courses, are most likely the ones giving me poor reviews on my SPOT surveys; high grades are not given freely in my courses. I have received some awful comments on my evaluations, though fortunately more students give me positive ones. I also know of a few cases where adjuncts were dismissed over poor student evaluations, and I have overheard groups of students say they will band together and make sure each of them intentionally gives an instructor a negative evaluation because they do not like the instructor or disagree with what the instructor is attempting to teach them.
According to Inside Higher Ed, “A number of studies suggest that student evaluations of teaching are unreliable due to various kinds of biases against instructors. Yet conventional wisdom remains that students learn best from highly rated instructors; tenure cases have even hinged on it.
What if the data backing up conventional wisdom were off? A new study suggests that past analyses linking student achievement to high student teaching evaluation ratings are flawed, a mere “artifact of small sample sized studies and publication bias.”
‘Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between [student evaluations of teaching, or SET] ratings and learning,’ reads the study, in press with Studies in Educational Evaluation. ‘Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between [evaluation] ratings and learning.’
These findings ‘suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty’s teaching effectiveness,’ the study says.”
“The entire notion that we could measure professors’ teaching effectiveness by simple ways such as asking students to answer a few questions about their perceptions of their course experiences, instructors’ knowledge and the like seems unrealistic given well-established findings from cognitive sciences such as strong associations between learning and individual differences including prior knowledge, intelligence, motivation and interest,” the paper says. “Individual differences in knowledge and intelligence are likely to influence how much students learn in the same course taught by the same professor.”
December 4, 2016 @ 16:41
My experience, too, is that high-achieving students will readily avail themselves of any offer of assistance, while the students for whom the assistance was designed will not take it. Maybe they are waiting for a time when they feel that they are “worthy” of appearing at the instructor’s door without shame, with all their problems solved. Somehow, it seems that the highly successful students do not participate in the evaluation process as visibly as others. It may be that the entire process of rating instructors is so deeply flawed that it should be scrapped. There must be a better way.
December 5, 2016 @ 19:02
Although SPOT has many limitations, it still exists and plays an important part in teaching evaluation. Every teacher wants high student participation in SPOT. So SPOT exists for a reason; perhaps SPOT itself needs to be improved, along with the way its results are evaluated.