Characteristics of “Good” Assessments, or, Assessments That Are Actually Aligned with Pedagogy

“One of the most dangerous and persistent myths in American education is that the challenges of assessing student learning will be met if only the right instrument can be found – the test with psychometric properties so outstanding that we can base high-stakes decisions on the results of performance on that measure alone” (Association of American Colleges and Universities, 2007, p. vii).

[I] Good assessments yield reasonably accurate, truthful, and generalizable evidence of student learning, and that evidence is used to inform meaningful, substantive changes to teaching and learning.

  • Good assessments are clear to everyone.
  • Good assessments have an appropriate range of outcome levels.
  • Good assessments maintain an appropriate balance among quality, dependability, and usefulness.

[II] Good assessments focus on important learning goals.

[III] Good assessments include direct evidence of student learning (Suskie, 2018).

  • “No assessment of knowledge, understanding, or thinking or performance skills should consist of indirect evidence alone” (para. 9).

[IV] Good assessments are significant (Suskie, 2018).

  • “Quizzes, homework, and class discussions can give us insight into whether students are on track to achieve key learning goals” (para. 10).
  • “But far more useful are significant learning activities that ask students to demonstrate or perform the skills they have learned, especially learning activities that mirror real-world experiences” (para. 10).

[V] Good assessments are fair, unbiased, and conducted ethically.


Selected References

Association of American Colleges and Universities. (2007). A brief history of student learning assessment: How we got where we are and a proposal for where to go next (R. J. Shavelson, C. G. Schneider, & L. S. Shulman). Washington, DC: Author.

Suskie, L. (2018). Assessing student learning: A common sense guide (3rd ed.). San Francisco, CA: Jossey-Bass.


