Selected Response Assessments
Over the next few weeks, I plan to spend some time covering the basics of the different sorts of assessment tools that are available for our courses. Each of these techniques has its own unique strengths and weaknesses. I do not view any one of them as the “ideal” method for assessment. They are, after all, just tools that we use in trying to measure student learning, instructional effectiveness, and programmatic efficacy. Like any other tool, they can be handled skillfully or poorly wielded. The fact that some amateur carpenters might injure themselves while using a nail gun does not make the nail gun a poor tool. A poorly conceived assessment strategy is similarly unproductive (though typically less physically painful). A master carpenter will use a variety of tools (each carefully chosen and carefully applied) to complete a task. Likewise, when we intelligently select tools from our assessment toolbox and apply them in our courses, we can greatly enhance our students’ learning. This week, I would like to begin with a much-maligned assessment tool: the selected response (multiple-choice) exam.
Selected-response assessments are composed of a series of questions or statements (items) that the students must answer. The question or statement for each item is usually called the stem. The students pick from a set of potential answers: the correct response and one or more incorrect choices (usually called the distractors). The number of distractors may vary, but most instructors use between one and four.
Selected-response assessments have several important advantages (which is why this type of assessment is still so commonplace). The following is a brief list of some of the chief advantages of this sort of assessment:
- They are objectively scored (either correct or incorrect)
- They are easy to automate (Scantron, clickers, and online exams can be used)
- They are quick to respond to (more questions can be asked on an exam)
- They are easy to analyze (a variety of statistical tests can be applied to MC questions)
- They are relatively easy to construct and modify over time
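To make the "easy to analyze" point concrete, here is a minimal sketch of two classical item-analysis statistics: the difficulty index (the proportion of students answering an item correctly) and an upper-lower discrimination index (how much better high-scoring students do on the item than low-scoring students). The function names, the 0/1 score matrix, and the 27% group split are illustrative conventions, not part of any particular gradebook tool.

```python
# Item-analysis sketch, assuming scores are stored as a 0/1 matrix
# (rows = students, columns = items). Names here are hypothetical.

def item_difficulty(scores, item):
    """Difficulty index: proportion of students who got the item right."""
    col = [row[item] for row in scores]
    return sum(col) / len(col)

def item_discrimination(scores, item, top_frac=0.27):
    """Upper-lower discrimination index: p(top group) - p(bottom group).
    The 27% split is a common convention in classical test theory."""
    totals = [sum(row) for row in scores]
    ranked = sorted(range(len(scores)), key=lambda i: totals[i], reverse=True)
    n = max(1, round(top_frac * len(scores)))
    top = [scores[i][item] for i in ranked[:n]]       # strongest students
    bottom = [scores[i][item] for i in ranked[-n:]]   # weakest students
    return sum(top) / n - sum(bottom) / n

# Example: four students, three items
scores = [
    [1, 1, 1],  # strong student
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],  # weak student
]
print(item_difficulty(scores, 0))      # 0.75
print(item_discrimination(scores, 1))  # 1.0
```

Items with very high or very low difficulty, or with near-zero discrimination, are the ones worth revising when modifying an exam over time.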
Selected-response assessments have received quite a bit of bad press over the past couple of decades. Much of this criticism was well deserved. However, the real weaknesses that have been pointed out reflect poor assessment implementation more than a fundamental weakness in the assessment format. Some weaknesses that are often attributed to selected-response assessments include:
- Multiple-choice questions do not test higher levels of cognition (applying, analyzing, evaluating, and creating)
- Multiple-choice questions favor students that are test-wise
- Multiple-choice questions reward guessing rather than knowing
- Multiple-choice questions are not authentic assessments (they do not reflect real-world situations)
- Multiple-choice questions give very limited feedback (formative assessment) to the students
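The guessing concern, at least, can be quantified. The sketch below shows the expected raw score from blind guessing and the classical correction-for-guessing formula (score = R − W/(k−1), where k is the number of answer options), under which pure guessing nets an expected score of zero. The function names are illustrative.

```python
# Sketch of the "rewards guessing" concern, using the classical
# correction-for-guessing formula. Function names are hypothetical.

def expected_guess_score(num_items, num_options):
    """Expected raw score if a student guesses randomly on every item."""
    return num_items / num_options

def corrected_score(right, wrong, num_options):
    """Formula score R - W/(k-1): wrong answers are penalized so that
    blind guessing has an expected corrected score of zero
    (omitted items are not penalized)."""
    return right - wrong / (num_options - 1)

# A 40-item exam with 4 options per item:
print(expected_guess_score(40, 4))  # 10.0 raw points from pure guessing
# A pure guesser expects 10 right and 30 wrong; the correction nets zero:
print(corrected_score(10, 30, 4))   # 0.0
```

This also shows why adding distractors helps: with five options instead of four, blind guessing on the same exam yields an expected 8 points rather than 10.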
Over the next few days, I plan to share with you some of my thoughts and experiences with this sort of assessment. I think that I have some very practical and useful suggestions for leveraging this sort of assessment in many different (but not all) courses. Tomorrow, I will discuss how to construct multiple-choice questions for maximum effect. See you then.