Abstract
When considering the variety of questions that can be used to measure students’ learning, instructors may choose multiple-choice questions, which are easier to score than responses to open-ended questions. By design, however, analyses of multiple-choice responses cannot capture the full extent of students’ understanding. One method for learning more about students’ understanding is to analyze the open-ended responses students provide when explaining their multiple-choice selections. In this study, we examined the extent to which introductory astronomy students’ performance on multiple-choice questions was comparable to their ability to provide evidence when asked to respond to an open-ended question. We quantified students’ open-ended responses by developing rubrics that allowed us to score the amount of relevant evidence students provided. A minimum rubric score was determined for each question based on two astronomy educators’ perception of the minimum amount of evidence needed to substantiate a scientifically accurate multiple-choice response. The percentage of students meeting both criteria of (1) attaining the minimum rubric score and (2) selecting the correct multiple-choice response was examined at three phases of instruction: directly before lab instruction, directly after lab instruction, and at the end of the semester. Results suggested that, at both the post-lab and post-instruction phases, a greater proportion of students were able to choose the correct multiple-choice response than were able to provide responses that attained the minimum rubric score.
Received 1 November 2013
DOI: https://doi.org/10.1103/PhysRevSTPER.10.020103
This article is available under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Published by the American Physical Society