The problem of assessing problem solving: can comparative judgement help?
Journal contribution posted on 13.04.2015 by Ian Jones and Matthew Inglis
School mathematics examination papers are typically dominated by short, structured items that fail to assess sustained reasoning or problem solving. A contributory factor to this situation is the need for student work to be marked reliably by a large number of markers of varied experience and competence. We report a study that tested an alternative approach to assessment, called comparative judgement, which may offer a superior method for assessing open-ended questions that encourage a range of unpredictable responses. An innovative problem solving examination paper was specially designed by examiners, evaluated by mathematics teachers, and administered to 750 secondary school students of varied mathematical achievement. The students’ work was then assessed by mathematics education experts using comparative judgement as well as a specially designed, resource-intensive marking procedure. We report two main findings from the research. First, the examination paper writers, when freed from the traditional constraint of producing a mark scheme, designed questions that were less structured and more problem-based than is typical in current school mathematics examination papers. Second, the comparative judgement approach to assessing the student work proved successful by our measures of inter-rater reliability and validity. These findings open new avenues for how school mathematics, and indeed other areas of the curriculum, might be assessed in the future.
This work was supported by a Royal Society Shuttleworth Research Fellowship to IJ, a Royal Society Worshipful Company of Actuaries Research Fellowship to MI, and the Nuffield Foundation.
- Mathematics Education Centre