The generation of automated student feedback for a computer-adaptive test
Conference contribution, posted on 25.05.2006, 15:31 by Mariana Lilley, Trevor Barker, Carol Britton
This paper marks further progression of research previously undertaken at the University of Hertfordshire on the use of computer-adaptive tests (CATs) in Higher Education. Findings from two previous empirical studies by the authors suggested that the CAT approach was a fair assessment method, capable of offering accurate and consistent measurement of student abilities. Participants in a pedagogical evaluation of the application indicated that one of the limitations of the approach was the type of feedback provided to students. According to the evaluators, the sole provision of a score would not help students to identify their educational needs. Providing students with a copy of all the questions they answered incorrectly did not seem an attractive option either, as it could jeopardise the re-use of these questions in future assessment sessions. Furthermore, it seemed unlikely that providing students with the questions alone, without any comment or explanation, would foster research and/or reflection skills. This paper reports on our most recent empirical study, in which the ability estimate θ for each student in each section of the CAT was used to generate automated feedback based on Bloom's taxonomy of cognitive abilities. The feedback was then sent directly to individual students via personal email. In the first section of this paper, we present an overview of our CAT research, followed by the main characteristics of the feedback tool we designed and implemented. In the final section, we present a summary of how learners performed on the CAT, along with student attitudes towards the automated feedback. In addition, we present our views on how the work described here can be developed further.
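The core mechanism described in the abstract, mapping a per-section ability estimate θ to a feedback message keyed to a level of Bloom's taxonomy and emailing it to the student, could be sketched as follows. The θ thresholds, level names, advice wording, and function names here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch: map a per-section ability estimate (theta) to
# feedback keyed to Bloom's taxonomy. Thresholds and wording are
# assumptions, not the rules used in the paper.

# (upper bound on theta, Bloom level, advice) -- ordered by ascending bound.
BLOOM_FEEDBACK = [
    (-1.0, "knowledge", "Revise the core terminology and facts for this topic."),
    (0.0, "comprehension", "Practise explaining the key concepts in your own words."),
    (1.0, "application", "Work through applied exercises using these concepts."),
    (float("inf"), "analysis", "Attempt analysis-level problems that compare approaches."),
]


def feedback_for(theta: float) -> tuple[str, str]:
    """Return (Bloom level, advice) for an ability estimate theta."""
    for upper, level, advice in BLOOM_FEEDBACK:
        if theta < upper:
            return level, advice
    # Unreachable because the last bound is +inf, kept for clarity.
    return BLOOM_FEEDBACK[-1][1], BLOOM_FEEDBACK[-1][2]


def compose_email(student: str, section_thetas: dict[str, float]) -> str:
    """Build a plain-text feedback email from per-section theta estimates."""
    lines = [f"Dear {student},", "", "Your personalised feedback:"]
    for section, theta in section_thetas.items():
        level, advice = feedback_for(theta)
        lines.append(f"- {section} (level: {level}): {advice}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(compose_email("A. Student", {"Databases": -0.4, "Networks": 1.3}))
```

In practice the composed message would be handed to a mail-delivery routine (e.g. via SMTP) to reach each student individually, as the paper describes.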
- CAA Conference