Automated feedback for a computer-adaptive test: a case study

This paper reports an empirical study on the generation of automated feedback for a computer-adaptive test (CAT) application. Two groups of Computer Science undergraduate students (N=106 and N=82) participated in an assessment session using our CAT application. Participants had 40 minutes to answer 30 questions organised into 5 topics within the Visual Basic .NET subject domain. Feedback on CAT performance was delivered via a web-based application designed and implemented for this purpose, and was divided into three sections: overall proficiency level, performance in each topic, and recommended topics for revision. Thirty-one participants from the first group and 25 from the second rated the usefulness of the feedback from 1 (not useful) to 5 (very useful), yielding mean ratings of 4.10 and 3.52, respectively. These results were taken to indicate that learners’ attitude towards the feedback approach employed was positive overall.
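The three-section feedback report described above (overall proficiency level, per-topic performance, and recommended revision topics) can be sketched as a simple data structure. The following Python sketch is illustrative only: the function name `build_feedback`, the proficiency thresholds, and the rule "recommend any topic scoring below a pass threshold" are assumptions for demonstration, not the paper's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackReport:
    """Three-section feedback report, mirroring the structure in the study."""
    overall_proficiency: str            # e.g. "beginner" / "intermediate" / "advanced"
    topic_scores: dict                  # per-topic performance, scores in [0, 1]
    recommended_topics: list = field(default_factory=list)  # topics flagged for revision


def build_feedback(topic_scores, pass_threshold=0.6):
    """Build a feedback report from per-topic scores.

    Thresholds and labels here are hypothetical placeholders.
    """
    overall = sum(topic_scores.values()) / len(topic_scores)
    if overall >= 0.8:
        level = "advanced"
    elif overall >= pass_threshold:
        level = "intermediate"
    else:
        level = "beginner"
    # Recommend for revision every topic below the pass threshold.
    recommended = sorted(t for t, s in topic_scores.items() if s < pass_threshold)
    return FeedbackReport(level, topic_scores, recommended)


# Example: five topics, as in the study's question organisation (topic names invented).
scores = {"Variables": 0.9, "Loops": 0.5, "Arrays": 0.7, "Functions": 0.4, "Classes": 0.8}
report = build_feedback(scores)
print(report.overall_proficiency)   # intermediate
print(report.recommended_topics)    # ['Functions', 'Loops']
```

A real CAT would derive the overall proficiency from the adaptive item-response estimate rather than a plain mean; the sketch only shows how the three report sections relate to per-topic results.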