Empirical prediction of the measurement scale and base Level ‘Guess factor’ for advanced computer-based assessments
2006-05-24T14:51:24Z (GMT)
In our experience, insufficient consideration is often given to how the questions in computer-based assessments are scored. The advent of more complex question styles, such as those delivered by the TRIADSystem (Mackenzie, 1999), has made it much more difficult to predict the distribution of possible scores and the base-level guess factor than it is for tests containing simple multiple-choice questions. For example, the TRIADS drag-and-drop template allows each object to be allocated a different score (positive or negative) for each position, as well as allowing dummy objects and dummy positions to be defined. The number of possible scores for a random answer increases dramatically as the number of objects and positions increases, and although a 0 to 100 scoring scale is available, scores are likely to be concentrated about 'nodes' on this measurement scale. The positions of these 'nodes' vary with the structure of the question, and negative or penalty scoring may serve to 'smear' the mark distribution between them. Many tutors may find it difficult to predict the guess factor and may not appreciate the effect that the structure of a question can have on the range and distribution of final scores. To demonstrate to test designers the effects of question structure and score allocation on the guess factor and mark distributions, we are developing an empirical Marking Simulator. This program allows test designers and tutors to select a question type, enter the proposed structure and scores for each question, and then view the mark distribution and measurement scale that would result from a set of entirely random answers. Use of the Marking Simulator should result in more realistic setting of pass levels and generally enhance the quality of computer-based assessments.
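The core idea described above — estimating the mark distribution and mean guess factor for a drag-and-drop question by scoring entirely random answers — can be sketched as a small Monte Carlo simulation. The code below is an illustrative reconstruction, not the authors' Marking Simulator: the score-matrix layout, the example question, and all function names are assumptions made for the sketch. It assumes each random answer drops every object on a distinct position, and that the number of objects does not exceed the number of positions.

```python
import random
from collections import Counter

def simulate_guess_scores(score_matrix, n_positions, trials=10000, seed=0):
    """Estimate the mark distribution for entirely random answers.

    score_matrix[i][j] is the mark (positive or negative) awarded when
    object i is dropped on position j. Extra rows/columns can represent
    dummy objects and dummy positions, as in the TRIADS template.
    Returns a Counter mapping each total score to its frequency.
    """
    rng = random.Random(seed)
    n_objects = len(score_matrix)
    counts = Counter()
    for _ in range(trials):
        # One random answer: each object placed on a distinct random position.
        placement = rng.sample(range(n_positions), k=n_objects)
        total = sum(score_matrix[obj][pos] for obj, pos in enumerate(placement))
        counts[total] += 1
    return counts

# Hypothetical 3-object question with one dummy position carrying a -1 penalty:
matrix = [
    [5, 0, 0, -1],
    [0, 5, 0, -1],
    [0, 0, 5, -1],
]
dist = simulate_guess_scores(matrix, n_positions=4)
mean_guess = sum(s * n for s, n in dist.items()) / sum(dist.values())
```

Even this small example shows the 'node' effect: random answers cluster at a handful of discrete totals (here between -1 and 15), and the penalty column smears mass below the zero mark, while the mean of the distribution gives an empirical base-level guess factor for setting the pass mark.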