Posted on 2006-05-25, 17:17. Authored by Richard A. Bacon.
In previous papers (Bacon 2003, 2004) the author described some of the results obtained from surveys of the use of the SToMP testing system for the coursework assessment of one first-year and one second-year module within a Physics degree programme. This paper will deal with the progressive modifications made as a result of student feedback from those trials, and with a preliminary analysis of the feedback obtained from students using the updated tests.
The SToMP testing system was written in 2001/2 as a direct implementation of the IMS-QTI v1.2 specification, but includes several extensions for handling numeric problems of the type frequently found within science and engineering courses. Such problems typically require a numeric answer to be judged by its precision (e.g. the number of significant figures) as well as its accuracy (i.e. whether the value falls within a specified range). The system must also be able to recognise alternative forms of the same value and precision in scientific format. These features were mapped onto a suite of extensions to the QTI specification for ease of implementation, and this suite includes other features such as alternative number bases and the randomisation of values within questions.
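The mechanics of such a judgement can be sketched as follows. This is a minimal illustration in Python, not the SToMP implementation: the function names, the relative-tolerance model and the significant-figure counting rule are all assumptions made for the example.

```python
def sig_figs(answer: str) -> int:
    """Count significant figures in a numeric string such as '3.20e-5'.
    Hypothetical helper: SToMP's actual rules are not reproduced here."""
    mantissa = answer.lower().split("e")[0].lstrip("+-")
    return len(mantissa.replace(".", "").lstrip("0"))

def judge(answer: str, expected: float, rel_tol: float, required_sf: int) -> dict:
    """Judge a response for accuracy (value within a relative tolerance of
    the expected value) and for precision (significant figures), independently."""
    value = float(answer)            # accepts '0.0000320' and '3.20e-5' alike
    return {
        "accurate": abs(value - expected) <= rel_tol * abs(expected),
        "precise": sig_figs(answer) == required_sf,
    }

# Two forms of the same value and precision are judged identically:
print(judge("3.20e-5", 3.2e-5, 0.01, 3))    # {'accurate': True, 'precise': True}
print(judge("0.0000320", 3.2e-5, 0.01, 3))  # {'accurate': True, 'precise': True}
```

Judging accuracy and precision separately, as here, allows the two qualities to be reported (and marked) independently, which matches the distinction the abstract draws between them.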
One of these features supports the propagation of errors in multi-part numeric problems: a wrong answer to an early part of the problem is remembered by the system and used to generate alternative answers to later parts. A student who makes such a mistake therefore loses marks only for the part in which the mistake was made, not for later parts. A previous year's trial of this feature, although it contributed to students' marks, was not appreciated by the students because they had not been informed of it. The results of this year's trial, in which students were informed (despite academic misgivings that this would affect the care with which they prepared their answers), will be reported.
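A minimal sketch of this error-carried-forward idea, assuming a two-part problem and a simple relative tolerance (the names and the one-mark-per-part scheme are illustrative, not SToMP's):

```python
def mark_two_part(student_a: float, student_b: float,
                  true_a: float, follow_on, tol: float = 0.01) -> list:
    """Error-carried-forward marking for a two-part numeric problem.
    `follow_on` maps a part (a) value to the expected part (b) value.
    Illustrative only; the SToMP/QTI extension is not reproduced here."""
    def close(x, y):
        return abs(x - y) <= tol * abs(y) if y else abs(x - y) <= tol

    marks = [1 if close(student_a, true_a) else 0]
    # Judge part (b) against the answer implied by the student's own (a),
    # so an early slip is penalised once, not again in every later part.
    expected_b = follow_on(true_a if marks[0] else student_a)
    marks.append(1 if close(student_b, expected_b) else 0)
    return marks

# Part (a): v = 2.0; part (b): KE = 0.5 * m * v**2 with m = 3.0.
# A student who gets v = 2.5 (wrong) but computes KE correctly from
# their own v still earns the mark for part (b):
print(mark_two_part(2.5, 0.5 * 3.0 * 2.5**2, 2.0,
                    lambda v: 0.5 * 3.0 * v**2))  # [0, 1]
```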
The type of numeric problem mainly dealt with in this paper is one in which the student must identify one or more formulae as being appropriate, solve them for the parameter required by the question, substitute suitable values for the expression's parameters and then calculate a final value. A previous paper (Bacon 2004) described a partially successful strategy for assessing the quality of a student's working when they arrived at a wrong final answer: the student entered the numeric expression from which they obtained the answer, as well as the final value itself. Half the respondents to the survey of this system complained of "no marks being available for their working", which was not unreasonable, since the expression was only evaluated. This paper will describe further work on the analysis of the structure of students' expressions, which has led to more detailed marking and more helpful feedback being available to those students who do not get their answer or their expression correct. The system will be described and students' perceptions will be reported.
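One plausible way to analyse the structure of a student's expression, sketched here with Python's own parser, is to compare its operators and substituted values against a model expression. The two-mark scheme and the comparison rule are assumptions for illustration, not the method reported in the paper.

```python
import ast

def structure(expr: str):
    """Return the operator names and numeric constants in an expression,
    as a crude proxy for its algebraic structure. Illustrative only."""
    tree = ast.parse(expr, mode="eval")
    ops, nums = [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp):
            ops.append(type(node.op).__name__)
        elif isinstance(node, ast.Constant):
            nums.append(node.value)
    return sorted(ops), sorted(nums)

def part_marks(student_expr: str, model_expr: str) -> int:
    """Award 0-2 marks: one for the right operator structure,
    one for the right substituted values."""
    s_ops, s_nums = structure(student_expr)
    m_ops, m_nums = structure(model_expr)
    return int(s_ops == m_ops) + int(s_nums == m_nums)

# Right formula and values (any keying slip would show in the final value):
print(part_marks("0.5 * 3.0 * 2.0**2", "0.5 * 3.0 * 2.0**2"))  # 2
# Right values, wrong structure (divided instead of multiplied):
print(part_marks("0.5 * 3.0 / 2.0**2", "0.5 * 3.0 * 2.0**2"))  # 1
```

Marking the structure and the substituted values separately is what makes targeted feedback possible: the system can tell a student whether the formula or the values were at fault, rather than only that the final answer was wrong.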
School: University Academic and Administrative Support
Department: Professional Development
Research Unit: CAA Conference
File size: 80799 bytes
Citation
BACON, R.A., 2005. Discriminative Marking of Numeric Problems. In: Proceedings of the 9th CAA Conference. Loughborough: Loughborough University.