Posted on 2006-05-19, 15:21, authored by Jill Burstein, Claudia Leacock, Richard Swartz
Essay questions designed to measure writing ability, along with open-ended questions requiring short answers, are highly valued components of effective assessment programs, but the expense and logistics of scoring them reliably often present a barrier to their use. Extensive research and development in natural language processing at Educational Testing Service (ETS) over the past several years (see http://www.ets.org/research/erater.html) has produced two applications with the potential to dramatically reduce the difficulties associated with scoring these types of assessments.
Citation
BURSTEIN, J., LEACOCK, C. and SWARTZ, R., 2001. Automated evaluation of essays and short answers. Proceedings of the 5th CAA Conference, Loughborough: Loughborough University.