Automated evaluation of essays and short answers

Essay questions designed to measure writing ability, along with open-ended questions requiring short answers, are highly valued components of effective assessment programs, but the expense and logistics of scoring them reliably often present a barrier to their use. Extensive research and development efforts in natural language processing at Educational Testing Service (ETS) over the past several years (see http://www.ets.org/research/erater.html) have produced two applications with the potential to dramatically reduce the difficulties associated with scoring these types of assessments.

License

CC BY-NC-ND 4.0