The need for evidence innovation in educational technology evaluation
2014-09-03T12:54:21Z (GMT)
Increasingly complex and chaotic methods are being adopted in the development of technology to enhance learning and teaching in higher education, with the aim of achieving innovation in teaching practice. However, because this type of development does not conform to a linear, process-driven order, it is notoriously difficult to evaluate its success as a holistic educational initiative. It is proposed that five factors impair effective educational technology evaluation and contribute to insubstantial evidence of positive outcomes: premature timing; inappropriate software evaluation techniques and models; the lack of a shared understanding of the terminology, or semantics, of educational technology; the growing complexity of agile and open development; and the corporatisation of higher education. This paper suggests that it is no longer helpful for policy-makers to judge whether educational technology project outcomes were successful or unsuccessful; instead, they should use agile evaluation strategies to understand the impact of the product, process and outcomes in a changing context. It is no longer useful to ask, ‘did the software work?’ The key is for software developers and policy-makers to ask, ‘what type of software works, in which conditions and for whom?’ To answer this, the software development community needs to adopt evaluation strategies from the social sciences. Realist evaluation, for example, supplies context-driven, evidence-based techniques, exploring outcomes that tend towards the social rather than the technical. It centres on exploring the ‘mechanisms’, ‘contexts’ and ‘outcomes’ associated with an intervention, and it is a form of theory-driven evaluation in which the theory and reasoning of its stakeholders is rooted in practitioner wisdom.