Posted on 2022-01-05, 15:20. Authored by Naoum Tsioptsias.
This thesis addresses the idea that even "wrong" models in the field of Operational Research (OR) and Simulation, and more specifically in Discrete Event Simulation (DES), may still be useful and may even offer learning for their users.
The inspiration for the topic stems from literature suggesting that even a model viewed as "wrong" may still offer certain benefits, especially considering that different people may perceive and evaluate models differently. Yet there is a dearth of evidence as to what constitutes a "wrong" model and what its possible usefulness might be. This project aims to address these gaps using an empirical approach.
To achieve this aim, three objectives are addressed. The first objective identifies factors of wrongness in practice, categorised under model characteristics. The second objective explores the extent to which "wrong" models are used in practice. The last objective investigates whether learning can occur from "wrong" models in practice. The implementation builds on previous work that has considered learning within OR and Simulation. Additionally, a specific learning framework from the field of psychology is utilised.
To accomplish the above objectives, two different studies are carried out, an exploratory study and an experimental study, addressing the topic of wrongness qualitatively and quantitatively. The exploratory study uses semi-structured interviews with DES modellers, who report their experience of models considered "wrong". A focused analysis of extreme cases of simplification also takes place, where simple and complex models considered "wrong" are examined with respect to their usefulness and learning. The experimental study consists of a laboratory-based experiment with students to test attitude changes towards a problem when using an "adequate" and an "oversimplified" model. The aim is to measure differences in learning, confidence, model perception and model usefulness between the two versions.
The outcomes of the interviews suggest that "wrong" models are encountered in practice and can indeed be useful, offering learning not only for their users but also for the modellers involved. Specific factors of wrongness and uses of such models are identified and discussed, while the exploration of learning leads to suggestions on how to deal with "wrong" models in OR and Simulation. The findings from the experimental study further support the indication that simple models can be "wrong" but still useful. They may also change users' beliefs, as users of the oversimplified model achieved learning outcomes similar to those of users of the adequate model.
The thesis offers a detailed investigation of "wrong" models. The contributions are: identification of what is considered a "wrong" model in practice, possible uses to consider when "wrong" models are encountered, and evidence that learning can be acquired even from "wrong" models. These contributions expand the current literature and lead to a better understanding of model wrongness in simulation, including practical suggestions for the use of "wrong" models.