Loughborough University

Advancing healthcare AI: Methods to improve robustness, transparency and user-centric integration

thesis
posted on 2025-04-07, 12:55 authored by Andrew Houston

Artificial Intelligence (AI) has emerged as a promising tool in healthcare, offering a means of augmenting clinical decision-making with the aim of improving patient outcomes. However, barriers such as a lack of trust, limited data availability and inadequate explanation mechanisms have been shown to hinder effective development and adoption. This thesis focuses on AI methods for classification tasks, addressing these barriers by proposing techniques to improve the robustness and transparency of AI classification predictions, while considering the needs of end users in front-end interface design.

Homogeneity in the complexity of publicly available medical datasets can limit the robustness with which newly developed algorithms are evaluated. Such homogeneity could manifest, for example, as publicly available datasets all sharing similar levels of class overlap or all lacking edge cases. Therefore, the first contribution of this thesis is Sy:Boid, a novel complexity-based synthetic data generation method. Sy:Boid offers a means of creating controllably difficult yet realistic datasets, optimising complexity using a multi-objective genetic algorithm. Compared with existing state-of-the-art approaches, Sy:Boid proved effective in generating more diverse datasets whose classification performance more closely resembles that of real-world datasets, providing a means of more rigorously evaluating new learning algorithms and of efficiently creating knowledge bases for meta-learning.
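To make "dataset complexity" concrete, the sketch below uses the leave-one-out 1-NN error rate, a simple class-overlap measure akin to the N3 measure from the data-complexity literature. This is a generic illustration, not Sy:Boid's actual objective functions, which the thesis defines; it merely shows how a separable dataset and an interleaved one score differently:

```python
from math import dist

def n3_complexity(points, labels):
    """Leave-one-out 1-NN error rate: the fraction of points whose
    nearest neighbour carries a different class label. Higher values
    indicate greater class overlap, i.e. a harder dataset."""
    errors = 0
    for i, p in enumerate(points):
        # nearest neighbour among all *other* points
        j = min((k for k in range(len(points)) if k != i),
                key=lambda k: dist(p, points[k]))
        if labels[j] != labels[i]:
            errors += 1
    return errors / len(points)

# Two well-separated clusters -> minimal complexity
easy_pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
easy_lbl = [0, 0, 0, 1, 1, 1]

# Alternating labels along a line -> maximal complexity
hard_pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)]
hard_lbl = [0, 1, 0, 1, 0, 1]

print(n3_complexity(easy_pts, easy_lbl))  # prints 0.0
print(n3_complexity(hard_pts, hard_lbl))  # prints 1.0
```

A complexity-based generator in the spirit of Sy:Boid would, in effect, search for datasets whose scores on measures like this match a requested target, rather than whatever complexity a generator happens to produce.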

Understanding when to act upon the predictions of an AI model is paramount for its safe and considered integration within existing clinical pathways. Existing approaches either fail to express such information in terms easily understood by non-specialist clinicians or are affected by factors such as the calibration, accuracy or overall quality of the model. Additionally, some approaches are intrinsic to a specific model's decision-making process, making them difficult to transfer to other models. Therefore, the second contribution of this thesis is a suite of class-independent meta-heuristics, embedded within a meta-learning framework, to quantify decision-making uncertainty. The proposed methods improve upon commonly used algorithms for identifying instances at risk of misclassification by producing uncertainty estimates that are more independent of model accuracy and calibration, whilst maintaining comparable levels of discrimination. The proposed heuristics and framework show potential for enhancing model development and transparency.
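A minimal sketch of a model-agnostic uncertainty heuristic may help fix the idea. The margin heuristic below is a standard, generic example and not one of the thesis's proposed meta-heuristics: it depends only on the model's output probabilities, so it transfers across models, though unlike the thesis's methods it remains sensitive to calibration:

```python
def margin_uncertainty(probs):
    """Generic model-agnostic heuristic: uncertainty is high when the
    gap between the two most probable classes is small. Operates only
    on the predictive distribution, never on model internals."""
    top, second = sorted(probs, reverse=True)[:2]
    return 1.0 - (top - second)

# A confident prediction scores low, an ambiguous one scores high.
confident = margin_uncertainty([0.90, 0.05, 0.05])
ambiguous = margin_uncertainty([0.40, 0.35, 0.25])
print(confident, ambiguous)
```

Because such a heuristic sees only the predictive distribution, it can be attached to any classifier; a meta-learning framework of the kind the thesis proposes can then learn, from held-out data, how scores like this relate to actual misclassification risk.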

Having a streamlined means of integrating newly developed solutions within the existing machine learning pipeline is integral to the uptake of such methods. As such, the third contribution of this thesis is AITIA, an AI Trust by Interrogation Python library, which integrates the methodologies developed in the preceding chapters. AITIA offers user-friendly functions for assessing model competencies, determining the robustness of models against concept drift, conducting misclassification analyses and implementing model abstention based on uncertainty. To demonstrate the functionality of the library, a case study is performed, developing a model to predict return-to-work outcomes in a military population of patients diagnosed with chronic exertional compartment syndrome.
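The abstention idea can be sketched generically. The function below is hypothetical and does not reproduce AITIA's actual API, which this abstract does not show: it simply refuses to predict, deferring the case, when a margin-based uncertainty score exceeds a threshold:

```python
def predict_with_abstention(probs, threshold=0.2):
    """Return the index of the predicted class, or None (abstain,
    i.e. defer to a clinician) when uncertainty is too high.
    Hypothetical helper; AITIA's real interface may differ."""
    # Rank classes by probability, highest first.
    ranked = sorted(range(len(probs)), key=lambda c: probs[c], reverse=True)
    # Margin-based uncertainty: small top-two gap -> high uncertainty.
    uncertainty = 1.0 - (probs[ranked[0]] - probs[ranked[1]])
    return ranked[0] if uncertainty <= threshold else None

print(predict_with_abstention([0.95, 0.03, 0.02]))  # prints 0
print(predict_with_abstention([0.45, 0.40, 0.15]))  # prints None (abstain)
```

Thresholding like this turns an uncertainty estimate directly into a safety behaviour: the model acts only where it appears competent and hands ambiguous cases back to the clinician.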

It is important, when developing end-user-facing methods, to understand the specific needs and expectations of clinicians regarding AI, so that well-considered interfaces can be designed to maximise adoption and effective use. The final contribution of this thesis is a mixed-methods study exploring healthcare professionals' perceptions and preferences regarding AI explainability methods. Using questionnaires and focus groups, the study provides insights into the effectiveness of different explainability techniques, which can be used to inform the design of AI interfaces that enhance end-user engagement and adoption in clinical practice.

History

School

  • Science

Department

  • Computer Science

Publisher

Loughborough University

Rights holder

© Andrew David Houston

Publication date

2024

Notes

A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of the degree of Doctor of Philosophy of Loughborough University.

Language

  • en

Supervisor(s)

Georgina Cosma

Qualification name

  • PhD

Qualification level

  • Doctoral

This submission includes a signed certificate in addition to the thesis file(s)

  • I have submitted a signed certificate
