A user-centric approach to explainable Artificial Intelligence in industry
The emergence of Explainable AI (XAI) is driven by the need to address the inherent inability of many AI systems to clarify their decision-making processes. The capacity to explain these decisions has become increasingly critical in the context of the Industry 5.0 revolution, where rapid digitalisation demands AI systems that are both reliable and understandable. XAI offers a solution by providing explanations for AI-generated decisions, thereby enhancing confidence in AI systems across industries. Without such explanations, end-users' understanding of AI operations is significantly impaired, leading to diminished trust in AI outputs.
Current gaps in XAI research stem primarily from three challenges: the lack of accessible, comprehensive information sources that help end-users understand XAI's utility and function; the absence of robust requirements elicitation methods tailored to the cognitive needs of end-users seeking explanations; and the insufficient deployment of explanation systems designed with a user-centric approach that aligns with end-users' specific requirements. Addressing these gaps is crucial to advancing XAI's effectiveness and adoption, ensuring that explanation systems are both practical and meaningful to their intended audience.
To address these gaps, this dissertation develops innovative approaches that highlight the utility and function of XAI within a digitalised ecosystem. It further introduces a novel requirements elicitation method specifically tailored to the cognitive needs of end-users, ensuring that XAI systems align with their expectations for trustworthy and understandable explanations. Additionally, the research explores contemporary planning methods that span the various phases of XAI implementation, focusing on technical specifications that directly address end-user requirements. By emphasising the delivery of high-quality explanations, this work aims to bridge the divide between technical feasibility and user-centric design in XAI systems.
This research proposes an Explainable AI (XAI) system architecture designed to help end-users better understand the utility and function of XAI within the broader AI ecosystem. It introduces the design and implementation of a Conceptual Model for Requirements Elicitation that specifically addresses end-users' needs for explanations, thereby enhancing their understanding, trust, and satisfaction with both AI and XAI systems. Moreover, the research outlines a structured approach for translating these captured requirements into technical specifications, facilitating the design, development, and deployment of user-centric XAI systems.
The findings indicate that when end-users are given a clear understanding of the capabilities and benefits of XAI, they can more effectively articulate their needs for explanations that improve their comprehension of, and trust in, AI systems. The research further underscores the importance of identifying which aspects of the system need to be explained, how these explanations should be communicated, and the cognitive impact the explanations have on end-users' understanding. This holistic approach ensures that XAI systems are not only technically sound but also aligned with the cognitive needs and expectations of their users. The use of industrial case studies and open datasets, available under Creative Commons licences, further validates the broader applicability of the proposed methods and techniques, demonstrating their relevance across diverse contexts and domains.
The application of the conceptual model for requirements elicitation in an industrial setting proved to be highly impactful, enabling end-users to effectively articulate their intrinsic needs for explanations. This approach facilitated a deeper understanding of the specific explanations required by users, aligning the system design with their expectations. Additionally, the deployment planning of the XAI system was significantly enhanced by the Explainable Quality Function Deployment approach introduced in this research. This method provided a structured framework for ensuring that the system's design, development, and deployment directly addressed end-users' requirements, ultimately improving the overall effectiveness and user satisfaction with the XAI system.
School
- Mechanical, Electrical and Manufacturing Engineering
Publisher
- Loughborough University
Rights holder
- © Maria Aslam
Publication date
- 2024
Notes
- A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of the degree of Doctor of Philosophy of Loughborough University.
Language
- en
Supervisor(s)
- Diana Segura-Velandia; Yee Mey Goh
Qualification name
- PhD
Qualification level
- Doctoral
This submission includes a signed certificate in addition to the thesis file(s)
- I have submitted a signed certificate