Barriers to trust in AI: A study of the explainability technologies adoption in banks
With rapid technological progress, especially in the fields of artificial intelligence (AI) and machine learning (ML), new prospects are opening for banks and financial institutions. This research aims to understand the barriers to banks' trust in AI. The study is based on qualitative research into the drivers of adopting explainability technologies in banks, which improve the transparency and perceived reliability of complex AI algorithms. By applying human-centred and ethics-by-design approaches, combined with interviews of senior stakeholders, the paper identifies the main issues related to the deployment of AI algorithms in Eastern European banks and among IT AI/ML developers. The results outline a contrasting view of the barriers to AI; they also underline the importance of AI/ML technologies in banking, the growing readiness to implement them more extensively, and the challenges of implementing AI/ML systems, which are mostly associated with cost and economic efficiency. The study's geography was chosen because AI/ML adoption is growing sharply in Eastern European banks. In each of the surveyed banks, however, the "leapfrogging" AI/ML path had its own drivers and challenges.
School
- Loughborough Business School
Published in
2024 14th International Conference on Advanced Computer Information Technologies (ACIT)
Pages
230 - 234
Source
2024 14th International Conference on Advanced Computer Information Technologies (ACIT)
Publisher
IEEE
Version
- VoR (Version of Record)
Rights holder
© IEEE
Publisher statement
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Publication date
2024-10-16
Copyright date
2024
ISBN
9798350350043; 9798350350050
ISSN
2770-5218
eISSN
2770-5226
Publisher version
Language
- en