Exploring XAI for the arts: explaining latent space in generative music
Explainable AI has the potential to support more interactive and fluid co-creative AI systems that can creatively collaborate with people. To do this, creative AI models need to be amenable to debugging by offering eXplainable AI (XAI) features which are inspectable, understandable, and modifiable. However, there is currently very little XAI for the arts. In this work, we demonstrate how a latent variable model for music generation can be made more explainable; specifically, we extend MeasureVAE, which generates measures of music. We increase the explainability of the model by: i) using latent space regularisation to force specific dimensions of the latent space to map to meaningful musical attributes, ii) providing a user interface feedback loop that allows people to adjust dimensions of the latent space and observe the results of these changes in real time, and iii) providing a visualisation of the musical attributes in the latent space to help people understand and predict the effect of changes to latent space dimensions. We suggest that in doing so we bridge the gap between the latent space and the generated musical outcomes in a meaningful way that makes the model and its outputs more explainable and more debuggable.
The code repository can be found at: https://github.com/bbanar2/Exploring_XAI_in_GenMus_via_LSR
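As a concrete illustration of point (i), the sketch below shows one common way to formulate a latent space regularisation term that ties a single latent dimension to a musical attribute (e.g. note density): penalising disagreement between how a batch of examples is ordered along that latent dimension and how it is ordered by the attribute. This is a minimal PyTorch sketch of the general attribute-regularisation idea, not the paper's exact implementation; the function name and the `delta` hyperparameter are illustrative assumptions (see the linked repository for the actual code).

```python
import torch
import torch.nn.functional as F

def attribute_regularisation_loss(z_dim, attr, delta=1.0):
    """Encourage one latent dimension to vary monotonically with a musical
    attribute, so that increasing the dimension increases the attribute.

    z_dim: (batch,) values of the regularised latent dimension
    attr:  (batch,) the attribute computed from each input measure
    """
    # Pairwise differences within the batch, for the latent dimension ...
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)
    # ... and for the attribute.
    da = attr.unsqueeze(0) - attr.unsqueeze(1)
    # tanh(delta * dz) is a smooth surrogate for sign(dz); matching it to
    # sign(da) pushes the latent dimension to order examples the same way
    # the attribute does.
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))
```

In training, a term like this would typically be added to the usual VAE objective (reconstruction loss plus KL divergence), once per regularised dimension and scaled by an assumed weighting hyperparameter, e.g. `loss = recon + beta * kl + gamma * attribute_regularisation_loss(z[:, d], attr_d)`.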
Funding
Queen Mary University of London
UK Research and Innovation [grant number EP/S022694/1]
UKRI Centre for Doctoral Training in Artificial Intelligence and Music
China Scholarship Council
History
School
- Loughborough University, London
Published in
1st Workshop on eXplainable AI approaches for debugging and diagnosis (XAI4Debugging@NeurIPS2021)
Source
1st Workshop on eXplainable AI Approaches for Debugging and Diagnosis
Publisher
NeurIPS
Version
- VoR (Version of Record)
Rights holder
© the authors
Publication date
2021-10-17
Copyright date
2021
Publisher version
Language
- en