
Video assisted speech source separation

conference contribution
posted on 14.12.2009, 12:18 by Wenwu Wang, Darren Cosker, Yulia Hicks, Saeid Sanei, Jonathon Chambers
In this paper we investigate the problem of integrating the complementary audio and visual modalities for speech separation. Rather than using the independence criteria suggested in most blind source separation (BSS) systems, we use visual features from a video signal as additional information to optimize the unmixing matrix. We achieve this by using a statistical model characterizing the nonlinear coherence between audio and visual features as a separation criterion for both instantaneous and convolutive mixtures. We acquire the model by applying the Bayesian framework to the fused feature observations based on a training corpus. We point out several key existing challenges to the success of the system. Experimental results verify the proposed approach, which outperforms the audio-only separation system in a noisy environment, and also provides a solution to the permutation problem.
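To make the signal model concrete, here is a minimal sketch of the instantaneous mixing case the abstract refers to, where observations are x = A s and separation means finding an unmixing matrix W. The paper's audio-visual Bayesian criterion is not reproduced here; the toy signals, the mixing matrix, and the use of the exact inverse in place of a learned W are all illustrative assumptions.

```python
import numpy as np

# Two toy source signals (our own assumption, standing in for speech).
t = np.linspace(0, 1, 1000)
s = np.vstack([
    np.sin(2 * np.pi * 5 * t),            # source 1: sinusoid
    np.sign(np.sin(2 * np.pi * 3 * t)),   # source 2: square wave
])

# Hypothetical 2x2 instantaneous mixing matrix A; observations x = A s.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s

# BSS seeks W such that W A equals a permuted, scaled identity. The paper
# optimizes W via audio-visual coherence; here we simply use the exact
# inverse to illustrate the model, not the method.
W = np.linalg.inv(A)
y = W @ x   # recovered sources

print(np.allclose(y, s))   # → True
```

In a real BSS setting A is unknown, so W can only be recovered up to permutation and scaling of the rows; the abstract's point is that the audio-visual coherence model also resolves the permutation ambiguity.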



  • Mechanical, Electrical and Manufacturing Engineering


WANG, W. ... et al., 2005. Video assisted speech source separation. IN: Proceedings of 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), Philadelphia, Pennsylvania, USA, 18-23 March, Vol.5, pp. 425-428.




VoR (Version of Record)




This is a conference paper [© IEEE]. It is also available at: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE.




