This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies, and prosodic features are extracted. For the visual part, two strategies are considered. First, geometric relations between facial landmarks, i.e., distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, and a convolutional neural network is trained on these key-frames to visually discriminate between the emotions. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space, which is learned for final emotion label prediction in a late-fusion/stacking fashion. Experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements of the proposed system over current alternatives, establishing the current state-of-the-art on all three databases.
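The late-fusion/stacking step can be pictured as follows: each per-modality classifier emits a vector of class confidences, these vectors are concatenated into a new feature space, and a second-level classifier is trained on that space to predict the final emotion label. The Python/scikit-learn sketch below is only illustrative of this structure; the base models, feature dimensions, and the helper name stack_confidences are assumptions, not the authors' exact configuration.

import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def stack_confidences(modality_features, labels, base_models):
    # Build the stacked feature space from per-modality confidence outputs.
    # Out-of-fold probabilities keep the meta-classifier from seeing
    # predictions made on data the base model was trained on.
    blocks = []
    for X, model in zip(modality_features, base_models):
        probs = cross_val_predict(model, X, labels, cv=5, method="predict_proba")
        blocks.append(probs)
    return np.hstack(blocks)

# Toy data: 3 modalities (e.g. audio features, geometric face features,
# CNN key-frame scores), 6 emotion classes, 200 samples.
rng = np.random.default_rng(0)
y = rng.integers(0, 6, size=200)
modalities = [rng.normal(size=(200, d)) for d in (39, 40, 6)]
base = [SVC(probability=True), RandomForestClassifier(), RandomForestClassifier()]

stacked = stack_confidences(modalities, y, base)
meta = LogisticRegression(max_iter=1000).fit(stacked, y)  # final emotion predictor

The specific base learners and the final fusion classifier used in the paper differ; the sketch only shows how the stacked confidence space is formed and learned.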
Funding
This work has been partially supported by the Estonian Research Grant (PUT638), the Spanish projects TIN2013-43478-P and TIN2016-74946-P, the European Commission Horizon 2020 project SEE.4C under call H2020-ICT-2015, the Estonian Centre of Excellence in IT (EXCITE) funded by the European Regional Development Fund, and the European Network on Integrating Vision and Language (iV&L Net) ICT COST Action IC1307.
School
Loughborough University London
Published in
IEEE Transactions on Affective Computing
Citation
NOROOZI, F. ... et al., 2017. Audio-visual emotion recognition in video clips. IEEE Transactions on Affective Computing, 10(1), pp. 60-75.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/