Facial expression recognition has been intensively studied for decades, notably by the psychology community and, more recently, the pattern recognition community. A more challenging problem, and the subject of more recent research, is recognizing the subtle emotions exhibited by so-called micro-expressions. Recognizing a micro-expression is substantially harder than conventional expression recognition because micro-expressions last only a fraction of a second and involve minute spatial changes. To date, work in this field remains at a nascent stage, with only a few existing micro-expression databases and methods. In this article, we propose a new micro-expression recognition approach based on the Eulerian motion magnification technique, which reveals hidden information and accentuates the subtle changes in micro-expression motion. We validate our proposal on the recently proposed CASME II dataset, in comparison with baseline and state-of-the-art methods, and achieve a recognition accuracy of up to 75.30% under a leave-one-out cross-validation protocol. Extensive experiments on the various factors at play further demonstrate the effectiveness of the proposed approach.
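As a rough illustration of the Eulerian magnification idea referenced in the abstract, the sketch below temporally band-pass filters each pixel's intensity and adds an amplified copy of the filtered signal back to the video. It is a minimal sketch, not the authors' pipeline: it omits the spatial (Laplacian/Gaussian) pyramid decomposition used in full implementations, and the function name and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eulerian_magnify(frames, fs, low, high, alpha):
    """Amplify subtle temporal intensity changes in a video clip.

    frames: array of shape (T, H, W), grayscale frames as floats
    fs:     frame rate in Hz
    low, high: temporal pass-band in Hz (illustrative values below)
    alpha:  magnification factor
    """
    # Temporal band-pass filter applied independently to every pixel
    # (a full implementation would first build a spatial pyramid).
    b, a = butter(1, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, frames, axis=0)
    # Amplify the filtered signal and add it back to the original frames.
    return frames + alpha * filtered

# Example usage with synthetic data (all numbers are hypothetical).
video = np.random.rand(200, 64, 64)  # 200 frames of 64x64 pixels
magnified = eulerian_magnify(video, fs=200.0, low=0.4, high=3.0, alpha=10.0)
```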
Funding
This work is supported by the TM Grant under projects UbeAware and 2beAware, and by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LQ14F020006).
History
School
Loughborough University London
Published in
Multimedia Tools and Applications
Citation
WANG, Y. ... et al., 2016. Effective recognition of facial micro-expressions with video motion magnification. Multimedia Tools and Applications, 76 (20), pp. 21665–21690.
This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/
Acceptance date
2016-10-16
Publication date
2016
Notes
The final publication is available at link.springer.com via http://dx.doi.org/10.1007/s11042-016-4079-6.