zamri_mulvaney_jvci.pdf (3.85 MB)

Geometrical-based lip-reading using template probabilistic multi-dimension dynamic time warping

journal contribution
posted on 07.08.2015 by M.Z. Ibrahim, David Mulvaney
By identifying lip movements and characterizing their associations with speech sounds, the performance of speech recognition systems can be improved, particularly when operating in noisy environments. In this paper, we present a geometrical-based automatic lip-reading system that extracts the lip region from images using conventional techniques, but extracts the contour itself using a novel combination of border-following and convex-hull approaches. Classification is carried out using an enhanced dynamic time warping technique able to operate in multiple dimensions, together with a template probability technique that compensates for differences in the way words are uttered in the training set. The performance of the new system has been assessed on recognition of the English digits 0 to 9 as available in the CUAVE database. The experimental results obtained from the new approach compare favorably with those of existing lip-reading approaches, achieving a word recognition accuracy of up to 71%, with the visual information obtained from estimates of lip height, lip width and their ratio.
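The classification step rests on dynamic time warping extended to multi-dimensional feature sequences. The following is a minimal illustrative sketch of that idea, not the authors' code: it aligns two sequences of per-frame feature vectors (e.g. lip height, lip width and their ratio) under the standard DTW recurrence, and omits the template-probability weighting described in the paper.

```python
# Illustrative multi-dimensional DTW distance (not the paper's implementation).
# Each sequence is a list of per-frame feature vectors, e.g.
# [lip_height, lip_width, height/width ratio].
import math

def mdtw(a, b):
    """Return the DTW alignment cost between feature sequences a and b."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = minimal accumulated cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # local distance between two multi-dimensional frames
            d = math.dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

In a template-based recognizer, an utterance would be compared against the stored templates for each digit and assigned the label of the template with the smallest alignment cost.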

History

School

  • Mechanical, Electrical and Manufacturing Engineering

Published in

JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION

Volume

30

Pages

219 - 233 (15)

Citation

IBRAHIM, M.Z. and MULVANEY, D.J., 2015. Geometrical-based lip-reading using template probabilistic multi-dimension dynamic time warping. Journal of Visual Communication and Image Representation, 30, pp. 219-233.

Publisher

© Elsevier

Version

NA (Not Applicable or Unknown)

Publisher statement

This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/

Publication date

2015

Notes

This paper is in closed access.

ISSN

1047-3203

Language

en
