Automatic lip reading by using multimodal visual features

Shohei Takahashi, Jun Ohya

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Abstract

Speech recognition has been researched for a long time, but it performs poorly in noisy environments such as cars and trains, and it offers no benefit to people who are deaf or hard of hearing. Visual information is also important for recognizing speech automatically: people understand speech not only from audio, but also from visual cues such as temporal changes in lip shape. A vision-based speech recognition method could therefore work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information alone, without using any audio information. First, an Active Shape Model (ASM) is used to detect and track the face and lips in a video sequence. Second, shape, optical-flow, and spatial-frequency features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine (SVM) is trained to classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
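The pipeline described in the abstract can be sketched in simplified form. This is an illustrative reconstruction only, not the paper's implementation: the ASM lip detector is replaced by a fixed region of interest, dense optical flow is approximated by a crude temporal-gradient magnitude, spatial frequency is taken from a 2-D FFT of the lip crop, and the feature names, ROI, and SVM parameters are all assumptions. Synthetic "lip clips" stand in for real video.

```python
import numpy as np
from sklearn.svm import SVC

def lip_features(frames, roi=(8, 8, 48, 48)):
    """Per-clip feature vector: motion + spatial-frequency features from each
    consecutive frame pair of the lip region, concatenated chronologically.
    The fixed ROI is a stand-in for the paper's ASM-based lip detection."""
    x, y, w, h = roi
    crops = [np.asarray(f, dtype=np.float32)[y:y + h, x:x + w] for f in frames]
    feats = []
    for prev, cur in zip(crops, crops[1:]):
        motion = np.abs(cur - prev).mean()        # crude optical-flow proxy
        spectrum = np.abs(np.fft.fft2(cur))       # spatial-frequency content
        low_freq = spectrum[:4, :4].mean()        # coarse low-frequency energy
        feats.append([motion, low_freq, cur.mean()])
    return np.concatenate(feats)                  # chronological ordering

# Synthetic clips: two "words" distinguished by how fast the lip region moves.
rng = np.random.default_rng(0)

def make_clip(shift, n_frames=6):
    """A horizontal-gradient image shifted `shift` columns per frame."""
    base = np.tile(np.linspace(0, 255, 64), (64, 1))
    base += rng.normal(0, 5, base.shape)          # per-clip appearance noise
    return [np.roll(base, shift * t, axis=1) for t in range(n_frames)]

# Train an SVM on the chronologically ordered multimodal features.
X = np.array([lip_features(make_clip(s)) for s in (1, 1, 1, 4, 4, 4)])
y = np.array([0, 0, 0, 1, 1, 1])                  # word labels
clf = SVC(kernel="linear").fit(X, y)
```

A held-out clip is classified with `clf.predict(np.array([lip_features(make_clip(4))]))`. In the actual paper the features come from ASM-tracked lip shapes, real optical flow, and spatial frequencies of real lip images rather than these toy substitutes.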

Original language: English
Title of host publication: Proceedings of SPIE-IS and T Electronic Imaging - Intelligent Robots and Computer Vision XXXI
Subtitle of host publication: Algorithms and Techniques
DOI: 10.1117/12.2038375
Publication status: Published - 2014 Mar 17
Event: Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques - San Francisco, CA, United States
Duration: 2014 Feb 4 - 2014 Feb 6

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 9025
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques
Country: United States
City: San Francisco, CA
Period: 14/2/4 - 14/2/6

Keywords

  • Lip-reading
  • active shape model
  • face detection
  • multimodal features
  • support vector machine

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering

Cite this

Takahashi, S., & Ohya, J. (2014). Automatic lip reading by using multimodal visual features. In Proceedings of SPIE-IS and T Electronic Imaging - Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques [902508] (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 9025). https://doi.org/10.1117/12.2038375