Automatic speech recognition improved by two-layered audio-visual integration for robot audition

Takami Yoshida, Kazuhiro Nakadai, Hiroshi G. Okuno

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

28 Citations (Scopus)

Abstract

Robust, high-performance automatic speech recognition (ASR) is essential for robot audition, because people usually communicate with each other by speaking. This paper presents a two-layered audio-visual (AV) integration that makes ASR more robust against speaker distance, interfering talkers, and environmental noise. It consists of Audio-Visual Voice Activity Detection (AV-VAD) and Audio-Visual Speech Recognition (AVSR). The AV-VAD layer integrates several AV features with a Bayesian network to robustly detect voice activity, i.e., the speaker's utterance duration, since VAD performance strongly affects that of ASR. The AVSR layer integrates reliability estimates of the acoustic and visual features using a method based on missing-feature theory: audio features are weighted more heavily in a clean acoustic environment, while visual features are weighted more heavily in a noisy one. This integration thus copes with dynamically changing acoustic and visual environments. The proposed AV-integrated ASR is implemented on HARK, our open-source robot audition software, with an 8-ch microphone array. Empirical results show that our system improves ASR performance by 9.9 and 16.7 points with and without microphone array processing, respectively, and also improves robustness under several auditory/visual noise conditions.
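The reliability-weighted stream integration in the AVSR layer can be illustrated with a minimal sketch. The function name and the simple linear weighting of log-likelihoods below are illustrative assumptions, not the paper's exact missing-feature-theory formulation:

```python
def combine_av_loglik(audio_loglik: float, visual_loglik: float,
                      audio_reliability: float) -> float:
    """Combine per-stream log-likelihoods with a reliability weight.

    audio_reliability in [0, 1]: near 1 in clean acoustic conditions
    (audio dominates), near 0 in noisy conditions (vision dominates).
    """
    w = audio_reliability
    return w * audio_loglik + (1.0 - w) * visual_loglik

# Same stream scores, different acoustic conditions:
clean = combine_av_loglik(-2.0, -5.0, audio_reliability=0.9)  # audio trusted
noisy = combine_av_loglik(-2.0, -5.0, audio_reliability=0.2)  # vision trusted
```

Under this weighting, the combined score tracks whichever stream is currently reliable, which is the behavior the paper attributes to its AVSR layer in dynamically changing environments.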

Original language: English
Title of host publication: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09
Pages: 604-609
Number of pages: 6
DOIs: 10.1109/ICHR.2009.5379586
Publication status: Published - 2009
Externally published: Yes
Event: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09 - Paris
Duration: 2009 Dec 7 - 2009 Dec 10



ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Yoshida, T., Nakadai, K., & Okuno, H. G. (2009). Automatic speech recognition improved by two-layered audio-visual integration for robot audition. In 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09 (pp. 604-609). [5379586] https://doi.org/10.1109/ICHR.2009.5379586