Abstract
Robot audition requires robust, high-performance automatic speech recognition (ASR), because speech is the primary way people communicate. This paper presents a two-layered audio-visual (AV) integration that makes ASR more robust against the speaker's distance, interfering talkers, and environmental noise. It consists of Audio-Visual Voice Activity Detection (AV-VAD) and Audio-Visual Speech Recognition (AVSR). The AV-VAD layer integrates several AV features with a Bayesian network to robustly detect voice activity, i.e., the duration of the speaker's utterance, since VAD performance strongly affects ASR performance. The AVSR layer integrates acoustic and visual features according to their estimated reliabilities, using a method based on missing-feature theory: audio features are weighted more heavily in a clean acoustic environment, while visual features are weighted more heavily in a noisy one. This integration lets AVSR cope with dynamically changing acoustic and visual conditions. The proposed AV-integrated ASR is implemented on HARK, our open-source robot audition software, with an 8-ch microphone array. Empirical results show that our system improves ASR accuracy by 9.9 and 16.7 points with and without microphone-array processing, respectively, and improves robustness under several auditory/visual noise conditions.
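The abstract does not give implementation details, but the AVSR layer's reliability-dependent weighting follows the general multi-stream integration pattern. Below is a minimal sketch of that idea in Python, assuming a hypothetical SNR-based reliability estimate and the common exponent-weighted (log-linear) combination of per-stream likelihoods; the function names and the weight mapping are illustrative, not the authors' actual missing-feature-theory method.

```python
import numpy as np

def stream_weight_from_snr(snr_db, lo=-5.0, hi=20.0):
    """Map an SNR estimate (dB) to an audio stream weight in [0, 1].

    Illustrative linear mapping (an assumption, not from the paper):
    clean audio (high SNR) -> weight near 1; noisy audio (low SNR)
    -> weight near 0, shifting trust to the visual stream.
    """
    return float(np.clip((snr_db - lo) / (hi - lo), 0.0, 1.0))

def av_log_likelihood(audio_ll, visual_ll, snr_db):
    """Combine per-class audio and visual log-likelihoods with a
    reliability-dependent exponent weight (multi-stream integration).

    audio_ll, visual_ll: arrays of log p(x_audio | class) and
    log p(x_visual | class) for each candidate class.
    """
    w = stream_weight_from_snr(snr_db)
    return w * np.asarray(audio_ll) + (1.0 - w) * np.asarray(visual_ll)

# Example: three candidate words. The audio is noisy (SNR ~ 0 dB),
# so the visual stream dominates and the combined score picks class 1.
audio_ll = np.log([0.5, 0.3, 0.2])
visual_ll = np.log([0.2, 0.7, 0.1])
print(av_log_likelihood(audio_ll, visual_ll, snr_db=0.0).argmax())  # -> 1
```

The design point the abstract describes is precisely this trade-off: as the acoustic environment degrades, the integration automatically leans on the visual features instead of failing with the audio.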
Original language | English |
---|---|
Title of host publication | 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09 |
Pages | 604-609 |
Number of pages | 6 |
DOIs | |
Publication status | Published - 2009 |
Externally published | Yes |
Event | 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, Paris; Duration: 2009 Dec 7 → 2009 Dec 10 |
Other
Other | 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09 |
---|---|
City | Paris |
Period | 2009/12/7 → 2009/12/10 |
ASJC Scopus subject areas
- Computer Science (all)