Abstract
Audio-visual (AV) integration is one of the key ideas for improving perception in noisy real-world environments. This paper describes automatic speech recognition (ASR) based on AV integration to improve human-robot interaction. We developed an AV-integrated ASR system with two AV integration layers: voice activity detection (VAD) and ASR. However, the system had three difficulties: 1) VAD and ASR had been studied separately although the two processes are mutually dependent, 2) both VAD and ASR assumed that high-resolution images are available although this assumption never holds in the real world, and 3) the weight between the audio and visual streams was fixed although their reliabilities change with the environment. To solve these problems, we propose a new VAD algorithm that takes ASR characteristics into account and a linear-regression-based method for estimating the optimal stream weight. We evaluated the algorithms on acoustically and/or visually contaminated data. Preliminary results show that the robustness of VAD is improved even when the image resolution is low, and that AVSR using the estimated stream weight demonstrates the effectiveness of AV integration.
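The abstract does not give the estimation formula, but a common way to realize linear-regression-based stream weighting in multi-stream AVSR is to map reliability features to a weight λ and combine the audio and visual log-likelihoods as λ·log P_audio + (1−λ)·log P_visual. The sketch below illustrates that idea only; the feature set (SNR, face-image width), the development data, and all variable names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical development data: reliability features (audio SNR in dB,
# face-image width in pixels) paired with the stream weight that gave the
# best recognition accuracy under that condition. Values are placeholders.
dev_features = np.array([[20.0, 640], [10.0, 320], [5.0, 160], [0.0, 80]])
dev_best_weights = np.array([0.9, 0.7, 0.5, 0.3])

# Linear regression from reliability features to the audio stream weight.
regressor = LinearRegression().fit(dev_features, dev_best_weights)

def combined_log_likelihood(log_p_audio, log_p_visual, features):
    """Weighted log-likelihood combination of the audio and visual streams."""
    lam = regressor.predict(np.asarray(features, dtype=float).reshape(1, -1))[0]
    lam = float(np.clip(lam, 0.0, 1.0))  # keep the weight in [0, 1]
    return lam * log_p_audio + (1.0 - lam) * log_p_visual

# Example: score a hypothesis for an utterance recorded at 10 dB SNR
# with 320-pixel-wide face images.
print(combined_log_likelihood(-120.5, -98.2, [10.0, 320]))
```

In this formulation, the regression lets the weight shift toward the audio stream when the acoustic conditions are good and toward the visual stream when they degrade, which is the behavior the abstract attributes to adaptive stream weighting.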
| Original language | English |
|---|---|
| Title of host publication | IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings |
| Pages | 988-993 |
| Number of pages | 6 |
| DOI | |
| Publication status | Published - 2010 |
| Externally published | Yes |
| Event | 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Taipei; Duration: 18 Oct 2010 → 22 Oct 2010 |
Other

| Other | 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 |
|---|---|
| City | Taipei |
| Period | 18 Oct 2010 → 22 Oct 2010 |
ASJC Scopus subject areas

- Artificial Intelligence
- Human-Computer Interaction
- Control and Systems Engineering