Abstract
Robots should listen to and recognize speech with their own ears, even under noisy environments and simultaneous utterances, to attain smooth communication with people in the real world. This paper presents recognition of three simultaneous speech sources based on active audition, which integrates audition with motion. Our robot audition system consists of three modules: a real-time human tracking system, an active direction-pass filter (ADPF), and a speech recognition system using multiple acoustic models. The real-time human tracking system attains robust and accurate sound source localization and tracking through audio-visual integration. Localization resolution is much higher at the front of the robot than at the periphery; we call this phenomenon the "auditory fovea" by analogy with the visual fovea (the high-resolution center of the human eye). Active motions, such as turning toward the sound source, improve localization by making the best use of the auditory fovea. The ADPF attains accurate and fast sound separation using a pair of microphones: it extracts sounds originating from the direction specified by the real-time human tracking system. Because separation performance depends on localization accuracy, extraction of sound from the front is more accurate than extraction from the periphery. The pass range of the ADPF should therefore be narrower at the front than at the periphery; such active pass-range control improves sound separation. Each separated speech stream is recognized by the speech recognition module, which runs multiple acoustic models and integrates their results to output the one with the maximum likelihood. Active motions such as turning toward a sound source also improve speech recognition, because they yield not only better sound extraction but also easier integration of recognition results via face IDs obtained by face recognition.
The robot audition system, improved by active audition, is implemented on an upper-torso humanoid. The system attains localization, separation, and recognition of three simultaneous speech sources, and the results prove the effectiveness of active audition.
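To illustrate the idea behind a direction-pass filter with a two-microphone pair, the following is a minimal sketch, not the authors' implementation. It assumes a simple far-field model in which the interaural phase difference (IPD) of each spectral bin is converted to a direction estimate, and only bins consistent with the target direction (within a pass range) are kept. All parameter values (microphone spacing, FFT size, speed of sound) are illustrative assumptions.

```python
import numpy as np

def direction_pass_filter(left, right, fs, target_deg, pass_range_deg,
                          mic_distance=0.18, c=343.0, nfft=512):
    """Illustrative two-microphone direction-pass filter (not the paper's ADPF).

    Keeps only the spectral bins whose interaural phase difference (IPD)
    is consistent with a source direction within
    [target_deg - pass_range_deg, target_deg + pass_range_deg].
    """
    hop = nfft // 2
    win = np.hanning(nfft)                         # 50% overlap-add with Hann
    n_frames = (len(left) - nfft) // hop + 1
    out = np.zeros(len(left))
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    for i in range(n_frames):
        s = i * hop
        L = np.fft.rfft(win * left[s:s + nfft])
        R = np.fft.rfft(win * right[s:s + nfft])
        ipd = np.angle(L * np.conj(R))             # per-bin phase difference
        # Far-field model: ipd = 2*pi*f*d*sin(theta)/c, solved for theta
        with np.errstate(divide='ignore', invalid='ignore'):
            sin_theta = ipd * c / (2 * np.pi * freqs * mic_distance)
        theta = np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
        theta[0] = 0.0                             # DC bin carries no phase cue
        mask = np.abs(theta - target_deg) <= pass_range_deg
        out[s:s + nfft] += np.fft.irfft(L * mask, nfft)
    return out
```

Widening `pass_range_deg` for peripheral target directions, and narrowing it toward the front, corresponds to the active pass-range control described above: a narrow pass range is only safe where localization is accurate.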
Original language | English |
---|---|
Title of host publication | Proceedings - IEEE International Conference on Robotics and Automation |
Pages | 398-405 |
Number of pages | 8 |
Volume | 1 |
Publication status | Published - 2003 |
Externally published | Yes |
Event | 2003 IEEE International Conference on Robotics and Automation - Taipei, Taiwan, Province of China; Duration: 14 Sep 2003 → 19 Sep 2003 |
Other
Other | 2003 IEEE International Conference on Robotics and Automation |
---|---|
Country/Territory | Taiwan, Province of China |
City | Taipei |
Period | 03/9/14 → 03/9/19 |
ASJC Scopus subject areas
- Software
- Control and Systems Engineering