Abstract
Robots should listen to and recognize speech with their own ears, even in noisy environments and amid simultaneous utterances, to attain smooth communication with people in the real world. This paper presents recognition of three simultaneous speech signals based on active audition, which integrates audition with motion. Our robot audition system consists of three modules: a real-time human tracking system, an active direction-pass filter (ADPF), and a speech recognition system using multiple acoustic models. The real-time human tracking system attains robust and accurate sound source localization and tracking through audio-visual integration. Localization resolution is much higher at the front of the robot than at the periphery; we call this phenomenon the "auditory fovea," by analogy with the visual fovea (the high-resolution center of the human eye). Active motions, such as turning to face the sound source, improve localization by making the best use of the auditory fovea. The ADPF attains accurate and fast sound separation using a pair of microphones: it separates sounds originating from the direction specified by the real-time human tracking system. Because separation performance depends on localization accuracy, extraction of a sound from the front is more accurate than extraction of a sound from the periphery. The pass range of the ADPF should therefore be narrower in the front direction than in the periphery; such active pass-range control improves sound separation. The separated speech is recognized by the speech recognition module, which runs multiple acoustic models and integrates their results to output the one with the maximum likelihood. Active motions such as facing a sound source improve speech recognition, because they yield not only better sound extraction but also easier integration of the results using face IDs obtained by face recognition.
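The direction-dependent pass-range control described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names and all numeric constants (a 10° window at the front widening toward 20° at the periphery) are assumptions chosen only to show the idea that the ADPF window narrows where the auditory fovea gives high localization resolution.

```python
import math

def pass_range(azimuth_deg, front_deg=10.0, periphery_deg=20.0):
    """Hypothetical pass-range control for an ADPF-like filter:
    a narrow window near the front (0 deg), widening smoothly toward
    the periphery (+/-90 deg), reflecting the 'auditory fovea'
    (higher localization resolution at the front). All constants
    are illustrative, not taken from the paper."""
    widening = (periphery_deg - front_deg) * abs(math.sin(math.radians(azimuth_deg)))
    return front_deg + widening

def in_pass_band(estimated_deg, target_deg):
    """Keep a sound component only if its estimated direction falls
    inside the (direction-dependent) pass range around the target
    direction supplied by the tracking system."""
    return abs(estimated_deg - target_deg) <= pass_range(target_deg) / 2.0
```

For example, a component estimated at 3° is kept when extracting a frontal (0°) source, while the window for a source at 90° is twice as wide to tolerate the coarser peripheral localization.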
The robot audition system, improved by active audition, is implemented on an upper-torso humanoid. The system attains localization, separation, and recognition of three simultaneous speech signals, and the results prove the efficacy of active audition.
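The integration of multiple recognition results by maximum likelihood can be sketched as below. This is an assumed, simplified data layout (model name, word hypothesis, log-likelihood score), not the paper's actual interface; it only illustrates selecting the highest-scoring hypothesis among recognizers run with different acoustic models.

```python
def integrate_results(hypotheses):
    """Pick the hypothesis with the maximum likelihood among results
    from recognizers using different acoustic models.

    hypotheses: list of (acoustic_model_name, word, log_likelihood)
    tuples; the field layout is illustrative, not from the paper."""
    best = max(hypotheses, key=lambda h: h[2])
    return best[1]
```

Usage: `integrate_results([("clean", "hello", -120.0), ("noisy", "fellow", -150.0)])` returns `"hello"`, the word recognized with the higher log-likelihood.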
Original language | English |
---|---|
Title of host publication | Proceedings - IEEE International Conference on Robotics and Automation |
Pages | 398-405 |
Number of pages | 8 |
Volume | 1 |
Publication status | Published - 2003 |
Externally published | Yes |
Event | 2003 IEEE International Conference on Robotics and Automation - Taipei, Taiwan, Province of China (2003 Sept 14 → 2003 Sept 19) |
ASJC Scopus subject areas
- Software
- Control and Systems Engineering