Robot recognizes three simultaneous speech by active audition

Kazuhiro Nakadai, Hiroshi G. Okuno, Hiroaki Kitano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Citations (Scopus)

Abstract

To interact smoothly with people in the real world, robots should listen to and recognize speech with their own ears, even in noisy environments with simultaneous speakers. This paper presents recognition of three simultaneous speech streams based on active audition, which integrates audition with motion. Our robot audition system consists of three modules: a real-time human tracking system, an active direction-pass filter (ADPF), and a speech recognition system using multiple acoustic models. The real-time human tracking system achieves robust and accurate sound source localization and tracking through audio-visual integration. Localization resolution is much higher toward the front of the robot than toward the periphery; we call this phenomenon the "auditory fovea" by analogy with the visual fovea, the region of highest resolution at the center of the human eye. Active motions, such as turning toward a sound source, improve localization by making the best use of the auditory fovea. The ADPF achieves accurate and fast sound separation using a pair of microphones, extracting sounds that originate from the direction specified by the real-time human tracking system. Because separation performance depends on localization accuracy, extraction of a sound from the front is more accurate than extraction from the periphery; the pass range of the ADPF should therefore be narrower in the front direction than in the periphery. Such active pass-range control improves sound separation. Each separated speech stream is recognized by the speech recognition module, which runs multiple acoustic models and integrates their results to output the hypothesis with the maximum likelihood. Active motions such as turning toward a sound source also improve speech recognition, both by improving sound extraction and by simplifying the integration of recognition results using speaker identity obtained from face recognition.
The robot audition system improved by active audition is implemented on an upper-torso humanoid. The system attains localization, separation, and recognition of three simultaneous speech streams, and the results demonstrate the effectiveness of active audition.
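The active pass-range control described in the abstract can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the function names and the constants (a 10-degree base half-width that widens linearly toward the periphery) are assumptions chosen only to show the idea that the direction-pass band is narrow at the front, where the auditory fovea makes localization accurate, and wider at the periphery.

```python
# Hypothetical sketch of direction-dependent pass-range control.
# Azimuth 0 degrees is the robot's front; the pass band widens as the
# target direction moves toward the periphery, mirroring the lower
# localization resolution there ("auditory fovea" effect).
# All constants are illustrative, not taken from the paper.

def pass_range(azimuth_deg, base_deg=10.0, slope=0.2):
    """Half-width (degrees) of the direction-pass band for a target azimuth."""
    return base_deg + slope * abs(azimuth_deg)

def direction_pass(source_azimuths_deg, target_deg):
    """Keep only sources whose estimated direction falls inside the
    pass band centred on the target direction given by the tracker."""
    half = pass_range(target_deg)
    return [s for s in source_azimuths_deg if abs(s - target_deg) <= half]

# A frontal target uses a narrow band; a peripheral target a wider one.
print(direction_pass([0, 30, 90], target_deg=0))   # narrow band around 0
print(direction_pass([0, 30, 90], target_deg=90))  # wider band around 90
```

In the actual system the filtering operates on frequency-domain subbands selected by interaural phase and intensity differences from the microphone pair; the sketch above only captures the direction-dependent widening of the pass range.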

Original language: English
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Pages: 398-405
Number of pages: 8
Volume: 1
Publication status: Published - 2003
Externally published: Yes
Event: 2003 IEEE International Conference on Robotics and Automation - Taipei, Taiwan, Province of China
Duration: 2003 Sep 14 – 2003 Sep 19



ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering

Cite this

Nakadai, K., Okuno, H. G., & Kitano, H. (2003). Robot recognizes three simultaneous speech by active audition. In Proceedings - IEEE International Conference on Robotics and Automation (Vol. 1, pp. 398-405).



Scopus record: http://www.scopus.com/inward/record.url?scp=0345307744&partnerID=8YFLogxK
Cited by: http://www.scopus.com/inward/citedby.url?scp=0345307744&partnerID=8YFLogxK
