Auditory fovea based speech separation and its application to dialog system

Kazuhiro Nakadai, Hiroshi G. Okuno, Hiroaki Kitano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

Robots, in particular mobile robots, should listen to and recognize speech with their own ears in the real world to communicate smoothly with people. This paper presents an active direction-pass filter (ADPF) that separates sounds originating from a specified direction using a pair of microphones, and reports its application to front-end processing for speech recognition. Since the performance of sound source separation by the ADPF depends on the accuracy of sound source localization, various localization cues, including interaural phase difference (IPD) and interaural intensity difference (IID) for each sub-band, are integrated hierarchically with other visual and auditory processing. The resulting accuracy of auditory localization varies with the relative position of the sound source: resolution directly in front of the robot is much higher than at the periphery, a property analogous to the visual fovea (the high-resolution center of the human eye). To make the best use of this property, the ADPF turns the robot's head toward the sound source by motor control. To recognize the sound streams separated by the ADPF, a Hidden Markov Model (HMM) based automatic speech recognizer is built with multiple acoustic models trained on the output of the ADPF under different conditions. A preliminary dialog system is implemented on an upper-torso humanoid. The experimental results show that it works well even when two speakers speak simultaneously.

Original language: English
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 1320-1325
Number of pages: 6
Volume: 2
Publication status: Published - 2002
Externally published: Yes
Event: 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems - Lausanne
Duration: 2002 Sep 30 - 2002 Oct 4

Other

Other: 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems
City: Lausanne
Period: 02/9/30 - 02/10/4

ASJC Scopus subject areas

  • Control and Systems Engineering


  • Cite this

Nakadai, K., Okuno, H. G., & Kitano, H. (2002). Auditory fovea based speech separation and its application to dialog system. In IEEE International Conference on Intelligent Robots and Systems (Vol. 2, pp. 1320-1325).