Ears of the robot: Three simultaneous speech segregation and recognition using robot-mounted microphones

Naoya Mochiki*, Tetsuji Ogawa, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

A new type of sound source segregation method using robot-mounted microphones, which is free from strict head-related transfer function (HRTF) estimation, has been proposed and successfully applied to the recognition of three simultaneous speech signals. The proposed segregation method exploits the sound intensity differences that arise from the particular arrangement of four directional microphones and from the robot head acting as a sound barrier. The method consists of three-layered signal processing: two-line SAFIA (binary masking based on narrow-band sound intensity comparison), two-line spectral subtraction, and their integration. We performed a 20 K-vocabulary continuous speech recognition test with three speakers talking simultaneously, and achieved more than 70% word error reduction compared with the case without any segregation processing.
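
The following is a minimal Python sketch of the kind of processing the abstract describes: a SAFIA-style binary mask that assigns each narrow-band time-frequency bin to the channel with the higher intensity, followed by a simple spectral subtraction on the surviving bins. It is illustrative only; the channel pairing, STFT settings, and subtraction parameters are assumptions, not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import stft, istft

def safia_with_spectral_subtraction(x_target, x_interf, fs=16000, nperseg=512):
    """Sketch of two-line SAFIA followed by spectral subtraction.

    x_target: signal from the microphone facing the desired speaker (assumed)
    x_interf: signal from the microphone facing the interfering side (assumed)
    """
    _, _, X = stft(x_target, fs=fs, nperseg=nperseg)
    _, _, Y = stft(x_interf, fs=fs, nperseg=nperseg)

    # SAFIA-style binary masking: a bin is kept for the target only if its
    # narrow-band intensity is higher on the target-side channel.
    mask = (np.abs(X) >= np.abs(Y)).astype(float)
    X_masked = X * mask

    # Spectral subtraction on the kept bins: subtract an estimate of the
    # interferer magnitude leaking into the target channel, with a floor.
    alpha = 1.0   # over-subtraction factor (assumed)
    beta = 0.01   # spectral floor (assumed)
    mag = np.maximum(np.abs(X_masked) - alpha * np.abs(Y) * mask,
                     beta * np.abs(X_masked))
    X_clean = mag * np.exp(1j * np.angle(X_masked))

    _, x_out = istft(X_clean, fs=fs, nperseg=nperseg)
    return x_out
```

In the paper's setting this pairwise comparison is applied per microphone pair and the results are integrated across pairs; the sketch above shows only a single two-channel stage.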

Original language: English
Pages (from-to): 1465-1468
Number of pages: 4
Journal: IEICE Transactions on Information and Systems
Volume: E90-D
Issue number: 9
DOIs
Publication status: Published - 2007 Sept

Keywords

  • Robot audition
  • SAFIA
  • Sound source segregation
  • Spectral subtraction
  • Speech recognition

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Artificial Intelligence
