Soft missing-feature mask generation for Robot Audition

Toru Takahashi, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Article (peer-reviewed)


This paper describes an improvement in automatic speech recognition (ASR) for robot audition that introduces Missing Feature Theory (MFT) based on soft missing feature masks (MFMs) to realize natural human-robot interaction. In an everyday environment, a robot's microphones capture various sounds besides the user's utterances. Although sound-source separation is an effective way to enhance the user's utterances, it inevitably produces errors due to reflection and reverberation. MFT is able to cope with these errors. First, MFMs are generated based on the reliability of time-frequency components. Then the ASR weights the time-frequency components according to the MFMs. We propose a new method to automatically generate soft MFMs, consisting of continuous values from 0 to 1 based on a sigmoid function. The proposed MFM generation was implemented for HRP-2 using HARK, our open-source robot audition software. Preliminary results show that the soft MFM outperformed a hard (binary) MFM in recognizing three simultaneous utterances. In a human-robot interaction task, the minimum interval between two adjacent loudspeakers was reduced from 60 degrees to 30 degrees by using soft MFMs.
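The abstract does not give the mask function itself, but the kind of soft MFM it describes, a sigmoid mapping a per-component reliability score to a continuous weight in (0, 1), can be sketched as below. The reliability score, threshold, and slope parameters here are illustrative assumptions, not the authors' exact formulation in HARK.

```python
import math

def soft_mask(reliability, threshold=0.5, slope=10.0):
    """Sigmoid soft MFM: maps a time-frequency reliability score to (0, 1).
    `threshold` and `slope` are hypothetical tuning parameters."""
    return 1.0 / (1.0 + math.exp(-slope * (reliability - threshold)))

def hard_mask(reliability, threshold=0.5):
    """Binary (hard) MFM for comparison: each component is kept or discarded."""
    return 1.0 if reliability > threshold else 0.0

# A component near the threshold gets a graded weight with the soft mask,
# instead of the all-or-nothing decision of the hard mask.
print(round(soft_mask(0.55), 3))  # -> 0.622
print(hard_mask(0.55))            # -> 1.0
```

The intuition matches the paper's result: separation errors make reliability estimates noisy near the decision boundary, and a graded weight degrades ASR scores gracefully where a binary mask would flip between fully trusting and fully discarding a component.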

Publication status: Published - 1 March 2010

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Developmental Neuroscience
  • Cognitive Neuroscience
  • Artificial Intelligence
  • Behavioral Neuroscience

