Soft missing-feature mask generation for Robot Audition

Toru Takahashi, Kazuhiro Nakadai, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Contribution to journal › Article › peer-review


This paper describes an improvement in automatic speech recognition (ASR) for robot audition, achieved by introducing Missing Feature Theory (MFT) based on soft missing feature masks (MFMs) to realize natural human-robot interaction. In an everyday environment, a robot's microphones capture various sounds besides the user's utterances. Although sound-source separation is an effective way to enhance the user's utterances, it inevitably produces errors due to reflection and reverberation. MFT copes with these errors: first, MFMs are generated based on the reliability of the time-frequency components; then ASR weights the time-frequency components according to the MFMs. We propose a new method that automatically generates soft MFMs, whose values are continuous between 0 and 1 and are computed with a sigmoid function. The proposed MFM generation was implemented for the HRP-2 humanoid robot using HARK, our open-source robot audition software. Preliminary results show that the soft MFM outperformed a hard (binary) MFM in recognizing three simultaneous utterances. In a human-robot interaction task, soft MFMs reduced the minimum angular interval required between two adjacent loudspeakers from 60 degrees to 30 degrees.
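The soft-MFM idea described above can be sketched briefly: each time-frequency component gets a continuous mask value in [0, 1] by passing its reliability estimate through a sigmoid, rather than thresholding it to a binary mask. The sketch below assumes a generic per-bin reliability score; the slope and threshold parameters are illustrative placeholders, not the authors' tuned values from the paper.

```python
import numpy as np

def soft_mask(reliability, slope=10.0, threshold=0.5):
    """Map per-bin reliability scores to a soft missing-feature mask.

    A logistic sigmoid maps reliability to (0, 1): unreliable bins get
    mask values near 0 (down-weighted by the recognizer), reliable bins
    get values near 1. `slope` and `threshold` are illustrative
    parameters, not the values used in the paper.
    """
    reliability = np.asarray(reliability, dtype=float)
    return 1.0 / (1.0 + np.exp(-slope * (reliability - threshold)))

def hard_mask(reliability, threshold=0.5):
    """Binary (hard) MFM for comparison: each bin is all-or-nothing."""
    return (np.asarray(reliability, dtype=float) > threshold).astype(float)

# Example: reliability estimates for three time-frequency bins.
r = np.array([0.1, 0.5, 0.9])
soft = soft_mask(r)   # graded weights, e.g. low / 0.5 / high
hard = hard_mask(r)   # abrupt 0/1 decisions
```

A bin whose reliability sits near the threshold receives a middling soft weight instead of an abrupt 0 or 1, which is the property the paper credits for the soft MFM's advantage over the hard MFM on separation errors of intermediate severity.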

Original language: English
Pages (from-to): 37-47
Number of pages: 11
Issue number: 1
Publication status: Published - 2010 Mar 1
Externally published: Yes


Keywords
  • Automatic Speech Recognition
  • HARK
  • Missing Feature Theory
  • Robot Audition
  • Simultaneous speech recognition
  • Soft mask generation
  • Sound localization
  • Sound source separation

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Developmental Neuroscience
  • Cognitive Neuroscience
  • Artificial Intelligence
  • Behavioral Neuroscience


