Multistream sparse representation features for noise robust audio-visual speech recognition

Peng Shen, Satoshi Tamura, Satoru Hayamizu

Research output: Article, peer-reviewed

4 citations (Scopus)


In this paper, we propose exemplar-based sparse representation features for noise-robust audio-visual speech recognition. First, we introduce the sparse representation technique and describe how noise robustness can be achieved by using sparse representation for noise reduction. We then propose feature fusion methods that combine audio-visual features with the sparse representation. Our work provides new insight into two crucial issues in automatic speech recognition: noise reduction and robust audio-visual features. For the first issue, we describe a noise reduction method in which speech and noise are mapped into different subspaces by the sparse representation so that the noise can be suppressed. The proposed method can be applied not only to audio noise reduction but also to visual noise reduction for several types of noise. For the second issue, we investigate two feature fusion methods, late feature fusion and the joint sparsity model, to compute audio-visual sparse representation features that improve the accuracy of audio-visual speech recognition. The proposed method thus also contributes to feature fusion in audio-visual speech recognition systems. Finally, to evaluate the new sparse representation features, a database for audio-visual speech recognition is used in this research. We show the effectiveness of the proposed noise reduction in both the audio and visual cases for several types of noise, and the effectiveness of audio-visual feature determination by the joint sparsity model in comparison with the late feature fusion method and traditional methods.
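The exemplar-based idea in the abstract, mapping an observation onto separate speech and noise subspaces and keeping only the speech part, can be sketched as non-negative sparse coding over a concatenated exemplar dictionary. This is a minimal illustrative sketch, not the authors' exact algorithm: the dictionaries, the multiplicative-update solver, and all function names here are assumptions for illustration, and the data would normally be non-negative magnitude spectra.

```python
import numpy as np

def sparse_activations(A, y, n_iter=200, sparsity=0.01, eps=1e-12):
    """Non-negative sparse coding via multiplicative updates (a common
    solver in exemplar-based enhancement; assumed here, not from the paper).

    Approximately solves  min_h ||y - A h||^2 + sparsity * ||h||_1,  h >= 0,
    where A (d x k) holds exemplars as columns and y (d,) is the observation.
    Requires A and y to be non-negative (e.g. magnitude spectra).
    """
    h = np.full(A.shape[1], 1.0 / A.shape[1])  # uniform positive start
    for _ in range(n_iter):
        num = A.T @ y
        den = A.T @ (A @ h) + sparsity + eps
        h *= num / den  # multiplicative update keeps h non-negative
    return h

def denoise(speech_dict, noise_dict, y, **kw):
    """Decompose y over [speech | noise] exemplars, return the speech part."""
    A = np.hstack([speech_dict, noise_dict])
    h = sparse_activations(A, y, **kw)
    k = speech_dict.shape[1]
    return speech_dict @ h[:k]  # reconstruction from the speech subspace only
```

Usage: with a few fixed speech and noise exemplars, the reconstruction from the speech activations alone should lie closer to the clean signal than the noisy observation does.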

Journal: Acoustical Science and Technology
Publication status: Published - 2014

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
