Deep long short-term memory adaptive beamforming networks for multichannel robust speech recognition

Zhong Meng, Shinji Watanabe, John R. Hershey, Hakan Erdogan

Research output: Contribution to journal › Article › peer-review

Abstract

Far-field speech recognition in noisy and reverberant conditions remains a challenging problem despite recent deep learning breakthroughs. This problem is commonly addressed by acquiring a speech signal from multiple microphones and performing beamforming over them. In this paper, we propose to use a recurrent neural network with long short-term memory (LSTM) architecture to adaptively estimate real-time beamforming filter coefficients to cope with non-stationary environmental noise and the dynamic nature of source and microphone positions, which result in a set of time-varying room impulse responses. The LSTM adaptive beamformer is jointly trained with a deep LSTM acoustic model to predict senone labels. Further, we use hidden units in the deep LSTM acoustic model to assist in predicting the beamforming filter coefficients. The proposed system achieves a 7.97% absolute gain over baseline systems with no beamforming on the CHiME-3 real evaluation set.
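To make the described architecture concrete, the following is a minimal sketch in PyTorch of an LSTM that emits per-frame beamforming filter weights, followed by a deep LSTM acoustic model that predicts senone posteriors from the beamformed signal. It assumes a frequency-domain filter-and-sum formulation with complex per-channel weights; all module names, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' released code, and the paper's feedback connection from acoustic-model hidden units to the beamformer is omitted for brevity.

```python
# Minimal sketch (assumptions: PyTorch, frequency-domain filter-and-sum,
# per-frame complex filter weights; names and shapes are illustrative).
import torch
import torch.nn as nn

class LSTMAdaptiveBeamformer(nn.Module):
    """Predicts per-frame, per-channel complex filter weights from
    multichannel log-magnitude features, then filter-and-sums the STFT."""
    def __init__(self, num_channels: int, num_freqs: int, hidden: int = 320):
        super().__init__()
        self.C, self.F = num_channels, num_freqs
        self.lstm = nn.LSTM(num_channels * num_freqs, hidden, batch_first=True)
        # 2 * C * F outputs: real and imaginary parts of each filter weight.
        self.proj = nn.Linear(hidden, 2 * num_channels * num_freqs)

    def forward(self, stft: torch.Tensor) -> torch.Tensor:
        # stft: (batch, channels, time, freq), complex-valued
        B, C, T, F = stft.shape
        feats = stft.abs().clamp_min(1e-8).log()           # log-magnitudes
        feats = feats.permute(0, 2, 1, 3).reshape(B, T, C * F)
        h, _ = self.lstm(feats)                            # (B, T, hidden)
        w = self.proj(h).view(B, T, 2, C, F)
        w = torch.complex(w[:, :, 0], w[:, :, 1])          # (B, T, C, F)
        # Filter-and-sum: weighted combination across channels per frame.
        return (w.permute(0, 2, 1, 3) * stft).sum(dim=1)   # (B, T, F)

class SenoneLSTM(nn.Module):
    """Deep LSTM acoustic model over beamformed log-magnitude features."""
    def __init__(self, num_freqs: int, num_senones: int, hidden: int = 512):
        super().__init__()
        self.lstm = nn.LSTM(num_freqs, hidden, num_layers=3, batch_first=True)
        self.out = nn.Linear(hidden, num_senones)

    def forward(self, beamformed: torch.Tensor) -> torch.Tensor:
        feats = beamformed.abs().clamp_min(1e-8).log()     # (B, T, F)
        h, _ = self.lstm(feats)
        return self.out(h)                                 # senone logits

# Joint training: cross-entropy on senone labels backpropagates through
# both the acoustic model and the adaptive beamformer.
bf, am = LSTMAdaptiveBeamformer(6, 257), SenoneLSTM(257, 2048)
x = torch.randn(2, 6, 100, 257, dtype=torch.cfloat)       # toy 6-ch STFT
logits = am(bf(x))                                        # (2, 100, 2048)
```

Because the filter weights are re-estimated at every frame from the LSTM state, the beamformer can in principle track non-stationary noise and moving sources, which is the motivation the abstract gives for an adaptive rather than fixed design.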

Original language: English
Journal: Unknown Journal
Publication status: Published - 2017 Nov 21
Externally published: Yes

Keywords

  • Beamforming
  • LSTM
  • Multichannel
  • Speech recognition

ASJC Scopus subject areas

  • General
