A practical two-stage training strategy for multi-stream end-to-end speech recognition

Ruizhi Li, Gregory Sell, Xiaofei Wang, Shinji Watanabe, Hynek Hermansky

Research output: Contribution to journal › Article › peer-review

Abstract

The multi-stream paradigm of audio processing, in which several sources are considered simultaneously, has been an active research area for information fusion. Our previous study offered a promising direction within end-to-end automatic speech recognition, where parallel encoders aim to capture diverse information, followed by stream-level fusion based on attention mechanisms to combine the different views. However, as the number of streams grows, so does the number of encoders, and the previous approach could require substantial memory and massive amounts of parallel data for joint training. In this work, we propose a practical two-stage training scheme. Stage 1 trains a Universal Feature Extractor (UFE), a single-stream model trained on all data, whose encoder outputs serve as features. Stage 2 formulates a multi-stream scheme that trains only the attention fusion module, using the UFE features and pretrained components from Stage 1. Experiments were conducted on two datasets, DIRHA and AMI, in a multi-stream scenario. Compared with our previous method, this strategy achieves relative word error rate reductions of 8.2–32.4%, while consistently outperforming several conventional combination methods.
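To make the two-stage scheme concrete, below is a minimal PyTorch sketch of the Stage-2 idea: per-stream features come from a shared, frozen single-stream encoder (the UFE from Stage 1), and only a stream-level attention fusion module is trained. All names (StreamFusion, the GRU stand-in encoder, dimensions) are illustrative assumptions; the paper's actual fusion is a decoder-conditioned hierarchical attention, whereas this sketch uses a simpler content-based stream attention for brevity.

```python
# Minimal sketch of Stage-2 training, assuming a frozen Stage-1 encoder (UFE)
# and a trainable stream-level attention fusion module. Illustrative only.
import torch
import torch.nn as nn

class StreamFusion(nn.Module):
    """Stage-2 module: attention weights over per-stream UFE features."""
    def __init__(self, feat_dim: int, att_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(feat_dim, att_dim)
        self.score = nn.Linear(att_dim, 1)

    def forward(self, stream_feats: torch.Tensor) -> torch.Tensor:
        # stream_feats: (batch, n_streams, time, feat_dim)
        # Score each stream from its time-averaged representation.
        summary = stream_feats.mean(dim=2)                         # (B, S, D)
        weights = torch.softmax(
            self.score(torch.tanh(self.proj(summary))), dim=1
        )                                                          # (B, S, 1)
        # Weighted sum over streams -> one fused feature sequence.
        return (weights.unsqueeze(-1) * stream_feats).sum(dim=1)  # (B, T, D)

# Stage 1: a single-stream model is trained on all data; its encoder
# becomes the UFE. A GRU stands in for that encoder here.
ufe = nn.GRU(input_size=80, hidden_size=256, batch_first=True)
for p in ufe.parameters():
    p.requires_grad = False  # UFE stays frozen in Stage 2

# Stage 2: run every stream through the shared UFE, train only the fusion.
fusion = StreamFusion(feat_dim=256)
optimizer = torch.optim.Adam(fusion.parameters(), lr=1e-3)

batch, n_streams, time, n_mels = 4, 2, 100, 80
streams = torch.randn(batch, n_streams, time, n_mels)  # toy input
feats = torch.stack(
    [ufe(streams[:, s])[0] for s in range(n_streams)], dim=1
)  # (B, S, T, 256): per-stream UFE features
fused = fusion(feats)  # (B, T, 256), fed to the pretrained Stage-1 decoder
```

Because the UFE is shared across streams and frozen, Stage 2 adds only the small fusion module to the parameter count, which is what lets the scheme scale to more streams without the memory cost of jointly training one encoder per stream.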

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Oct 23
Externally published: Yes

Keywords

  • End-to-End Speech Recognition
  • Multi-Stream
  • Multiple Microphone Array
  • Two-Stage Training

ASJC Scopus subject areas

  • General

