Towards online end-to-end transformer automatic speech recognition

Emiru Tsunoo, Yosuke Kashiwagi, Toshiyuki Kumakura, Shinji Watanabe

Research output: Contribution to journal › Article › peer-review

Abstract

The Transformer self-attention network has recently shown promising performance as an alternative to recurrent neural networks in end-to-end (E2E) automatic speech recognition (ASR) systems. However, the Transformer has a drawback: the entire input sequence is required to compute self-attention. We have proposed a block processing method for the Transformer encoder that introduces a context-aware inheritance mechanism. An additional context embedding vector, handed over from the previously processed block, helps to encode not only local acoustic information but also global linguistic, channel, and speaker attributes. In this paper, we extend this method towards an entirely online E2E ASR system by introducing an online decoding process inspired by monotonic chunkwise attention (MoChA) into the Transformer decoder. Our novel MoChA training and inference algorithms exploit the unique properties of the Transformer, whose attention is not always monotonic or peaky and whose decoder layers have multiple attention heads and residual connections. Evaluations on the Wall Street Journal (WSJ) and AISHELL-1 corpora show that our proposed online Transformer decoder outperforms conventional chunkwise approaches.
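As a concrete illustration of the context-aware inheritance mechanism described in the abstract, here is a minimal sketch, assuming PyTorch. The class name `ContextBlockEncoder`, the hyperparameters, and the exact way the context embedding is concatenated to each block are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming PyTorch, of block processing with a
# context-aware inheritance mechanism. Names and hyperparameters are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class ContextBlockEncoder(nn.Module):
    """Encode a long utterance block by block, handing a context embedding
    from each block to the next so that global attributes (linguistic,
    channel, speaker) survive the chunking."""

    def __init__(self, d_model=256, nhead=4, num_layers=6, block_size=16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.block_size = block_size
        # Initial context embedding, learned jointly with the network.
        self.init_ctx = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, x):  # x: (batch, time, d_model) acoustic features
        ctx = self.init_ctx.expand(x.size(0), -1, -1)
        outputs = []
        for start in range(0, x.size(1), self.block_size):
            block = x[:, start:start + self.block_size]
            # Append the inherited context vector so self-attention within
            # this block can also read global information.
            h = self.encoder(torch.cat([block, ctx], dim=1))
            outputs.append(h[:, :block.size(1)])  # acoustic frame outputs
            ctx = h[:, block.size(1):]            # updated context, handed on
        return torch.cat(outputs, dim=1)
```

The decoder side can be sketched in the same hedged spirit: a MoChA-style inference step that scans encoder frames monotonically and soft-attends over a small chunk once a boundary is selected. The function name, thresholding rule, and chunk width below are assumptions; the paper's algorithm further accounts for multi-head attention and residual connections across decoder layers.

```python
# A rough sketch of MoChA-style hard monotonic selection at inference time,
# given scalar attention energies per encoder frame. Illustrative only.
import torch


def mocha_decode_step(energies, prev_t, threshold=0.5, chunk=4):
    """Scan encoder frames left to right from the previous attention
    position; stop at the first frame whose selection probability exceeds
    the threshold, then soft-attend over a fixed-size chunk ending there."""
    for t in range(prev_t, energies.size(0)):
        if torch.sigmoid(energies[t]) > threshold:
            lo = max(0, t - chunk + 1)
            weights = torch.softmax(energies[lo:t + 1], dim=0)
            return t, lo, weights
    return prev_t, None, None  # no boundary yet; wait for more frames
```

In an online decoder, `mocha_decode_step` would be called once per output token, with `prev_t` carried forward so attention can only move rightward, which is what makes streaming inference possible.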

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Oct 25
Externally published: Yes

Keywords

  • End-to-end
  • Monotonic Chunkwise Attention
  • Self-attention Network
  • Speech Recognition
  • Transformer

ASJC Scopus subject areas

  • General
