Non-Autoregressive Transformer for Speech Recognition

Nanxin Chen, Shinji Watanabe, Jesus Villalba, Piotr Zelasko, Najim Dehak

Research output: Article, peer-reviewed

5 citations (Scopus)

Abstract

Very deep transformers outperform conventional bi-directional long short-term memory networks for automatic speech recognition (ASR) by a significant margin. However, being autoregressive models, their computational complexity is still a prohibitive factor in their deployment into production systems. To address this problem, we study two different non-autoregressive transformer structures for ASR: Audio-Conditional Masked Language Model (A-CMLM) and Audio-Factorized Masked Language Model (A-FMLM). When training these frameworks, the decoder input tokens are randomly replaced by special mask tokens. Then, the network is optimized to predict the masked tokens by taking both the unmasked context tokens and the input speech into consideration. During inference, we start from all masked tokens and the network iteratively predicts missing tokens based on partial results. A new decoding strategy is proposed as an example, which starts from the most confident predictions and proceeds to the rest. Results on Mandarin (AISHELL), Japanese (CSJ), and English (LibriSpeech) benchmarks show that training such a non-autoregressive network for ASR is promising. On AISHELL in particular, the proposed method outperformed the Kaldi ASR system and matched the performance of the state-of-the-art autoregressive transformer with a 7× speedup.
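The iterative decoding strategy described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of confidence-based mask-predict decoding, not the authors' implementation: the `decoder` callable and its `(tokens, encoder_out)` signature, the `MASK_ID` constant, and the linear re-masking schedule are all assumptions made for illustration.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id; the real vocabulary is model-specific


def mask_predict_decode(decoder, encoder_out, length, num_iterations=3):
    """Sketch of iterative confidence-based decoding.

    Starts from an all-<mask> sequence and, at each iteration, keeps the
    most confident predictions while re-masking the rest. `decoder` is
    assumed to map (tokens, encoder_out) to per-position log-probabilities
    of shape (length, vocab_size).
    """
    tokens = torch.full((length,), MASK_ID, dtype=torch.long)
    scores = torch.full((length,), float("-inf"))

    for it in range(num_iterations):
        log_probs = decoder(tokens, encoder_out)          # (length, vocab_size)
        confidences, predictions = log_probs.max(dim=-1)

        # Positions that are still masked take the new predictions.
        masked = tokens.eq(MASK_ID)
        tokens = torch.where(masked, predictions, tokens)
        scores = torch.where(masked, confidences, scores)

        if it == num_iterations - 1:
            break

        # Linearly decay the number of tokens to re-mask, then re-mask
        # the least confident positions for the next iteration.
        num_to_mask = int(length * (1 - (it + 1) / num_iterations))
        if num_to_mask > 0:
            worst = scores.topk(num_to_mask, largest=False).indices
            tokens[worst] = MASK_ID
            scores[worst] = float("-inf")

    return tokens
```

Keeping the most confident predictions first mirrors the decoding strategy the abstract proposes; a real system would also have to predict the output length, which this sketch takes as given.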

Original language: English
Article number: 9292943
Pages (from-to): 121-125
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 28
DOI
Publication status: Published - 2021
Externally published: Yes

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
