Intermediate loss regularization for CTC-based speech recognition

Jaesong Lee, Shinji Watanabe

Research output: Conference article › peer-review

4 Citations (Scopus)

Abstract

We present a simple and efficient auxiliary loss function for automatic speech recognition (ASR) based on the connectionist temporal classification (CTC) objective. The proposed objective, an intermediate CTC loss, is attached to an intermediate layer of the CTC encoder network. This intermediate CTC loss regularizes CTC training well and improves performance, requiring only a small code modification, with small overhead during training and none during inference. In addition, we propose to combine this intermediate CTC loss with stochastic depth training, and apply this combination to a recently proposed Conformer network. We evaluate the proposed method on various corpora, reaching a word error rate (WER) of 9.9% on the WSJ corpus and a character error rate (CER) of 5.2% on the AISHELL-1 corpus, based on CTC greedy search without a language model. Notably, the AISHELL-1 result is comparable to other state-of-the-art ASR systems based on an autoregressive decoder with beam search.
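The combination described in the abstract, a weighted sum of the final-layer CTC loss and the CTC loss computed at an intermediate encoder layer, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name, the averaging over multiple intermediate layers, and the mixing weight `w` are assumptions for the sketch.

```python
def combined_ctc_loss(final_loss, intermediate_losses, w=0.3):
    """Weighted sum of the final CTC loss and intermediate CTC losses.

    Computes (1 - w) * L_final + w * mean(L_intermediate); the weight
    w and the use of a mean over intermediate layers are assumptions
    of this sketch, not details taken from the paper.
    """
    inter = sum(intermediate_losses) / len(intermediate_losses)
    return (1 - w) * final_loss + w * inter


# Example: one intermediate loss, equal weighting.
total = combined_ctc_loss(1.0, [2.0], w=0.5)  # 0.5 * 1.0 + 0.5 * 2.0 = 1.5
```

In practice each term would be a CTC loss (e.g. from a deep-learning framework), with the intermediate term computed by applying the same CTC output head to an intermediate encoder layer's hidden states.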

Original language: English
Pages (from-to): 6224-6228
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOI
Publication status: Published - 2021
Externally published: Yes
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 → 11 Jun 2021

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
