Multilingual sequence-to-sequence speech recognition: Architecture, transfer learning, and language modeling

Jaejin Cho, Murali Karthick Baskar, Ruizhi Li, Matthew Wiesner, Sri Harish Mallidi, Nelson Yalta, Martin Karafiát, Shinji Watanabe, Takaaki Hori

Research output: Contribution to journal › Article › peer-review


The sequence-to-sequence (seq2seq) approach to low-resource ASR is a relatively new direction in speech research. It benefits from training models without a lexicon or frame-level alignments. However, this poses a new problem: it requires more data than conventional DNN-HMM systems. In this work, we use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper further discusses the effect of integrating a recurrent neural network language model (RNNLM) with the seq2seq model during decoding. Experimental results show that transfer learning from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in %WER, achieving recognition performance comparable to models trained with twice as much training data.
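A common way to integrate an RNNLM with a seq2seq model during decoding, as the abstract describes, is shallow fusion: the decoder's token log-probability is interpolated with a weighted LM log-probability when scoring beam-search hypotheses. The sketch below illustrates only this scoring step with toy numbers; the function names, the weight value, and the stand-in score tables are assumptions for illustration, not the paper's implementation.

```python
import math

def fused_score(s2s_logprob, lm_logprob, lm_weight=0.3):
    """Shallow fusion: interpolate seq2seq and RNNLM scores in the log domain.

    lm_weight is a tunable hyperparameter (0.3 here is an arbitrary example).
    """
    return s2s_logprob + lm_weight * lm_logprob

def rescore_beam(hypotheses, lm_weight=0.3):
    """Re-rank (token, s2s_logprob, lm_logprob) hypotheses by fused score."""
    return sorted(hypotheses,
                  key=lambda h: fused_score(h[1], h[2], lm_weight),
                  reverse=True)

# Toy example: the LM disagrees with the acoustic model and flips the ranking.
hyps = [("a", math.log(0.5), math.log(0.1)),   # strong acoustically, weak LM
        ("b", math.log(0.4), math.log(0.6))]   # weaker acoustically, strong LM
best = rescore_beam(hyps)[0][0]
```

With the example weight, hypothesis "b" overtakes "a" once the LM score is folded in, which is exactly the behavior a language model is meant to contribute during decoding.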

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Oct 4


Keywords

  • Automatic speech recognition (ASR)
  • Language modeling
  • Multilingual setup
  • Sequence to sequence
  • Transfer learning

ASJC Scopus subject areas

  • General

