Analysis of multilingual sequence-to-sequence speech recognition systems

Martin Karafiát, Murali Karthick Baskar, Shinji Watanabe, Takaaki Hori, Matthew Wiesner, Jan Černocký

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates the application of various multilingual approaches developed for conventional hidden Markov model (HMM) systems to sequence-to-sequence (seq2seq) automatic speech recognition (ASR). On a set composed of Babel data, we first show the effectiveness of multilingual training with stacked bottle-neck (SBN) features. We then explore various architectures and training strategies for multilingual seq2seq models based on CTC-attention networks, including combinations of output-layer, CTC, and/or attention-component retraining. We also investigate the effectiveness of language-transfer learning in a very-low-resource scenario in which the target language is not included in the original multilingual training data. Interestingly, we found multilingual features superior to multilingual models; this finding suggests that the benefits of the HMM system can be efficiently combined with the seq2seq system through these multilingual feature techniques.
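The hybrid CTC-attention training mentioned in the abstract interpolates two objectives over a shared encoder: a CTC loss on frame-level encoder outputs and a cross-entropy loss on the attention decoder's token predictions. The following is a minimal sketch of that interpolation in PyTorch; the class name, tensor shapes, and the `ctc_weight` default are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class JointCTCAttentionLoss(nn.Module):
    """Hedged sketch of a hybrid CTC-attention objective:
    L = w * L_ctc + (1 - w) * L_att, with w = ctc_weight.
    Shapes and defaults here are illustrative assumptions."""

    def __init__(self, ctc_weight=0.3, blank=0, pad=-1):
        super().__init__()
        self.ctc_weight = ctc_weight
        self.ctc = nn.CTCLoss(blank=blank, zero_infinity=True)
        self.att = nn.CrossEntropyLoss(ignore_index=pad)

    def forward(self, ctc_log_probs, att_logits, targets,
                input_lengths, target_lengths):
        # ctc_log_probs: (T, B, V) log-softmax over encoder frames
        # att_logits:    (B, U, V) decoder logits per target token
        # targets:       (B, U) token ids, padded with the `pad` id
        # Padded positions beyond target_lengths are not read by CTC,
        # but indices must be valid, so clamp the pad id to 0.
        loss_ctc = self.ctc(ctc_log_probs, targets.clamp(min=0),
                            input_lengths, target_lengths)
        # CrossEntropyLoss expects (B, V, U) logits for (B, U) targets.
        loss_att = self.att(att_logits.transpose(1, 2), targets)
        return self.ctc_weight * loss_ctc + (1 - self.ctc_weight) * loss_att
```

Retraining only some of these components for a new language (e.g. reinitializing the output layer while keeping the multilingual encoder) corresponds to the transfer strategies the abstract compares.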

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Nov 7

Keywords

  • ASR
  • CTC
  • Language-transfer
  • Multilingual training
  • Sequence-to-sequence

ASJC Scopus subject areas

  • General

