End-to-end Speech Recognition with Word-Based RNN Language Models

Takaaki Hori, Jaejin Cho, Shinji Watanabe

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Abstract

This paper investigates the impact of word-based RNN language models (RNN-LMs) on the performance of end-to-end automatic speech recognition (ASR). In our prior work, we proposed a multi-level LM, in which character-based and word-based RNN-LMs are combined in hybrid CTC/attention-based ASR. Although this multi-level approach achieves significant error reduction on the Wall Street Journal (WSJ) task, two different LMs need to be trained and used for decoding, which increases the computational cost and memory usage. In this paper, we further propose a novel word-based RNN-LM that allows decoding with only the word-based LM: it provides look-ahead word probabilities to predict the next characters in place of the character-based LM, yielding accuracy competitive with the multi-level LM at lower computational cost. We demonstrate the efficacy of the word-based RNN-LMs on a larger corpus, LibriSpeech, in addition to the WSJ corpus used in our prior work. Furthermore, we show that the proposed model achieves 5.1% WER on the WSJ Eval'92 test set when the vocabulary size is increased, which is the best WER reported for end-to-end ASR systems on this benchmark.
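The core idea the abstract describes — using a word-level LM to supply "look-ahead" probabilities for character-level decoding — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny vocabulary, the fixed word probabilities, and the function name `lookahead_char_probs` are all invented for the example, and a real system would condition the word probabilities on history with an RNN and organize the vocabulary as a prefix tree for efficiency.

```python
# Minimal sketch: derive next-character probabilities from a word-level
# LM distribution by summing the mass of vocabulary words consistent
# with the current within-word character prefix (look-ahead).
from collections import defaultdict

def lookahead_char_probs(prefix, word_probs):
    """P(next char | prefix): proportional to the total word-LM
    probability mass of vocabulary words that extend `prefix`
    by that character."""
    mass = defaultdict(float)
    for word, p in word_probs.items():
        if word.startswith(prefix) and len(word) > len(prefix):
            mass[word[len(prefix)]] += p  # next character after the prefix
    total = sum(mass.values())
    return {c: m / total for c, m in mass.items()} if total else {}

# Illustrative word-LM distribution (invented values):
word_probs = {"speech": 0.4, "speed": 0.3, "spell": 0.2, "model": 0.1}
print(lookahead_char_probs("spe", word_probs))
# 'e' is favored: "speech" and "speed" together carry 0.7 of the mass,
# "spell" contributes 0.2 toward 'l'.
```

Because every character hypothesis is scored from the same word-level distribution, a single word-based LM can guide character-level beam search, which is what lets the proposed model drop the separate character-based LM of the multi-level approach.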

Original language: English
Title of host publication: 2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 389-396
Number of pages: 8
ISBN (Electronic): 9781538643341
DOI: 10.1109/SLT.2018.8639693
Publication status: Published - 2019 Feb 11
Event: 2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Athens, Greece
Duration: 2018 Dec 18 - 2018 Dec 21

Publication series

Name: 2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings

Conference

Conference: 2018 IEEE Spoken Language Technology Workshop, SLT 2018
Country: Greece
City: Athens
Period: 18/12/18 - 18/12/21


Keywords

  • attention decoder
  • connectionist temporal classification
  • decoding
  • End-to-end speech recognition
  • language modeling

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Linguistics and Language

Cite this

Hori, T., Cho, J., & Watanabe, S. (2019). End-to-end Speech Recognition with Word-Based RNN Language Models. In 2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings (pp. 389-396). [8639693] (2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SLT.2018.8639693

