Vectorization of hypotheses and speech for faster beam search in encoder decoder-based speech recognition

Hiroshi Seki, Takaaki Hori, Shinji Watanabe

Research output: Contribution to journal › Article › peer-review

Abstract

Attention-based encoder-decoder networks use a left-to-right beam search algorithm in the inference step. The current beam search expands hypotheses and traverses the expanded hypotheses at the next time step. This traversal is generally implemented with a for-loop program, which slows down the recognition process. In this paper, we propose a parallelization technique for beam search that accelerates the search process by vectorizing multiple hypotheses to eliminate the for-loop program. We also propose a technique to batch multiple speech utterances for offline recognition, which removes the for-loop over multiple utterances. Unlike in training, this extension is not trivial during beam search because of the pruning and thresholding techniques used for efficient decoding. In addition, our method can combine the scores of external modules, an RNNLM and CTC, in a batch as shallow fusion. We achieved a 3.7× speedup over the original beam search algorithm by vectorizing hypotheses, and a 10.5× speedup by further moving the processing to a GPU.
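To illustrate the core idea of vectorizing hypotheses, the following is a minimal NumPy sketch, not the authors' implementation: it replaces the per-hypothesis for-loop in one beam-expansion step with a single batched score computation and top-k selection. The function and variable names are illustrative assumptions.

```python
import numpy as np

def expand_beam_vectorized(hyp_scores, token_logprobs, beam_size):
    """One vectorized beam-expansion step (illustrative sketch).

    hyp_scores:     (B,)   cumulative log-probabilities of the current hypotheses
    token_logprobs: (B, V) next-token log-probabilities for each hypothesis
    Returns parent-hypothesis indices, chosen token ids, and the new cumulative
    scores of the top `beam_size` expanded hypotheses.
    """
    # Broadcasting adds each hypothesis score to all of its candidate tokens,
    # so the per-hypothesis for-loop becomes a single (B, V) array operation.
    cand_scores = hyp_scores[:, None] + token_logprobs          # (B, V)

    # Pick the best `beam_size` candidates over the flattened (B*V) scores.
    flat = cand_scores.ravel()
    top = np.argpartition(flat, -beam_size)[-beam_size:]
    top = top[np.argsort(flat[top])[::-1]]                      # sort descending

    vocab = token_logprobs.shape[1]
    parent_ids = top // vocab
    token_ids = top % vocab
    return parent_ids, token_ids, flat[top]

# Toy usage: 4 active hypotheses, a 10-token vocabulary, beam width 4.
rng = np.random.default_rng(0)
scores = rng.standard_normal(4)
logp = np.log(rng.dirichlet(np.ones(10), size=4))
print(expand_beam_vectorized(scores, logp, beam_size=4))
```

Scores from external modules such as an RNNLM or CTC could, in the same spirit, be computed for all hypotheses in one batch and added to `cand_scores` before the top-k selection (shallow fusion); the array operations map directly to a GPU when implemented with a framework such as PyTorch.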

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Nov 12
Externally published: Yes

Keywords

  • Beam search
  • Encoder decoder network
  • GPU
  • Parallel computing
  • Speech recognition

ASJC Scopus subject areas

  • General

