Multi-class composite N-gram language model

Hirofumi Yamamoto, Shuntaro Isogai, Yoshinori Sagisaka

Research output: Article

35 Citations (Scopus)

Abstract

A new language model is proposed to cope with the scarcity of training data. The proposed multi-class N-gram achieves accurate word prediction and high reliability with a small number of model parameters by clustering words multi-dimensionally into classes, with the left and right contexts treated independently. Each of the multiple classes is assigned through a grouping process based on a word's left and right neighboring characteristics. Furthermore, by introducing frequent word successions to partially capture higher-order statistics, multi-class N-grams are extended to more efficient multi-class composite N-grams. Compared with conventional word tri-grams, the multi-class composite N-grams achieved 9.5% lower perplexity and a 16% lower word error rate in a speech recognition experiment, with a 40% smaller parameter size.
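As a rough illustration of the multi-class idea summarized above, the sketch below shows a toy bigram computation in which each word carries two independent class labels: one used when the word is being predicted (clustered by its left-neighbor behavior) and one used when it serves as context (clustered by its right-neighbor behavior). The class names, probability values, and function names are hypothetical placeholders for illustration only, not estimates or notation from the paper.

# Minimal sketch of a multi-class bigram, assuming the factorization
#   P(w_i | w_{i-1}) ~= P(w_i | target_class(w_i))
#                       * P(target_class(w_i) | context_class(w_{i-1}))
# where target_class is derived from left-neighbor statistics and
# context_class from right-neighbor statistics. All tables are toy values.

# Class assigned to a word when it is the predicted word (left-context clustering).
target_class = {"the": "DET_T", "cat": "NOUN_T", "sat": "VERB_T"}

# Class assigned to a word when it serves as context (right-context clustering).
context_class = {"the": "DET_C", "cat": "NOUN_C", "sat": "VERB_C"}

# P(word | its target class): word emission within a class.
p_word_given_class = {("cat", "NOUN_T"): 0.2, ("sat", "VERB_T"): 0.3}

# P(target class of w_i | context class of w_{i-1}): class transition.
p_class_transition = {("NOUN_T", "DET_C"): 0.5, ("VERB_T", "NOUN_C"): 0.4}


def multiclass_bigram_prob(prev_word: str, word: str) -> float:
    """Approximate P(word | prev_word) using the two independent class labels."""
    t = target_class[word]
    c = context_class[prev_word]
    return p_word_given_class[(word, t)] * p_class_transition[(t, c)]


if __name__ == "__main__":
    # e.g. P("cat" | "the") ~= P("cat" | NOUN_T) * P(NOUN_T | DET_C) = 0.2 * 0.5
    print(multiclass_bigram_prob("the", "cat"))  # 0.1

Because the two class assignments are independent, the number of class-transition parameters can stay small even when the left-context and right-context clusterings differ, which is the source of the parameter savings the abstract reports.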

Original language: English
Pages (from-to): 369-379
Number of pages: 11
Journal: Speech Communication
Volume: 41
Issue number: 2-3
DOI
Publication status: Published - Oct 2003
Externally published: Yes


ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Experimental and Cognitive Psychology
  • Linguistics and Language
