Statistical language modeling with a class-based n-multigram model

Sabine Deligne, Yoshinori Sagisaka

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In this paper, we present a stochastic language-modeling tool that retrieves variable-length phrases (multigrams), assuming n-gram dependencies between them, hence the name of the model: n-multigram. The estimation of the probability distribution of the phrases is interleaved with a phrase-clustering procedure so that the likelihood of the data is jointly optimized. As a result, the language data are iteratively structured at both a paradigmatic and a syntagmatic level in a fully integrated way. We evaluate the 2-multigram model as a statistical language model on ATIS, a task-oriented database of air travel reservations. Experiments show that the 2-multigram model reduces the word error rate on ATIS by 10% with respect to the usual trigram model, while using 25% fewer parameters. In addition, we illustrate the ability of this model to merge semantically related phrases of different lengths into a common class.
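To make the model form concrete, here is a minimal Python sketch of how a class-based 2-multigram model can segment and score a word sequence: a phrase (multigram) is emitted with probability P(phrase | class), and bigram dependencies hold between the phrase classes. The phrase inventory, class assignments, and probabilities below are hypothetical illustrations, not values from the paper, and the sketch performs only Viterbi-style decoding with fixed parameters rather than the joint likelihood estimation and phrase clustering the paper describes.

```python
import math

# Hypothetical phrase inventory: phrase -> (class label, P(phrase | class)).
# None of these entries or numbers come from the paper; they only illustrate the model form.
LEXICON = {
    ("i", "would", "like"): ("REQUEST", 0.6),
    ("i", "want"): ("REQUEST", 0.4),
    ("a", "flight"): ("OBJECT", 0.7),
    ("a", "ticket"): ("OBJECT", 0.3),
    ("to", "boston"): ("DEST", 0.5),
    ("to", "denver"): ("DEST", 0.5),
}

# Hypothetical bigram probabilities between phrase classes, P(class | previous class);
# "<s>" marks the sentence start.
CLASS_BIGRAM = {
    ("<s>", "REQUEST"): 0.8,
    ("REQUEST", "OBJECT"): 0.9,
    ("OBJECT", "DEST"): 0.9,
}

MAX_PHRASE_LEN = 3  # longest multigram considered


def viterbi_segment(words):
    """Return the best segmentation into multigrams and its log-probability.

    Dynamic programming over (position, class of the previous phrase): every
    candidate phrase of length 1..MAX_PHRASE_LEN found in the lexicon is
    scored by P(class | previous class) * P(phrase | class).
    """
    n = len(words)
    # states[pos] maps the class of the phrase ending at pos to
    # (best log-probability of covering words[:pos], segmentation so far).
    states = [{} for _ in range(n + 1)]
    states[0]["<s>"] = (0.0, [])
    for pos in range(n):
        for prev_cls, (logp, seg) in states[pos].items():
            for length in range(1, MAX_PHRASE_LEN + 1):
                phrase = tuple(words[pos:pos + length])
                if phrase not in LEXICON:
                    continue
                cls, p_phrase = LEXICON[phrase]
                p_trans = CLASS_BIGRAM.get((prev_cls, cls), 1e-6)  # floor for unseen class pairs
                new_logp = logp + math.log(p_trans) + math.log(p_phrase)
                end = pos + length
                if cls not in states[end] or new_logp > states[end][cls][0]:
                    states[end][cls] = (new_logp, seg + [(phrase, cls)])
    if not states[n]:
        return None
    return max(states[n].values(), key=lambda s: s[0])


if __name__ == "__main__":
    result = viterbi_segment("i would like a flight to boston".split())
    if result is not None:
        logp, segmentation = result
        for phrase, cls in segmentation:
            print(f"{cls:8s} {' '.join(phrase)}")
        print("log-probability:", round(logp, 3))
```

On the example sentence the decoder groups "i would like", "a flight", and "to boston" into the REQUEST, OBJECT, and DEST classes, which mirrors how the paper's model lets semantically related phrases of different lengths share a common class.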

Original language: English
Pages (from-to): 261-279
Number of pages: 19
Journal: Computer Speech and Language
Volume: 14
Issue number: 3
DOIs
Publication status: Published - Jul 2000
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction
