Automatic allocation of training data for rapid prototyping of speech understanding based on multiple model combination

Kazunori Komatani, Masaki Katsumaru, Mikio Nakano, Kotaro Funakoshi, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Contribution to conference › Paper

2 Citations (Scopus)

Abstract

The optimal choice of speech understanding method depends on the amount of training data available during rapid prototyping. A statistical method is ultimately chosen as training data accumulate, but it is not clear at what amount of training data a statistical method becomes effective. Our framework combines multiple automatic speech recognition (ASR) and language understanding (LU) modules to produce a set of speech understanding results and selects the best result among them. The issue is how to allocate training data between the statistical modules and the selection module so as to avoid overfitting in training and obtain better performance. This paper presents an automatic training data allocation method based on the change in the coefficients of the logistic regression functions used in the selection module. Experimental evaluation showed that our allocation method outperformed baseline methods that use a single ASR module and a single LU module at every point as the training data increase.
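The selection-and-allocation idea in the abstract can be illustrated with a minimal sketch: a logistic regression scores each candidate speech understanding result, and the change in its coefficients between training rounds serves as a stabilization signal for data allocation. All function names, features, and the synthetic data below are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of logistic-regression-based result selection and
# coefficient-change monitoring, loosely following the abstract. The
# features, training procedure, and thresholding are illustrative only.
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=200):
    """Fit a logistic regression by batch gradient descent; return weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted correctness prob.
        w -= lr * (X.T @ (p - y)) / len(y)     # gradient of log loss
    return w

def select_best(candidate_features, w):
    """Return the index of the understanding result scored highest by the
    selection module (one feature vector per ASR+LU module pair)."""
    scores = [1.0 / (1.0 + np.exp(-np.dot(f, w))) for f in candidate_features]
    return int(np.argmax(scores))

def coefficient_change(w_prev, w_new):
    """L2 norm of the coefficient change between training rounds; a small
    value suggests the selection module has stabilized, so further training
    data could instead be allocated to the statistical LU modules."""
    return float(np.linalg.norm(w_new - w_prev))

# Toy data: feature vector = [ASR confidence, LU score, bias term],
# label = whether the candidate result was correct.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w_half = train_logreg(X[:100], y[:100])   # trained on half the data
w_full = train_logreg(X, y)               # trained on all the data
delta = coefficient_change(w_half, w_full)
```

Under this sketch, allocation would compare `delta` against a threshold each time new data arrive, which is only a stand-in for the paper's actual criterion.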

Original language: English
Pages: 579-587
Number of pages: 9
Publication status: Published - 2010 Dec 1
Externally published: Yes
Event: 23rd International Conference on Computational Linguistics, Coling 2010 - Beijing, China
Duration: 2010 Aug 23 - 2010 Aug 27

Conference

Conference: 23rd International Conference on Computational Linguistics, Coling 2010
Country: China
City: Beijing
Period: 10/8/23 - 10/8/27

ASJC Scopus subject areas

  • Language and Linguistics
  • Computational Theory and Mathematics
  • Linguistics and Language
