A discriminative model for continuous speech recognition based on weighted finite state transducers

Shinji Watanabe, Takaaki Hori, Erik McDermott, Atsushi Nakamura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

This paper proposes a discriminative model for speech recognition that directly optimizes the parameters of a speech model represented in the form of a decoding graph. In the process of recognition, a decoder, given an input speech signal, searches for an appropriate label sequence among possible combinations from separate knowledge sources of speech, e.g., acoustic, lexicon, and language models. It is more reasonable to use an integrated knowledge source, which is composed of these models and forms an overall space to be searched by a decoder, than to use separate ones. This paper aims to estimate a speech model composed in this way directly in the search network, unlike discriminative training approaches, which estimate parameters in the acoustic or language model layers. Our approach is formulated as the weight parameter optimization of log-linear distributions on the decoding arcs of a Weighted Finite State Transducer (WFST) to efficiently handle a large network statically. The weight parameters are estimated by an averaged perceptron algorithm. The experimental results show that, especially when the model size is small, the proposed approach provided better recognition performance than conventional maximum likelihood estimation, and performance comparable to or slightly better than that of discriminative training approaches.
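To make the training procedure described in the abstract concrete, the following is a minimal sketch, in Python, of an averaged-perceptron update over per-arc weight parameters of a decoding graph. It is an illustration under simplifying assumptions, not the authors' implementation: decode and reference_arcs are hypothetical callables standing in for a real WFST decoder and a forced-aligned reference path, and a simple +1/-1 count per traversed arc stands in for the paper's log-linear arc features.

    from collections import defaultdict

    def train_averaged_perceptron(utterances, decode, reference_arcs, epochs=5):
        """Averaged-perceptron training of per-arc weights on a decoding graph (illustrative sketch)."""
        weights = defaultdict(float)   # current weight for each arc (keyed by arc id)
        totals = defaultdict(float)    # running sum of weights, for averaging
        n_seen = 0
        for _ in range(epochs):
            for utt in utterances:
                n_seen += 1
                # Hypothetical decoder: best label sequence and the arcs it traversed
                hyp_labels, hyp_arcs = decode(utt, weights)
                # Hypothetical reference: correct label sequence and its arc path
                ref_labels, ref_arcs = reference_arcs(utt)
                if hyp_labels != ref_labels:
                    # Structured-perceptron update: reward arcs on the reference
                    # path, penalize arcs on the erroneous best path.
                    for arc in ref_arcs:
                        weights[arc] += 1.0
                    for arc in hyp_arcs:
                        weights[arc] -= 1.0
                # Accumulate after every utterance so the returned model is the
                # average of all weight vectors seen during training.
                for arc, w in weights.items():
                    totals[arc] += w
        return {arc: total / n_seen for arc, total in totals.items()}

Averaging the weight vectors over all training steps, rather than keeping only the final one, is the standard averaged-perceptron device for reducing sensitivity to the last few utterances processed.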

Original language: English
Title of host publication: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Proceedings
Pages: 4922-4925
Number of pages: 4
DOI: 10.1109/ICASSP.2010.5495096
ISBN (Print): 9781424442966
Publication status: Published - 2010
Externally published: Yes
Event: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Dallas, TX
Duration: 2010 Mar 14 - 2010 Mar 19


Keywords

  • Averaged perceptron
  • Discriminative model
  • Loglinear model
  • Parameter optimization in a decoding graph
  • Speech recognition
  • Weighted finite state transducer

ASJC Scopus subject areas

  • Signal Processing
  • Software
  • Electrical and Electronic Engineering

Cite this

Watanabe, S., Hori, T., McDermott, E., & Nakamura, A. (2010). A discriminative model for continuous speech recognition based on weighted finite state transducers. In 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010 - Proceedings (pp. 4922-4925). [5495096] https://doi.org/10.1109/ICASSP.2010.5495096
