Structural Bayesian linear regression for hidden Markov models

Shinji Watanabe, Atsushi Nakamura, Biing-Hwang Juang

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Linear regression for hidden Markov model (HMM) parameters is widely used in adaptive training for time-series pattern analysis, especially in speech processing. The regression parameters are usually shared among sets of Gaussians in the HMMs, with the Gaussian clusters represented by a tree. This paper realizes a fully Bayesian treatment of linear regression for HMMs that accounts for this regression-tree structure by using variational techniques, and analytically derives the variational lower bound of the marginalized log likelihood of the linear regression. By using the variational lower bound as an objective function, we can optimize the tree structure and hyperparameters of the linear regression algorithmically, rather than tweaking them heuristically as tuning parameters. Experiments on large-vocabulary continuous speech recognition confirm the generalizability of the proposed approach, especially when the amount of adaptation data is limited.
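The core idea of using a Bayesian evidence-style objective to choose among regression structures, rather than tuning them by hand, can be illustrated with a toy sketch. This is not the paper's algorithm: it compares the analytic log marginal likelihood of a simple conjugate Bayesian linear regression under two candidate structures (one pooled regression class versus two split classes), standing in for the variational lower bound that the paper derives for HMM regression trees. All names, priors, and the split itself are illustrative assumptions.

```python
# Toy illustration: structure selection by comparing a Bayesian
# "evidence" score, in the spirit of selecting a regression-tree
# structure with a variational lower bound (this is a sketch, not
# the paper's method; priors and data are assumed for illustration).
import numpy as np

def log_evidence(X, y, alpha=1.0, beta=10.0):
    """Log marginal likelihood of Bayesian linear regression with
    prior w ~ N(0, alpha^-1 I) and fixed noise precision beta."""
    n, d = X.shape
    A = alpha * np.eye(d) + beta * X.T @ X      # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)      # posterior mean
    resid = y - X @ m
    return (0.5 * d * np.log(alpha) + 0.5 * n * np.log(beta)
            - 0.5 * beta * resid @ resid - 0.5 * alpha * m @ m
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=200)]       # bias + one feature
y = 2.0 + 3.0 * X[:, 1] + 0.1 * rng.normal(size=200)

# Candidate "structures": share one regression across all data,
# or split the data into two classes with separate regressions.
pooled = log_evidence(X, y)
split = log_evidence(X[:100], y[:100]) + log_evidence(X[100:], y[100:])
best = "pooled" if pooled >= split else "split"
```

The evidence automatically trades data fit against the extra prior cost of additional regression classes, which is the same mechanism that lets a variational lower bound decide how deep to expand a regression tree when adaptation data is scarce.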

Original language: English
Pages (from-to): 341-358
Number of pages: 18
Journal: Journal of Signal Processing Systems
Volume: 74
Issue number: 3
DOI: 10.1007/s11265-013-0785-8
Publication status: Published - 2014
Externally published: Yes


Keywords

  • Hidden Markov model
  • Linear regression
  • Structural prior
  • Variational Bayes

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Signal Processing
  • Theoretical Computer Science
  • Control and Systems Engineering
  • Modelling and Simulation
