### Abstract

Fast estimation algorithms for hidden Markov models (HMMs) are presented. These algorithms start from the alpha-EM algorithm, which includes the traditional log-EM as a proper subset. Since existing HMMs are the outcome of the log-EM, the existence of corresponding alpha-HMMs had been expected. In this paper, this conjecture is shown to hold by using two methods: an iteration index shift and a likelihood ratio expansion. In each iteration, the new update equations utilize one-step-past terms that were computed and stored during the previous maximization step, so the speedup in iteration count translates directly into a speedup in CPU time. Since the new method is theoretically based on the alpha-EM, all of the alpha-EM's properties are inherited. Eight types of alpha-HMMs are derived: discrete, continuous, semi-continuous, and discrete-continuous alpha-HMMs, each for both single and multiple observation sequences. Using the properties of the alpha-EM algorithm, the speedup is analyzed theoretically. Experimental results, including real-world data, are given.
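As context for the abstract's claim that the log-EM is a proper subset of the alpha-EM family, the generalization rests on Matsuyama's alpha-logarithm. A minimal sketch is given below; the sign convention (alpha = -1 recovering the ordinary logarithm) follows the author's alpha-EM papers, and the function name is illustrative, not taken from any published code:

```python
import math

def alpha_log(r: float, alpha: float) -> float:
    """Alpha-logarithm: (2 / (1 + alpha)) * (r**((1 + alpha) / 2) - 1).

    In the limit alpha -> -1 this reduces to the ordinary logarithm,
    which is why the log-EM (and hence the traditional HMM estimation)
    appears as a special case of the alpha-EM family.
    """
    if alpha == -1.0:
        return math.log(r)  # limiting case: the classical logarithm
    return (2.0 / (1.0 + alpha)) * (r ** ((1.0 + alpha) / 2.0) - 1.0)

# The limit is continuous: values of alpha near -1 approximate log(r).
print(alpha_log(2.0, -1.0))         # exactly log(2)
print(alpha_log(2.0, -1.0 + 1e-8))  # very close to log(2)
```

For other values of alpha, the surrogate objective weights likelihood ratios differently from the log, which is what the paper exploits, via terms stored from the previous maximization step, to accelerate HMM estimation.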

| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
| Pages | 808-816 |
| Number of pages | 9 |
| DOIs | https://doi.org/10.1109/IJCNN.2011.6033304 |
| Publication status | Published - 2011 |
| Event | 2011 International Joint Conference on Neural Networks, IJCNN 2011 - San Jose, CA; Duration: 2011 Jul 31 → 2011 Aug 5 |

### Other

| Field | Value |
|---|---|
| Other | 2011 International Joint Conference on Neural Networks, IJCNN 2011 |
| City | San Jose, CA |
| Period | 2011 Jul 31 → 2011 Aug 5 |


### ASJC Scopus subject areas

- Software
- Artificial Intelligence

### Cite this

Matsuyama, Yasuo. **Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs.** In *Proceedings of the International Joint Conference on Neural Networks*, pp. 808-816, article 6033304. 2011 International Joint Conference on Neural Networks, IJCNN 2011, San Jose, CA, 2011 Jul 31 → 2011 Aug 5. https://doi.org/10.1109/IJCNN.2011.6033304

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Hidden Markov model estimation based on alpha-EM algorithm

T2 - Discrete and continuous alpha-HMMs

AU - Matsuyama, Yasuo

PY - 2011

Y1 - 2011


UR - http://www.scopus.com/inward/record.url?scp=80054730021&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=80054730021&partnerID=8YFLogxK

U2 - 10.1109/IJCNN.2011.6033304

DO - 10.1109/IJCNN.2011.6033304

M3 - Conference contribution

AN - SCOPUS:80054730021

SN - 9781457710865

SP - 808

EP - 816

BT - Proceedings of the International Joint Conference on Neural Networks

ER -