Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM

Yasuo Matsuyama, Ryunosuke Hayashi

    Research output: Conference contribution

    5 Citations (Scopus)

    Abstract

    A fast learning algorithm for hidden Markov models is derived starting from convex divergence optimization. The method uses the alpha-logarithm as a surrogate for the traditional logarithm when processing the likelihood ratio, which permits a stronger curvature than the logarithm provides. The presented method includes the ordinary Baum-Welch re-estimation algorithm as a proper subset. The algorithm achieves fast learning by exploiting time-shifted information as the iterations progress. Its computational complexity, which directly affects the CPU time, remains almost the same as that of the logarithmic version, since only stored results are reused for the speedup. A software implementation and its speed are examined on test data; the results show that the presented method is creditable.
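    For context, the alpha-logarithm named in the abstract is defined in the alpha-EM literature as L^(alpha)(r) = (2 / (1 + alpha)) * (r^((1 + alpha)/2) - 1), which tends to the natural logarithm log r as alpha -> -1; that limit is why the ordinary Baum-Welch (log-likelihood) re-estimation arises as the special case alpha = -1. The following is a minimal Python sketch of this surrogate, assuming the standard definition above (the function name alpha_log is ours, not from the paper):

        import math

        def alpha_log(r, alpha):
            # Alpha-logarithm: L^(alpha)(r) = (r**beta - 1) / beta with
            # beta = (1 + alpha) / 2. The limit beta -> 0 (alpha = -1)
            # is the natural logarithm, i.e. the ordinary log-likelihood
            # surrogate that yields the Baum-Welch re-estimation.
            beta = (1.0 + alpha) / 2.0
            if beta == 0.0:  # alpha == -1: limiting (logarithmic) case
                return math.log(r)
            return (r ** beta - 1.0) / beta

        print(alpha_log(2.0, -1.0))  # 0.6931... = log 2 (Baum-Welch case)
        print(alpha_log(2.0, -0.9))  # 0.7054... (alpha != -1 bends the surrogate)
        print(alpha_log(2.0, 1.0))   # 1.0 = r - 1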

    Original language: English
    Host publication title: Proceedings of the International Joint Conference on Neural Networks
    DOI: 10.1109/IJCNN.2010.5596959
    Publication status: Published - 2010
    Event: 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010 - Barcelona
    Duration: 18 Jul 2010 - 23 Jul 2010

    Fingerprint

    Hidden Markov models
    Convex optimization
    Set theory
    Learning algorithms
    Program processors
    Computational complexity

    ASJC Scopus subject areas

    • Software
    • Artificial Intelligence

    Cite this

    Matsuyama, Y., & Hayashi, R. (2010). Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM. In Proceedings of the International Joint Conference on Neural Networks [5596959]. https://doi.org/10.1109/IJCNN.2010.5596959

    Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM. / Matsuyama, Yasuo; Hayashi, Ryunosuke.

    Proceedings of the International Joint Conference on Neural Networks. 2010. 5596959.

    Research output: Conference contribution

    Matsuyama, Y & Hayashi, R 2010, Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM. in Proceedings of the International Joint Conference on Neural Networks., 5596959, 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, 18/7/10. https://doi.org/10.1109/IJCNN.2010.5596959
    Matsuyama Y, Hayashi R. Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM. In Proceedings of the International Joint Conference on Neural Networks. 2010. 5596959. https://doi.org/10.1109/IJCNN.2010.5596959
    Matsuyama, Yasuo; Hayashi, Ryunosuke. / Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM. Proceedings of the International Joint Conference on Neural Networks. 2010.
    @inproceedings{34a4c4272a914928a2891fcaffbda0ac,
    title = "Alpha-EM gives fast hidden Markov model estimation: Derivation and evaluation of alpha-HMM",
    abstract = "A fast learning algorithm for Hidden Markov Models is derived starting from convex divergence optimization. This method utilizes the alpha-logarithm as a surrogate function for the traditional logarithm to process the likelihood ratio. This enables the utilization of a stronger curvature than the logarithm. This paper's method includes the ordinary Baum-Welch re-estimation algorithm as a proper subset. The presented algorithm shows fast learning by utilizing time-shifted information during the progress of iterations. The computational complexity of this algorithm, which directly affects the CPU time, remains almost the same as the logarithmic one since only stored results are utilized for the speedup. Software implementation and speed are examined in the test data. The results showed that the presented method is creditable.",
    author = "Yasuo Matsuyama and Ryunosuke Hayashi",
    year = "2010",
    doi = "10.1109/IJCNN.2010.5596959",
    language = "English",
    isbn = "9781424469178",
    booktitle = "Proceedings of the International Joint Conference on Neural Networks",

    }

    TY - GEN

    T1 - Alpha-EM gives fast hidden Markov model estimation

    T2 - Derivation and evaluation of alpha-HMM

    AU - Matsuyama, Yasuo

    AU - Hayashi, Ryunosuke

    PY - 2010

    Y1 - 2010

    AB - A fast learning algorithm for Hidden Markov Models is derived starting from convex divergence optimization. This method utilizes the alpha-logarithm as a surrogate function for the traditional logarithm to process the likelihood ratio. This enables the utilization of a stronger curvature than the logarithm. This paper's method includes the ordinary Baum-Welch re-estimation algorithm as a proper subset. The presented algorithm shows fast learning by utilizing time-shifted information during the progress of iterations. The computational complexity of this algorithm, which directly affects the CPU time, remains almost the same as the logarithmic one since only stored results are utilized for the speedup. Software implementation and speed are examined in the test data. The results showed that the presented method is creditable.

    UR - http://www.scopus.com/inward/record.url?scp=79959437221&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=79959437221&partnerID=8YFLogxK

    U2 - 10.1109/IJCNN.2010.5596959

    DO - 10.1109/IJCNN.2010.5596959

    M3 - Conference contribution

    AN - SCOPUS:79959437221

    SN - 9781424469178

    BT - Proceedings of the International Joint Conference on Neural Networks

    ER -