α-EM learning and its cookbook

From mixture-of-expert neural networks to movie random field

Yasuo Matsuyama, Takayuki Ikeda, Tomoaki Tanaka, Satoshi Furukawa, Naoki Takeda, Takeshi Niimoto

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    2 Citations (Scopus)

    Abstract

    The α-EM algorithm is a proper extension of the traditional log-EM algorithm: it is based on the α-logarithm rather than the ordinary logarithm, and the case α = -1 recovers the log-EM algorithm. Building on earlier reports of the α-EM algorithm's speed on learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems through a set of common techniques; that is, a cookbook for the α-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks with various gating, hidden Markov models, and Markov random fields for moving-object segmentation. Reasoning for the speedup is also given.
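    As a quick illustration of the α-logarithm the abstract refers to, the sketch below uses the common definition L^(α)(x) = (2/(1+α))·(x^((1+α)/2) − 1), which tends to ln x as α → -1. The function name and code are illustrative assumptions, not taken from the paper:

    ```python
    import math

    def alpha_log(x, alpha):
        """α-logarithm: (2/(1+α)) * (x**((1+α)/2) - 1).

        For α = -1 the exponent and prefactor degenerate, and the
        limit is the natural logarithm ln(x), which is why α = -1
        recovers the ordinary log-EM algorithm.
        """
        if abs(alpha + 1.0) < 1e-12:
            return math.log(x)  # limiting case α → -1
        return (2.0 / (1.0 + alpha)) * (x ** ((1.0 + alpha) / 2.0) - 1.0)

    # α = 1 gives the linear surrogate x - 1; α near -1 approaches ln x.
    print(alpha_log(2.0, 1.0))        # 1.0
    print(alpha_log(2.0, -0.999999))  # ≈ ln 2 ≈ 0.6931
    ```

    Varying α away from -1 changes the curvature of the surrogate objective, which is the lever behind the speedup the abstract mentions.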

    Original language: English
    Title of host publication: Proceedings of the International Joint Conference on Neural Networks
    Place of publication: United States
    Publisher: IEEE
    Pages: 1368-1373
    Number of pages: 6
    Volume: 2
    Publication status: Published - 1999
    Event: International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA
    Duration: 1999 Jul 10 - 1999 Jul 16


    Fingerprint

    Neural networks
    Hidden Markov models

    ASJC Scopus subject areas

    • Software

    Cite this

    Matsuyama, Y., Ikeda, T., Tanaka, T., Furukawa, S., Takeda, N., & Niimoto, T. (1999). α-EM learning and its cookbook: From mixture-of-expert neural networks to movie random field. In Proceedings of the International Joint Conference on Neural Networks (Vol. 2, pp. 1368-1373). United States: IEEE.


    Scopus record: http://www.scopus.com/inward/record.url?scp=0033313112&partnerID=8YFLogxK