Fast α-weighted EM learning for neural networks of module mixtures

Yasuo Matsuyama, Satoshi Furukawa, Naoki Takeda, Takayuki Ikeda

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    6 Citations (Scopus)

    Abstract

    A class of extended logarithms is used to derive α-weighted EM (α-weighted Expectation-Maximization) algorithms. These extended EM algorithms (WEMs, or α-EMs) are expected to outperform the traditional (logarithmic) EM algorithm in convergence speed; the traditional approach is a special case of the new WEM. In this paper, general theoretical discussions are given first. Then, clear-cut evidence of faster convergence than the ordinary EM approach is given for the case of mixture-of-experts neural networks. This proceeds in three steps: first, concrete algorithms are presented; second, their convergence is verified theoretically; third, experiments on mixture-of-experts learning demonstrate the superiority of the WEM. Besides supervised learning, the unsupervised case of a Gaussian mixture is also examined, and faster convergence of the WEM is observed again.
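    The extended-logarithm idea the abstract describes can be illustrated with the α-logarithm L_α(x) = (2/(1+α))(x^((1+α)/2) − 1), which tends to ln(x) as α → −1; this limit is how the traditional (logarithmic) EM algorithm falls out as a special case. The sketch below assumes this particular parameterization, which may differ from the exact form used in the paper, so treat it as an illustration rather than the authors' definition.

    ```python
    import math

    def alpha_log(x: float, alpha: float) -> float:
        """Extended (alpha-) logarithm: (2 / (1 + alpha)) * (x**((1 + alpha) / 2) - 1).

        As alpha -> -1, the exponent (1 + alpha) / 2 tends to 0 and the
        expression tends to ln(x), recovering the ordinary log-likelihood
        used by the traditional EM algorithm.
        """
        if alpha == -1.0:
            return math.log(x)  # limiting (traditional EM) case
        return (2.0 / (1.0 + alpha)) * (x ** ((1.0 + alpha) / 2.0) - 1.0)

    # Numerical check that alpha near -1 recovers the natural logarithm:
    for x in (0.5, 2.0, 10.0):
        assert abs(alpha_log(x, alpha=-1.0 + 1e-9) - math.log(x)) < 1e-6
    ```

    Choosing α away from −1 reweights likelihood ratios in the E-step, which is the mechanism the paper exploits to speed up convergence.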

    Original language: English
    Title of host publication: IEEE International Conference on Neural Networks - Conference Proceedings
    Editors: Anon
    Place of publication: Piscataway, NJ, United States
    Publisher: IEEE
    Pages: 2306-2311
    Number of pages: 6
    Volume: 3
    Publication status: Published - 1998
    Event: Proceedings of the 1998 IEEE International Joint Conference on Neural Networks. Part 1 (of 3) - Anchorage, AK, USA
    Duration: 1998 May 4 - 1998 May 9



    ASJC Scopus subject areas

    • Software

    Cite this

    Matsuyama, Y., Furukawa, S., Takeda, N., & Ikeda, T. (1998). Fast α-weighted EM learning for neural networks of module mixtures. In Anon (Ed.), IEEE International Conference on Neural Networks - Conference Proceedings (Vol. 3, pp. 2306-2311). Piscataway, NJ, United States: IEEE.
