Weighted EM algorithm and block monitoring

Yasuo Matsuyama

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    2 Citations (Scopus)

    Abstract

    The expectation-maximization (EM) algorithm is generalized so that learning proceeds according to adjustable weights expressed in terms of probability measures. The presented method, the weighted EM algorithm or α-EM algorithm, includes the existing EM algorithm as a special case. It is further found that this learning structure can operate systolically, and that monitors can be added to interact with lower systolic subsystems; this is made possible by attaching building blocks of weighted (or plain) EM learning. The derivation of the whole algorithm is based on generalized divergences. In addition to the discussion of learning, extensions of basic statistical notions such as Fisher's efficient score, Fisher's measure of information, and the Cramér-Rao inequality are given; these appear in the update equations of the generalized expectation learning. Experiments show that the presented generalized version contains cases that outperform traditional learning methods.
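    The abstract only summarizes the construction; the full α-EM update equations appear in the paper itself (later journal versions of the α-EM algorithm replace the ordinary logarithm of the likelihood ratio with an α-logarithm, recovering plain EM at a particular α). As a rough illustration of the "adjustable weights" idea, the following minimal sketch, not the paper's exact method, fits a two-component Gaussian mixture with an EM loop whose E-step posteriors are tempered by a weight exponent w; w = 1.0 recovers the plain EM algorithm. The names weighted_em and gaussian_pdf and the exponent w are illustrative assumptions, not quantities from the paper.

        # Minimal weighted-EM sketch for a two-component 1-D Gaussian mixture.
        # NOTE: an illustrative reconstruction, NOT the paper's alpha-EM
        # update equations. The exponent `w` is a hypothetical knob that
        # tempers the E-step posteriors; w = 1.0 gives the plain EM algorithm.
        import numpy as np

        def gaussian_pdf(x, mu, var):
            """Density of N(mu, var) evaluated at each point of x."""
            return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

        def weighted_em(x, n_iter=100, w=1.0, seed=0):
            """Fit a 2-component Gaussian mixture with weight-tempered E-steps."""
            rng = np.random.default_rng(seed)
            mu = rng.choice(x, size=2, replace=False)   # initial means
            var = np.full(2, x.var())                   # initial variances
            pi = np.full(2, 0.5)                        # mixing proportions
            for _ in range(n_iter):
                # E-step: component-wise joint densities, tempered by exponent w.
                joint = pi[None, :] * np.stack(
                    [gaussian_pdf(x, mu[k], var[k]) for k in range(2)], axis=1)
                tempered = joint ** w
                resp = tempered / tempered.sum(axis=1, keepdims=True)
                # M-step: usual closed-form updates under the tempered posteriors.
                nk = resp.sum(axis=0)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                var = (resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
                pi = nk / nk.sum()
            return pi, mu, var

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            data = np.concatenate([rng.normal(-2.0, 1.0, 300),
                                   rng.normal(3.0, 0.5, 200)])
            print(weighted_em(data, w=1.0))   # plain EM (the special case)
            print(weighted_em(data, w=0.8))   # a tempered, "weighted" variant

    Running the script with w = 1.0 and with w = 0.8 on the same synthetic data shows how the weight changes the fitted parameters while leaving the overall EM iteration structure intact, which is the sense in which the weighted family contains the conventional algorithm as a special case.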

    Original language: English
    Title of host publication: IEEE International Conference on Neural Networks - Conference Proceedings
    Place of publication: Piscataway, NJ, United States
    Publisher: IEEE
    Pages: 1936-1941
    Number of pages: 6
    Volume: 3
    Publication status: Published - 1997
    Event: Proceedings of the 1997 IEEE International Conference on Neural Networks. Part 4 (of 4), Houston, TX, USA
    Duration: 1997 Jun 9 – 1997 Jun 12
    Scopus record: http://www.scopus.com/inward/record.url?scp=0030677958&partnerID=8YFLogxK

    ASJC Scopus subject areas

    • Software
    • Control and Systems Engineering
    • Artificial Intelligence

    Cite this

    Matsuyama, Y. (1997). Weighted EM algorithm and block monitoring. In IEEE International Conference on Neural Networks - Conference Proceedings (Vol. 3, pp. 1936-1941). Piscataway, NJ, United States: IEEE.
