Optimization transfer for computational learning

A hierarchy from f-ICA and alpha-EM to their offsprings

Yasuo Matsuyama, Shuichiro Imahara, Naoto Katsumata

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    3 Citations (Scopus)

    Abstract

    Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part contains the derivation of the generalized ICA (Independent Component Analysis). Experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM algorithm (Expectation-Maximization). Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.
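    As a pointer for readers of the abstract, the following is a minimal sketch, in generic notation, of how a convex-divergence family contains the log-likelihood as a limiting case. It assumes the standard f-divergence definition and one common alpha-parameterization; the paper's own definitions and notation may differ.

    % Minimal sketch (generic notation, not necessarily the paper's):
    % the convex-divergence family behind f-ICA, and the alpha-logarithm
    % that relates alpha-EM to ordinary log-likelihood EM.
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    For densities $p$ and $q$ and a convex $f$ with $f(1)=0$, the
    $f$-divergence is nonnegative by Jensen's inequality:
    \[
      D_f(p \,\|\, q) \;=\; \int q(x)\, f\!\left(\tfrac{p(x)}{q(x)}\right) dx \;\ge\; f(1) \;=\; 0 .
    \]
    The $\alpha$-family is one particular convex choice,
    \[
      f_\alpha(r) \;=\; \frac{4}{1-\alpha^2}\Bigl(1 - r^{(1+\alpha)/2}\Bigr),
      \qquad |\alpha| < 1,
    \]
    whose limit as $\alpha \to -1$ is $f(r) = -\log r$, giving the
    Kullback-Leibler divergence $\mathrm{KL}(q \,\|\, p)$. Correspondingly,
    the $\alpha$-logarithm
    \[
      L^{(\alpha)}(r) \;=\; \frac{2}{1+\alpha}\Bigl(r^{(1+\alpha)/2} - 1\Bigr)
      \;\xrightarrow[\alpha \to -1]{}\; \log r
    \]
    generalizes the logarithm applied to likelihood ratios.
    \end{document}

    In this reading, setting alpha to its limiting value recovers the ordinary log-likelihood objective, which is the sense in which standard EM and standard ICA appear as special cases at the bottom of the hierarchy the title describes.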

    Original language: English
    Title of host publication: Proceedings of the International Joint Conference on Neural Networks
    Pages: 1883-1888
    Number of pages: 6
    Volume: 2
    Publication status: Published - 2002
    Event: 2002 International Joint Conference on Neural Networks (IJCNN '02) - Honolulu, HI
    Duration: 2002 May 12 - 2002 May 17


    ASJC Scopus subject areas

    • Software

    Cite this

    Matsuyama, Y., Imahara, S., & Katsumata, N. (2002). Optimization transfer for computational learning: A hierarchy from f-ICA and alpha-EM to their offsprings. In Proceedings of the International Joint Conference on Neural Networks (Vol. 2, pp. 1883-1888).

