Optimization transfer for computational learning: A hierarchy from f-ICA and alpha-EM to their offsprings

Yasuo Matsuyama*, Shuichiro Imahara, Naoto Katsumata

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    3 Citations (Scopus)

    Abstract

    Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part derives the generalized ICA (Independent Component Analysis); experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM algorithm (Expectation-Maximization). Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.
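    The "general class of convex divergences" referenced in the abstract is commonly formalized as the Csiszár f-divergence, with the alpha-family as the special case underlying alpha-EM. The following is a minimal sketch using the standard textbook definitions (Amari's convention for the alpha-family and the alpha-logarithm); the exact notation of the paper itself may differ.

    \[
    D_f(p \,\|\, q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx,
    \qquad f \ \text{convex},\ \ f(1)=0 .
    \]

    \[
    f_\alpha(r) \;=\; \frac{4}{1-\alpha^2}\Bigl(1 - r^{(1+\alpha)/2}\Bigr),
    \qquad
    L^{(\alpha)}(r) \;=\; \frac{2}{1+\alpha}\Bigl(r^{(1+\alpha)/2} - 1\Bigr)
    \;\xrightarrow[\alpha \to -1]{}\; \log r .
    \]

    Here \(D_{f_\alpha}\) recovers the Kullback-Leibler divergences in the limits \(\alpha \to \pm 1\), and \(L^{(\alpha)}\) is the alpha-logarithm that replaces the ordinary log-likelihood ratio, which is one way the optimization is "transferred" away from plain likelihood.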

    Original language: English
    Title of host publication: Proceedings of the International Joint Conference on Neural Networks
    Pages: 1883-1888
    Number of pages: 6
    Volume: 2
    Publication status: Published - 2002
    Event: 2002 International Joint Conference on Neural Networks (IJCNN '02) - Honolulu, HI
    Duration: 2002 May 12 – 2002 May 17

    ASJC Scopus subject areas

    • Software
