Abstract
Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part contains the derivation of the generalized ICA (Independent Component Analysis); experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM (Expectation-Maximization) algorithm. Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.
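As an illustration of what "a general class of convex divergences" can mean here (a minimal sketch, not the paper's own definition), the standard f-divergence between densities p and q is one such class; choosing f(t) = t log t gives the Kullback-Leibler divergence, whose minimization with respect to model parameters is equivalent to maximum likelihood, which is how likelihood optimization can be transferred to divergence minimization.

```latex
% Illustrative only: the standard f-divergence, with f convex and f(1) = 0,
% assumed here as an example of a convex divergence between densities p and q.
\[
  D_f(p \,\|\, q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx,
  \qquad f \ \text{convex},\ f(1) = 0 .
\]
% With f(t) = t \log t this reduces to the Kullback--Leibler divergence; minimizing
% it over a parametric model q_\theta recovers maximum likelihood, since the
% entropy term of p does not depend on \theta:
\[
  D_{\mathrm{KL}}(p \,\|\, q_\theta)
  \;=\; \int p(x) \log \frac{p(x)}{q_\theta(x)}\, dx
  \;=\; \mathrm{const} \;-\; \mathbb{E}_{p}\!\big[\log q_\theta(x)\big].
\]
```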
Original language | English
---|---
Title of host publication | Proceedings of the International Joint Conference on Neural Networks
Pages | 1883-1888
Number of pages | 6
Volume | 2
Publication status | Published - 2002
Event | 2002 International Joint Conference on Neural Networks (IJCNN '02) - Honolulu, HI; Duration: 2002 May 12 → 2002 May 17
Other
Other | 2002 International Joint Conference on Neural Networks (IJCNN '02)
---|---
City | Honolulu, HI
Period | 02/5/12 → 02/5/17
ASJC Scopus subject areas
- Software