### Abstract

Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part contains the derivation of the generalized ICA (Independent Component Analysis). Experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM algorithm (Expectation-Maximization). Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.
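To make the "convex divergence" idea concrete, the sketch below computes one common member of that family, Amari's alpha-divergence, for discrete distributions, and checks that it approaches the Kullback-Leibler divergence (the ordinary log-likelihood case) as alpha tends to 1. This is a minimal illustration of the divergence family being generalized over, not the paper's own algorithm; the parameterization conventions for alpha vary across the literature.

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between discrete distributions p and q.

    D_alpha(p || q) = (1 - sum_i p_i^alpha * q_i^(1 - alpha)) / (alpha * (1 - alpha))

    As alpha -> 1 this tends to KL(p || q); as alpha -> 0, to KL(q || p).
    NOTE: this is one common convention for the alpha family, chosen for
    illustration; it is not necessarily the exact form used in the paper.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return (1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha))

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# Near alpha = 1 the alpha-divergence is close to KL(p || q),
# so the classical likelihood criterion is recovered as a special case.
print(alpha_divergence(p, q, 0.999), kl(p, q))
```

Optimizing over such a family rather than the single KL case is what allows the convergence speed to be tuned, which is the motivation the abstract gives for transferring the optimization.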

Original language | English |
---|---|

Title of host publication | Proceedings of the International Joint Conference on Neural Networks |

Pages | 1883-1888 |

Number of pages | 6 |

Volume | 2 |

Publication status | Published - 2002 |

Event | 2002 International Joint Conference on Neural Networks (IJCNN '02) - Honolulu, HI |

Duration | 2002 May 12 → 2002 May 17 |

### Other

Other | 2002 International Joint Conference on Neural Networks (IJCNN '02) |
---|---|

City | Honolulu, HI |

Period | 02/5/12 → 02/5/17 |

### ASJC Scopus subject areas

- Software

### Cite this

Matsuyama, Yasuo; Imahara, Shuichiro; Katsumata, Naoto. **Optimization transfer for computational learning: A hierarchy from f-ICA and alpha-EM to their offsprings.** In *Proceedings of the International Joint Conference on Neural Networks* (Vol. 2, pp. 1883-1888).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

*Proceedings of the International Joint Conference on Neural Networks.* Vol. 2, pp. 1883-1888, 2002 International Joint Conference on Neural Networks (IJCNN '02), Honolulu, HI, 02/5/12.

TY - GEN

T1 - Optimization transfer for computational learning

T2 - A hierarchy from f-ICA and alpha-EM to their offsprings

AU - Matsuyama, Yasuo

AU - Imahara, Shuichiro

AU - Katsumata, Naoto

PY - 2002

Y1 - 2002

N2 - Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part contains the derivation of the generalized ICA (Independent Component Analysis). Experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM algorithm (Expectation-Maximization). Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.

AB - Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part contains the derivation of the generalized ICA (Independent Component Analysis). Experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM algorithm (Expectation-Maximization). Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.

UR - http://www.scopus.com/inward/record.url?scp=0036088054&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0036088054&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0036088054

VL - 2

SP - 1883

EP - 1888

BT - Proceedings of the International Joint Conference on Neural Networks

ER -