### Abstract

The α-EM algorithm is a proper extension of the traditional log-EM algorithm: the new algorithm is based on the α-logarithm, whereas the traditional one uses the ordinary logarithm, and the case α = -1 recovers the log-EM algorithm. Since the speedup of the α-EM algorithm has already been reported for learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems through a set of common techniques; that is, a cookbook for the α-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks with various gating, hidden Markov models, and Markov random fields for moving object segmentation. Reasoning for the speedup is also given.
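The abstract's central construction, the α-logarithm, can be illustrated numerically. A minimal sketch, assuming the form L_α(r) = (r^((1+α)/2) − 1) / ((1+α)/2) commonly used in the α-EM literature; as α → −1 the exponent (1+α)/2 → 0 and the expression converges to the ordinary logarithm, matching the abstract's statement that α = −1 corresponds to the log-EM algorithm:

```python
import math

def alpha_log(r, alpha):
    """α-logarithm L_α(r); reduces to the natural log as α → -1."""
    if alpha == -1:
        return math.log(r)  # limiting case used by the classical log-EM
    e = (1 + alpha) / 2
    return (r**e - 1) / e

# α = -1 gives the ordinary logarithm exactly
print(alpha_log(2.0, -1))
# α close to -1 approximates it, illustrating the continuous extension
print(alpha_log(2.0, -0.999))
```

Other α values deform the logarithm while keeping L_α(1) = 0, which is what lets the α-EM iteration generalize the usual E-step surrogate.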

Original language | English
---|---
Title of host publication | Proceedings of the International Joint Conference on Neural Networks
Place of Publication | United States
Publisher | IEEE
Pages | 1368-1373
Number of pages | 6
Volume | 2
Publication status | Published - 1999
Event | International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA; Duration: 10 Jul 1999 → 16 Jul 1999

### Other

Other | International Joint Conference on Neural Networks (IJCNN'99)
---|---
City | Washington, DC, USA
Period | 99/7/10 → 99/7/16

### ASJC Scopus subject areas

- Software

### Cite this

Matsuyama, Yasuo; Ikeda, Takayuki; Tanaka, Tomoaki; Furukawa, Satoshi; Takeda, Naoki; Niimoto, Takeshi. **α-EM learning and its cookbook: From mixture-of-expert neural networks to movie random field.** In *Proceedings of the International Joint Conference on Neural Networks* (Vol. 2, pp. 1368-1373). United States: IEEE, 1999. Presented at the International Joint Conference on Neural Networks (IJCNN'99), Washington, DC, USA.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - α-EM learning and its cookbook

T2 - From mixture-of-expert neural networks to movie random field

AU - Matsuyama, Yasuo

AU - Ikeda, Takayuki

AU - Tanaka, Tomoaki

AU - Furukawa, Satoshi

AU - Takeda, Naoki

AU - Niimoto, Takeshi

PY - 1999

Y1 - 1999

N2 - The α-EM algorithm is a proper extension of the traditional log-EM algorithm. This new algorithm is based on the α-logarithm, while the traditional one uses the logarithm. The case of α = -1 corresponds to the log-EM algorithm. Since the speed of the α-EM algorithm was reported for learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems. There is a set of common techniques. That is, a cookbook for the α-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks for various gating, hidden Markov models and Markov random fields for moving object segmentation. Reasoning for the speedup is also given.

UR - http://www.scopus.com/inward/record.url?scp=0033313112&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0033313112&partnerID=8YFLogxK

M3 - Conference contribution

VL - 2

SP - 1368

EP - 1373

BT - Proceedings of the International Joint Conference on Neural Networks

PB - IEEE

CY - United States

ER -