Abstract
The α-EM algorithm is a proper extension of the traditional log-EM algorithm. The new algorithm is based on the α-logarithm, whereas the traditional one uses the ordinary logarithm; the case α = -1 corresponds to the log-EM algorithm. Since the speedup of the α-EM algorithm has been reported for learning problems, this paper shows that closed-form E-steps can be obtained for a wide class of problems through a set of common techniques; that is, a cookbook for the α-EM algorithm is presented. The recipes include unsupervised neural networks, supervised neural networks with various gating, hidden Markov models, and Markov random fields for moving-object segmentation. Reasoning for the speedup is also given.
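For reference, the α-logarithm is commonly defined as L_α(x) = (2/(1+α))(x^((1+α)/2) − 1) for α ≠ −1, which recovers the natural logarithm in the limit α → −1. A minimal Python sketch (the function name `alpha_log` is illustrative, not from the paper) checks that limiting behavior:

```python
import math

def alpha_log(x, alpha):
    """alpha-logarithm (illustrative definition, assumed here):
    for alpha != -1, L_alpha(x) = (x**((1+alpha)/2) - 1) * 2/(1+alpha);
    the limit alpha -> -1 recovers the natural logarithm."""
    if alpha == -1:
        return math.log(x)
    p = (1 + alpha) / 2
    return (x**p - 1) / p

# As alpha approaches -1, alpha_log approaches log x:
print(abs(alpha_log(2.0, -1 + 1e-8) - math.log(2.0)) < 1e-6)  # True
```

Choosing α away from −1 changes the shape of the surrogate objective, which is the lever the paper exploits for speedup while keeping α = −1 as the classical log-EM special case.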
Original language | English |
---|---|
Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
Place of Publication | United States |
Publisher | IEEE |
Pages | 1368-1373 |
Number of pages | 6 |
Volume | 2 |
Publication status | Published - 1999 |
Event | International Joint Conference on Neural Networks (IJCNN'99), Washington, DC, USA, 1999 Jul 10 → 1999 Jul 16 |
Other | International Joint Conference on Neural Networks (IJCNN'99) |
---|---|
City | Washington, DC, USA |
Period | 99/7/10 → 99/7/16 |
ASJC Scopus subject areas
- Software