Abstract
A class of extended logarithms is used to derive α-weighted EM (α-weighted Expectation-Maximization) algorithms. These extended EM algorithms (WEMs, or α-EMs) are expected to outperform the traditional (logarithmic) EM algorithm in convergence speed; the traditional approach is a special case of the new WEM. In this paper, general theoretical discussions are given first. Then, clear-cut evidence of faster convergence than the ordinary EM approach is given for the case of mixture-of-experts neural networks. This proceeds in three steps: first, concrete algorithms are presented; second, their convergence is verified theoretically; third, experiments on mixture-of-experts learning demonstrate the superiority of the WEM. Besides the supervised learning, an unsupervised case on a Gaussian mixture is also examined, where faster convergence of the WEM is observed again.
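The key structural point of the abstract, that the ordinary logarithmic EM arises as a special case of an extended-logarithm family, can be illustrated with a minimal numerical sketch. The power-type parameterization and the function name `alpha_log` below are assumptions for illustration only, not necessarily the exact definition used in the paper.

```python
import math

def alpha_log(x: float, alpha: float) -> float:
    """A power-type extended logarithm (assumed form for illustration).

    At alpha == -1 this reduces to the ordinary natural logarithm,
    mirroring the abstract's claim that the traditional (logarithmic)
    EM algorithm is a special case of the alpha-weighted family.
    """
    if alpha == -1.0:
        return math.log(x)
    p = (1.0 + alpha) / 2.0  # exponent controlled by the weight alpha
    return (x ** p - 1.0) / p

# As alpha approaches -1, the extended logarithm approaches ln(x):
for a in (-0.9, -0.99, -0.999):
    print(a, alpha_log(2.0, a), math.log(2.0))
```

Under this assumed form, maximizing an α-weighted surrogate built from `alpha_log` instead of `math.log` would give a one-parameter family of EM-like updates containing the classical algorithm at α = -1.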
Original language | English |
---|---|
Title of host publication | IEEE International Conference on Neural Networks - Conference Proceedings |
Editors | Anon |
Place of Publication | Piscataway, NJ, United States |
Publisher | IEEE |
Pages | 2306-2311 |
Number of pages | 6 |
Volume | 3 |
Publication status | Published - 1998 |
Event | Proceedings of the 1998 IEEE International Joint Conference on Neural Networks. Part 1 (of 3) - Anchorage, AK, USA |
Duration | 1998 May 4 → 1998 May 9 |
Other
Other | Proceedings of the 1998 IEEE International Joint Conference on Neural Networks. Part 1 (of 3) |
---|---|
City | Anchorage, AK, USA |
Period | 98/5/4 → 98/5/9 |
ASJC Scopus subject areas
- Software