### Abstract

The expectation-maximization (EM) algorithm is generalized so that learning proceeds according to adjustable weights expressed in terms of probability measures. The presented method, the weighted EM algorithm (or α-EM algorithm), includes the existing EM algorithm as a special case. It is further found that this learning structure can operate systolically, and that monitors can be added to interact with lower systolic subsystems; this is made possible by attaching building blocks of weighted (or plain) EM learning. The derivation of the whole algorithm is based on generalized divergences. In addition to the discussion of learning, generalizations of basic statistical quantities such as Fisher's efficient score, the Fisher information, and the Cramér-Rao inequality are given; these appear in the update equations of the generalized expectation learning. Experiments show that the presented generalized version contains cases that outperform traditional learning methods.
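The abstract states that the α-EM algorithm contains the plain EM algorithm as a special case. As a rough illustration only (not the paper's derivation), the sketch below implements ordinary EM for a one-dimensional two-component Gaussian mixture, together with one common form of the α-logarithm, which reduces to the natural logarithm in the limit α → -1; the function names `alpha_log` and `em_gmm` are hypothetical, introduced here for the example.

```python
import math
import random

def alpha_log(x, alpha):
    """One common form of the alpha-logarithm:
    L_alpha(x) = (2 / (1 + alpha)) * (x ** ((1 + alpha) / 2) - 1),
    which reduces to the natural log as alpha -> -1."""
    if alpha == -1:
        return math.log(x)
    r = (1.0 + alpha) / 2.0
    return (x ** r - 1.0) / r

def em_gmm(data, iters=50):
    """Plain EM (the alpha -> -1 special case) for a 1-D
    two-component Gaussian mixture."""
    mu = [min(data), max(data)]  # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibilities of each component
        resp = []
        for x in data:
            w = [pi[k]
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k])
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: responsibility-weighted maximum-likelihood updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
        var = [max(v, 1e-6) for v in var]  # guard against collapse
    return pi, mu, var
```

For example, on data drawn from two well-separated Gaussians, `em_gmm` recovers means close to the true ones; `alpha_log(x, alpha)` for `alpha` near -1 closely tracks `math.log(x)`, illustrating how the weighted formulation can contain the standard log-likelihood case.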

| Original language | English |
|---|---|
| Title of host publication | IEEE International Conference on Neural Networks - Conference Proceedings |
| Place of Publication | Piscataway, NJ, United States |
| Publisher | IEEE |
| Pages | 1936-1941 |
| Number of pages | 6 |
| Volume | 3 |
| Publication status | Published - 1997 |
| Event | Proceedings of the 1997 IEEE International Conference on Neural Networks. Part 4 (of 4) - Houston, TX, USA. Duration: 1997 Jun 9 → 1997 Jun 12 |

### Other

| Other | Proceedings of the 1997 IEEE International Conference on Neural Networks. Part 4 (of 4) |
|---|---|
| City | Houston, TX, USA |
| Period | 97/6/9 → 97/6/12 |

### ASJC Scopus subject areas

- Software
- Control and Systems Engineering
- Artificial Intelligence

### Cite this

Matsuyama, Y. (1997). Weighted EM algorithm and block monitoring. In *IEEE International Conference on Neural Networks - Conference Proceedings* (Vol. 3, pp. 1936-1941). Piscataway, NJ, United States: IEEE.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
