Abstract
The choice of weight initialization directly affects the convergence of learning algorithms. In this paper we present a case study of the well-known Mackey-Glass time series problem in order to investigate the relation between the weight initialization of neural networks and pruning algorithms. The pruning algorithm used in the simulations is the Laplace regularizer method, i.e., the backpropagation algorithm with a Laplace regularizer added to the criterion function. Simulation results show that different initial weight matrices yield almost the same generalization ability when the pruning algorithm is used, at least for the Mackey-Glass time series.
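The sketch below illustrates the general technique the abstract describes: backpropagation where the criterion function is the squared error plus a Laplace (L1) penalty on the weights, followed by pruning of near-zero weights, applied to one-step-ahead prediction of a Mackey-Glass series. This is not the authors' code; the network size, learning rate, regularization strength, embedding, and pruning threshold are illustrative assumptions.

```python
# Minimal sketch (assumed hyperparameters, not the paper's) of backpropagation
# with a Laplace (L1) regularizer added to the criterion, plus weight pruning,
# on Mackey-Glass time-series prediction.
import numpy as np

def mackey_glass(n_steps=1500, tau=17, beta=0.2, gamma=0.1, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^10) - gamma*x(t)."""
    x = np.full(n_steps + tau, x0)
    for t in range(tau, n_steps + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1.0 + x[t - tau] ** 10)
                                - gamma * x[t])
    return x[tau:]

def make_dataset(series, n_inputs=4, step=6):
    """Predict x(t+step) from n_inputs lagged samples (embedding is assumed)."""
    X, y = [], []
    for t in range((n_inputs - 1) * step, len(series) - step):
        X.append([series[t - k * step] for k in range(n_inputs)])
        y.append(series[t + step])
    return np.array(X), np.array(y)[:, None]

rng = np.random.default_rng(0)
series = mackey_glass()
X, y = make_dataset(series)
n_in, n_hid = X.shape[1], 10

# Random weight initialization -- the quantity whose influence the paper studies.
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, 1));    b2 = np.zeros(1)

lr, lam = 0.01, 1e-4   # learning rate and Laplace-regularizer strength (assumed)
for epoch in range(2000):
    # Forward pass through a single tanh hidden layer.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # Criterion = mean squared error + lam * sum(|w|)  (Laplace regularizer).
    # Backward pass: MSE gradient plus lam * sign(w), the subgradient of |w|.
    g_out = 2.0 * err / len(X)
    gW2 = h.T @ g_out + lam * np.sign(W2)
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ g_h + lam * np.sign(W1)
    gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Pruning: the L1 penalty drives superfluous weights toward zero; remove them.
threshold = 1e-2
W1[np.abs(W1) < threshold] = 0.0
W2[np.abs(W2) < threshold] = 0.0
print("nonzero W1:", np.count_nonzero(W1), "/", W1.size,
      "| nonzero W2:", np.count_nonzero(W2), "/", W2.size)
```

Repeating this with different random seeds (i.e., different initial weight matrices) and comparing test error after pruning mirrors the kind of comparison reported in the abstract.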
Original language | English |
---|---|
Pages | 1750-1755 |
Number of pages | 6 |
Publication status | Published - 1 Jan 2001 |
Externally published | Yes |
Event | International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States. Duration: 15 Jul 2001 → 19 Jul 2001 |
Conference
Conference | International Joint Conference on Neural Networks (IJCNN'01) |
---|---|
Country | United States |
City | Washington, DC |
Period | 15 Jul 2001 → 19 Jul 2001 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence