Relation between weight initialization of neural networks and pruning algorithms: case study on Mackey-Glass time series

W. Wan, K. Hirasawa, J. Hu, J. Murata

Research output: Contribution to conference › Paper

4 Citations (Scopus)

Abstract

The implementation of weight initialization is directly related to the convergence of learning algorithms. In this paper we present a case study on the well-known Mackey-Glass time-series problem in order to find relations between the weight initialization of neural networks and pruning algorithms. The pruning algorithm used in the simulations is the Laplace regularizer method, that is, the backpropagation algorithm with a Laplace regularizer added to the criterion function. Simulation results show that different kinds of initial weight matrices yield almost the same generalization ability when the pruning algorithm is used, at least on the Mackey-Glass time series.
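The abstract's setup can be sketched in code. The paper does not specify its network architecture or hyperparameters here, so the following is an illustrative NumPy sketch under assumed settings: a Mackey-Glass series generated by Euler integration of the standard delay equation (β = 0.2, γ = 0.1, n = 10, τ = 17), and a one-hidden-layer network trained by backpropagation with a Laplace (L1) regularizer λ·Σ|w| added to the squared-error criterion, which drives small weights toward zero so they can be pruned.

```python
import numpy as np

def mackey_glass(length=500, tau=17, beta=0.2, gamma=0.1, n=10, x0=1.2):
    """Generate a Mackey-Glass series by Euler integration (dt = 1).

    dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t)
    """
    x = np.zeros(length + tau + 1)
    x[:tau + 1] = x0                     # constant initial history
    for t in range(tau + 1, length + tau + 1):
        xd = x[t - 1 - tau]              # delayed state
        x[t] = x[t - 1] + beta * xd / (1 + xd**n) - gamma * x[t - 1]
    return x[tau + 1:]

def train_l1_mlp(series, n_in=4, n_hidden=10, lam=1e-4, lr=0.05,
                 epochs=200, seed=0):
    """One-hidden-layer MLP trained by backprop on one-step-ahead
    prediction, with a Laplace regularizer lam * sum(|w|) added to the
    squared-error criterion (hyperparameters are assumptions, not the
    paper's).  The L1 term contributes lam * sign(w) to each gradient,
    pushing small weights toward zero for later pruning."""
    rng = np.random.default_rng(seed)
    X = np.array([series[i:i + n_in] for i in range(len(series) - n_in)])
    y = series[n_in:]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)         # forward pass
        pred = h @ W2 + b2
        err = pred - y
        # gradients of the squared-error term
        gW2 = h.T @ err / len(y)
        gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h**2)
        gW1 = X.T @ dh / len(y)
        gb1 = dh.mean(axis=0)
        # Laplace regularizer adds lam * sign(w) to the weight gradients
        W2 -= lr * (gW2 + lam * np.sign(W2))
        W1 -= lr * (gW1 + lam * np.sign(W1))
        b2 -= lr * gb2
        b1 -= lr * gb1
    return W1, W2

# Pruning step: zero out weights whose magnitude fell below a threshold.
series = mackey_glass(200)
W1, W2 = train_l1_mlp(series)
pruned_W1 = np.where(np.abs(W1) < 0.05, 0.0, W1)
```

Repeating this with different initial weight matrices (different `seed` values, scales, or distributions) is the experiment the abstract describes: comparing the generalization error of the pruned networks across initializations.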

Original language: English
Pages: 1750-1755
Number of pages: 6
Publication status: Published - 2001 Jan 1
Externally published: Yes
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States
Duration: 2001 Jul 15 - 2001 Jul 19

Conference

Conference: International Joint Conference on Neural Networks (IJCNN'01)
Country: United States
City: Washington, DC
Period: 01/7/15 - 01/7/19

Keywords

  • Mackey-Glass time series
  • Pruning methods
  • Weight initialization

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence


Cite this

Wan, W., Hirasawa, K., Hu, J., & Murata, J. (2001). Relation between weight initialization of neural networks and pruning algorithms: case study on Mackey-Glass time series (pp. 1750-1755). Paper presented at the International Joint Conference on Neural Networks (IJCNN'01), Washington, DC, United States.