New method to prune the neural network

Weishui Wan, Kotaro Hirasawa, Jinglu Hu, Chunzhi Jin

Research output: Paper, peer-reviewed

1 citation (Scopus)

Abstract

Using the backpropagation algorithm (BP) to train neural networks is widely adopted in both theory and practice. However, BP yields a distributed weight representation: the weight matrix of the final trained network is usually not sparse, which prevents its use for discovering rules about the functional relations inherent between the input and output data. Some form of structure optimization is therefore needed to improve this shortcoming. With this in mind, this paper proposes a new method for pruning neural networks based on statistical quantities of the network. Compared with other known pruning methods, such as structural learning with forgetting (SLF) and the RPROP algorithm, the proposed method attains comparable or even better results without a noticeable increase in computational load. Detailed simulations on the Iris data set support this assertion.
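The abstract does not specify which statistical quantities drive the pruning decision, so the sketch below is only a generic illustration of statistics-based weight pruning, not the authors' method. The function name `prune_by_statistics`, the threshold rule `mean(|w|) - k * std(|w|)`, and the parameter `k` are all hypothetical choices for illustration.

```python
import numpy as np

def prune_by_statistics(weights, k=0.5):
    """Zero out weights whose magnitude falls below a per-layer
    statistical threshold: mean(|w|) - k * std(|w|).

    weights : list of 2-D numpy arrays, one per layer.
    k       : controls pruning aggressiveness (hypothetical parameter).
    Returns the pruned weight list and the binary masks used.
    """
    pruned, masks = [], []
    for w in weights:
        mag = np.abs(w)
        threshold = mag.mean() - k * mag.std()
        mask = (mag >= threshold).astype(w.dtype)  # 1 = keep, 0 = prune
        pruned.append(w * mask)
        masks.append(mask)
    return pruned, masks

# Example: a small 4-3-3 network, matching the Iris task's
# 4 input features and 3 output classes.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 3)), rng.normal(size=(3, 3))]
pruned, masks = prune_by_statistics(layers, k=0.5)
for i, m in enumerate(masks):
    print(f"layer {i}: kept {int(m.sum())}/{m.size} weights")
```

In such schemes the masks are typically reapplied after each retraining step so that pruned connections stay removed while the surviving weights are fine-tuned.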

Original language: English
Pages: 449-454
Number of pages: 6
Publication status: Published - 1 Jan 2000
Externally published: Yes
Event: International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy
Duration: 24 Jul 2000 → 27 Jul 2000


ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

