Multiplication units in feedforward neural networks and its training

Dazi Li, K. Hirasawa, Takayuki Furuzuki, J. Murata

Research output: Chapter in Book/Report/Conference proceeding > Conference contribution

3 Citations (Scopus)

Abstract

This paper applies neural networks with multiplication units to the parity-N problem, the mirror symmetry problem, and a function approximation problem. Higher-order terms in neural networks, such as sigma-pi units, are known to improve the computational power of neural networks considerably, although how real neurons achieve this is still unclear. We use a single multiplication unit to construct the full higher-order term over all inputs, which proved very efficient for the parity-N problem. Our earlier work applying multiplication units to other problems suffered from a drawback of gradient-based algorithms such as backpropagation: they easily become stuck at local minima because of the complexity of the network. To overcome this problem, we train neural networks with multiplication units using RasID, a novel random search that performs an intensified search where good solutions are likely to be found locally and a diversified search to escape from local minima, within a pure random search scheme. The method shows its advantage in the training of neural networks with multiplication units.
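The abstract names two ingredients: a multiplication unit that forms the full higher-order term as a product over all inputs, and RasID, a random search alternating an intensified local search with a diversified search for escaping local minima. The Python sketch below illustrates both ideas on parity-4; the unit form tanh(g * prod(w_i * x_i)), the radius schedule, and the failure threshold are illustrative assumptions, not the paper's published definitions.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, w):
    # Multiplication unit: product of weighted inputs, squashed by tanh.
    # x: (n,) inputs, w: (n+1,) weights; w[-1] is an output gain.
    # The exact unit form used in the paper may differ (assumption).
    return np.tanh(w[-1] * np.prod(w[:-1] * x))

def loss(w, X, y):
    preds = np.array([forward(x, w) for x in X])
    return np.mean((preds - y) ** 2)

def rasid_like_search(X, y, n_weights, iters=2000):
    # RasID-style schedule: shrink the sampling radius while improving
    # (intensified search), expand it after repeated failures
    # (diversified search). An illustrative rule, not the published one.
    w = rng.standard_normal(n_weights)
    best = loss(w, X, y)
    radius, fails = 1.0, 0
    for _ in range(iters):
        cand = w + radius * rng.standard_normal(n_weights)
        c = loss(cand, X, y)
        if c < best:
            w, best, radius, fails = cand, c, radius * 0.95, 0
        else:
            fails += 1
            if fails > 20:
                radius, fails = min(radius * 2.0, 5.0), 0
    return w, best

# Parity-4 with +/-1 encoding: the target equals the product of the
# inputs, so a single multiplication unit can represent it exactly.
n = 4
X = np.array([[1 if b == "1" else -1 for b in format(i, f"0{n}b")]
              for i in range(2 ** n)])
y = np.prod(X, axis=1).astype(float)

w, err = rasid_like_search(X, y, n_weights=n + 1)
print(f"final MSE: {err:.4f}")

With the +/-1 encoding, parity-N is exactly the product of the inputs, which is why one multiplication unit suffices where networks of pure summation units need many hidden units.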

Original language: English
Title of host publication: ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 75-79
Number of pages: 5
Volume: 1
ISBN (Print): 9810475241, 9789810475246
DOIs: 10.1109/ICONIP.2002.1202134
Publication status: Published - 2002
Externally published: Yes
Event: 9th International Conference on Neural Information Processing, ICONIP 2002 - Singapore, Singapore
Duration: 18 Nov 2002 - 22 Nov 2002

Other

9th International Conference on Neural Information Processing, ICONIP 2002
Country: Singapore
City: Singapore
Period: 02/11/18 - 02/11/22

Fingerprint

  • Feedforward neural networks
  • Neural networks
  • Backpropagation algorithms
  • Neurons
  • Mirrors

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Li, D., Hirasawa, K., Furuzuki, T., & Murata, J. (2002). Multiplication units in feedforward neural networks and its training. In ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age (Vol. 1, pp. 75-79). [1202134] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICONIP.2002.1202134
