Universal Learning Networks with varying parameters

Kotaro Hirasawa, Jinglu Hu, Junichi Murata, Chunzhi Jin, Hironobu Etoh, Hironobu Katagiri

Research output: Contribution to conference › Paper

1 Citation (Scopus)

Abstract

The Universal Learning Network (ULN), a super-set of supervised learning networks, has already been proposed. Parameters in a ULN are trained to optimize a criterion function, as in conventional neural networks, and after training they are used as constants. In this paper, a new method that alters the parameters depending on the network flows is presented in order to enhance the representation abilities of networks. The proposed method uses two kinds of networks: the first is a basic network that includes varying parameters, and the second is a network that calculates the optimal varying parameters depending on the network flows of the basic network. It is also proposed that any type of network, such as fuzzy inference networks, radial basis function networks, and neural networks, can be used for both the basic network and the parameter-calculation network. Simulations in which the parameters of a neural network are altered by a fuzzy inference network show that networks with the same number of varying parameters have higher representation abilities than conventional networks.
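As a rough illustration of the two-network scheme described above (not the authors' implementation; all names, shapes, and activation choices here are assumptions), a parameter-calculation network can produce the weights of a basic network from the basic network's current input flow, so the effective parameters vary with the data instead of staying constant after training:

```python
import numpy as np

def parameter_network(x, V):
    """Parameter-calculation network: maps the basic network's
    current input (a stand-in for the 'network flow') to a set
    of varying parameters. V is this network's own trained weights."""
    return np.tanh(V @ x)

def basic_network(x, w):
    """Basic network whose weight vector w is supplied externally
    at each evaluation instead of being a fixed constant."""
    return np.tanh(np.dot(w, x))

rng = np.random.default_rng(0)
V = rng.normal(size=(3, 3))   # weights of the parameter-calculation network

x = np.array([0.5, -0.2, 0.1])
w = parameter_network(x, V)   # varying parameters depend on the flow x
y = basic_network(x, w)       # basic network evaluated with those parameters
```

In the paper the parameter-calculation network is, for example, a fuzzy inference network; the sketch uses a one-layer tanh network only to keep the example self-contained.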

Original language: English
Pages: 1302-1307
Number of pages: 6
Publication status: Published - 1999 Dec 1
Event: International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA
Duration: 1999 Jul 10 - 1999 Jul 16


ASJC Scopus subject areas

  • Software
  • Artificial Intelligence


  • Cite this

    Hirasawa, K., Hu, J., Murata, J., Jin, C., Etoh, H., & Katagiri, H. (1999). Universal Learning Networks with varying parameters. 1302-1307. Paper presented at the International Joint Conference on Neural Networks (IJCNN'99), Washington, DC, USA.