Improvement of generalization ability for identifying dynamic systems by using universal learning networks

S. Kim, K. Hirasawa, Takayuki Furuzuki

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

This paper studies how the generalization ability of models of dynamic systems can be improved by taking advantage of the second-order derivatives of the network outputs with respect to the external inputs. The proposed method can be regarded as a direct implementation of the well-known regularization technique using the higher-order derivatives of Universal Learning Networks (ULNs). ULNs consist of a number of interconnected nodes, where each node may contain any continuously differentiable nonlinear function, and each pair of nodes can be connected by multiple branches with arbitrary time delays. A generalized learning algorithm has been derived for ULNs in which both the first-order derivatives (gradients) and the higher-order derivatives are incorporated. First, the method for computing the second-order derivatives of ULNs is discussed. Then a new method for implementing the regularization term is presented. Finally, simulation studies on the identification of a nonlinear dynamic system with noise are carried out to demonstrate the effectiveness of the proposed method. The simulation results show that the proposed method can significantly improve the generalization ability of neural networks, in particular in that (1) a robust network can be obtained even when branches of the trained ULN are destroyed, and (2) the obtained performance does not depend on the initial parameter values.
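The regularization idea the abstract describes, penalizing second-order derivatives of a network's output with respect to its external input, can be sketched as follows. This is a minimal illustration, not the authors' ULN formulation: the toy `tanh` model, the central finite-difference estimator, and the weight `lam` are all assumptions made for the example.

```python
import math

def second_derivative(f, x, h=1e-4):
    # Central finite-difference estimate of f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def model(w, x):
    # Toy differentiable model standing in for a ULN node function.
    return math.tanh(w * x)

def regularized_loss(w, x, y, lam=1e-3):
    pred = model(w, x)
    data_term = (pred - y) ** 2
    # Penalize the curvature of the output w.r.t. the external input x,
    # which encourages a smoother input-output mapping and hence
    # better generalization, as the paper's regularization term does.
    curvature = second_derivative(lambda u: model(w, u), x)
    return data_term + lam * curvature ** 2

loss = regularized_loss(1.0, 0.5, 0.3)
```

Minimizing this loss over the parameters (here just `w`) trades data fit against curvature of the learned mapping; the paper's contribution is computing such second-order derivatives exactly through the ULN structure rather than by finite differences.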

Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
Pages: 1203-1208
Number of pages: 6
Volume: 2
Publication status: Published - 2001
Externally published: Yes
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC
Duration: 2001 Jul 15 - 2001 Jul 19

Other

Other: International Joint Conference on Neural Networks (IJCNN'01)
City: Washington, DC
Period: 01/7/15 - 01/7/19

Fingerprint

Dynamical systems
Derivatives
Learning algorithms
Time delay
Neural networks

ASJC Scopus subject areas

  • Software

Cite this

Kim, S., Hirasawa, K., & Furuzuki, T. (2001). Improvement of generalization ability for identifying dynamic systems by using universal learning networks. In Proceedings of the International Joint Conference on Neural Networks (Vol. 2, pp. 1203-1208).

@inproceedings{5833071279c24c48b7c147fa43052c6d,
title = "Improvement of generalization ability for identifying dynamic systems by using universal learning networks",
abstract = "This paper studies how the generalization ability of models of dynamic systems can be improved by taking advantage of the second-order derivatives of the network outputs with respect to the external inputs. The proposed method can be regarded as a direct implementation of the well-known regularization technique using the higher-order derivatives of Universal Learning Networks (ULNs). ULNs consist of a number of interconnected nodes, where each node may contain any continuously differentiable nonlinear function, and each pair of nodes can be connected by multiple branches with arbitrary time delays. A generalized learning algorithm has been derived for ULNs in which both the first-order derivatives (gradients) and the higher-order derivatives are incorporated. First, the method for computing the second-order derivatives of ULNs is discussed. Then a new method for implementing the regularization term is presented. Finally, simulation studies on the identification of a nonlinear dynamic system with noise are carried out to demonstrate the effectiveness of the proposed method. The simulation results show that the proposed method can significantly improve the generalization ability of neural networks, in particular in that (1) a robust network can be obtained even when branches of the trained ULN are destroyed, and (2) the obtained performance does not depend on the initial parameter values.",
author = "S. Kim and K. Hirasawa and Takayuki Furuzuki",
year = "2001",
language = "English",
volume = "2",
pages = "1203--1208",
booktitle = "Proceedings of the International Joint Conference on Neural Networks",

}

TY - GEN

T1 - Improvement of generalization ability for identifying dynamic systems by using universal learning networks

AU - Kim, S.

AU - Hirasawa, K.

AU - Furuzuki, Takayuki

PY - 2001

Y1 - 2001

N2 - This paper studies how the generalization ability of models of dynamic systems can be improved by taking advantage of the second-order derivatives of the network outputs with respect to the external inputs. The proposed method can be regarded as a direct implementation of the well-known regularization technique using the higher-order derivatives of Universal Learning Networks (ULNs). ULNs consist of a number of interconnected nodes, where each node may contain any continuously differentiable nonlinear function, and each pair of nodes can be connected by multiple branches with arbitrary time delays. A generalized learning algorithm has been derived for ULNs in which both the first-order derivatives (gradients) and the higher-order derivatives are incorporated. First, the method for computing the second-order derivatives of ULNs is discussed. Then a new method for implementing the regularization term is presented. Finally, simulation studies on the identification of a nonlinear dynamic system with noise are carried out to demonstrate the effectiveness of the proposed method. The simulation results show that the proposed method can significantly improve the generalization ability of neural networks, in particular in that (1) a robust network can be obtained even when branches of the trained ULN are destroyed, and (2) the obtained performance does not depend on the initial parameter values.

UR - http://www.scopus.com/inward/record.url?scp=0034878580&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0034878580&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0034878580

VL - 2

SP - 1203

EP - 1208

BT - Proceedings of the International Joint Conference on Neural Networks

ER -