Learning of real robot's inverse dynamics by a forward-propagation learning rule

Hiroki Mori, Yoshihiro Ohama, Naohiro Fukumura, Yoji Uno

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

A forward-propagation learning rule (FPL) has been proposed for a neural network (NN) to learn an inverse model of a controlled object. A feature of FPL is that the trajectory error propagates forward through the NN, and appropriate values of two learning parameters must be set. Previously, FPL had been tested only in simulation, on several kinds of controlled objects such as a two-link arm in a horizontal plane. In this work, we applied FPL to AIBO and demonstrated its validity on a real controlled object. First, we carried out a learning experiment by computer simulation on the inverse dynamics of a two-link arm in the sagittal plane with viscosity and Coulomb friction. In this simulation, a low-pass filter (LPF) was applied to the realized trajectories because Coulomb friction causes them to vibrate. The simulation results showed that the learning process is stable for appropriately chosen sets of the learning parameters, although it is more sensitive to the parameter values owing to the friction and gravity terms. Finally, we applied FPL to motor control of an AIBO leg. The inverse dynamics model was acquired by FPL in only about 150 learning iterations. From these results, the validity of FPL was confirmed by real robot control experiments.
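The inverse dynamics problem the abstract describes maps a desired trajectory (joint angles, velocities, accelerations) to the joint torques that realize it. As a rough illustration only, the following sketch computes inverse dynamics for a two-link arm in the sagittal plane with viscous and Coulomb friction, using the standard rigid-body equations; all numerical parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical link parameters (NOT from the paper), for illustration only.
M1, M2 = 1.0, 0.8        # link masses [kg]
L1 = 0.3                 # length of link 1 [m]
LC1, LC2 = 0.15, 0.12    # distances from joints to link centers of mass [m]
I1, I2 = 0.02, 0.015     # link moments of inertia about the centers of mass [kg m^2]
B = np.array([0.05, 0.05])   # viscous friction coefficients [N m s/rad]
FC = np.array([0.1, 0.1])    # Coulomb friction magnitudes [N m]
G = 9.81                 # gravity acts in the sagittal plane [m/s^2]

def inverse_dynamics(q, dq, ddq):
    """Joint torques tau = M(q) ddq + c(q, dq) + g(q) + B dq + FC sign(dq)
    for a two-link arm in the sagittal plane, with joint angles measured
    from the horizontal."""
    c2 = np.cos(q[1])
    # Inertia matrix M(q)
    m11 = I1 + I2 + M1 * LC1**2 + M2 * (L1**2 + LC2**2 + 2 * L1 * LC2 * c2)
    m12 = I2 + M2 * (LC2**2 + L1 * LC2 * c2)
    m22 = I2 + M2 * LC2**2
    Mq = np.array([[m11, m12], [m12, m22]])
    # Coriolis and centrifugal terms c(q, dq)
    h = M2 * L1 * LC2 * np.sin(q[1])
    cvec = np.array([-h * (2 * dq[0] * dq[1] + dq[1]**2), h * dq[0]**2])
    # Gravity terms g(q) -- these vanish in the horizontal-plane case
    gvec = np.array([
        (M1 * LC1 + M2 * L1) * G * np.cos(q[0]) + M2 * LC2 * G * np.cos(q[0] + q[1]),
        M2 * LC2 * G * np.cos(q[0] + q[1]),
    ])
    return Mq @ ddq + cvec + gvec + B * dq + FC * np.sign(dq)
```

In the paper's setting, an NN trained by FPL approximates this mapping from data rather than computing it from known parameters; the friction and gravity terms above are precisely what makes the sagittal-plane case harder than the horizontal-plane simulations.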

Original language: English
Pages (from-to): 38-48
Number of pages: 11
Journal: Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi)
Volume: 161
Issue number: 4
DOIs
Publication status: Published - 2007 Dec 1
Externally published: Yes

Keywords

  • AIBO
  • Forward propagation
  • Inverse dynamics
  • Motor control
  • Multilayered neural network

ASJC Scopus subject areas

  • Energy Engineering and Power Technology
  • Electrical and Electronic Engineering
