Abstract
In recent years, reinforcement learning, which can acquire reflexive and adaptive behavior, has attracted attention as a learning method for robot control. However, many unsolved problems must be addressed before the method can be put to practical use. One of these is the handling of the state space and the action space. Most existing reinforcement learning algorithms deal with discrete state and action spaces. When the discretization of the search space is coarse, subtle control cannot be achieved (imperfect perception); conversely, when it is too fine, the search space grows accordingly and stable convergence of learning cannot be obtained (curse of dimensionality). In this paper, we propose a nested actor/critic algorithm that can deal with continuous state and action spaces. The proposed method inserts a child actor/critic into the actor part of a parent actor/critic. We examined the proposed algorithm on a stabilization control problem, in both simulation and a prototype of a joint-driven double inverted pendulum.
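The abstract describes the structure (a child actor/critic nested inside the actor part of a parent actor/critic) but not the implementation. As a rough illustration only, the sketch below shows a minimal Gaussian-policy actor/critic with linear features, and a nested wrapper in which the child observes the state plus the parent's action and emits a fine correction. The class names, feature layout, learning rates, and the "parent action + child correction" composition are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

class ActorCritic:
    """Minimal continuous actor/critic with linear function approximation.

    The actor is the mean of a Gaussian policy; the critic estimates V(s).
    Both are linear in a fixed feature vector phi. (Illustrative sketch,
    not the paper's implementation.)
    """
    def __init__(self, n_features, sigma=0.3, alpha=0.01, beta=0.05, gamma=0.95):
        self.w_actor = np.zeros(n_features)   # policy-mean weights
        self.w_critic = np.zeros(n_features)  # state-value weights
        self.sigma, self.alpha, self.beta, self.gamma = sigma, alpha, beta, gamma

    def act(self, phi):
        # sample a continuous action from a Gaussian around the linear mean
        return self.w_actor @ phi + self.sigma * rng.standard_normal()

    def update(self, phi, action, reward, phi_next, done):
        v = self.w_critic @ phi
        v_next = 0.0 if done else self.w_critic @ phi_next
        td_error = reward + self.gamma * v_next - v  # one TD error drives both parts
        self.w_critic += self.beta * td_error * phi
        # policy-gradient step for the Gaussian mean: grad log pi = (a - mu)/sigma^2 * phi
        mean = self.w_actor @ phi
        self.w_actor += self.alpha * td_error * (action - mean) / self.sigma**2 * phi
        return td_error

class NestedActorCritic:
    """Hypothetical nesting: the child actor/critic refines the parent's action."""
    def __init__(self, n_features, **kw):
        self.parent = ActorCritic(n_features, **kw)
        self.child = ActorCritic(n_features + 1, **kw)  # child also sees parent's action

    def act(self, phi):
        # parent proposes a coarse action; child adds a fine continuous correction
        a_parent = self.parent.act(phi)
        phi_child = np.append(phi, a_parent)
        a_child = self.child.act(phi_child)
        return a_parent + a_child, (a_parent, a_child, phi_child)

    def update(self, phi, cache, reward, phi_next, a_parent_next, done):
        a_parent, a_child, phi_child = cache
        phi_child_next = np.append(phi_next, a_parent_next)
        self.parent.update(phi, a_parent, reward, phi_next, done)
        self.child.update(phi_child, a_child, reward, phi_child_next, done)
```

Composing the action as parent output plus child correction keeps both search spaces continuous, which is one plausible way to avoid the coarse-discretization and curse-of-dimensionality problems the abstract describes.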
Original language | English |
---|---|
Title of host publication | ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 2610-2614 |
Number of pages | 5 |
Volume | 5 |
ISBN (Electronic) | 9810475241, 9789810475246 |
DOIs | |
Publication status | Published - 2002 |
Event | 9th International Conference on Neural Information Processing, ICONIP 2002 - Singapore, Singapore. Duration: 2002 Nov 18 → 2002 Nov 22 |
Other
Other | 9th International Conference on Neural Information Processing, ICONIP 2002 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 02/11/18 → 02/11/22 |
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing