Abstract
In recent years, reinforcement learning, which can acquire reflexive and adaptive actions, has attracted attention as a learning method for robot control. However, many unsolved problems must be addressed before the method can be put to practical use. One of these problems is the handling of the state space and the action space. Most existing reinforcement-learning algorithms deal with discrete state and action spaces. When the discretization of the search space is coarse, subtle control cannot be achieved (imperfect perception); conversely, when it is too fine, the search space grows accordingly and stable convergence of learning cannot be obtained (curse of dimensionality). In this paper, we propose a nested actor/critic algorithm that can handle continuous state and action spaces. The proposed method inserts a child actor/critic into the actor part of the parent actor/critic. We evaluated the proposed algorithm on a stabilization control problem, both in simulation and on a prototype of a joint-driven double inverted pendulum.
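The abstract does not give the update rules, so the following is only an illustrative sketch of the nested structure it describes: a parent actor/critic producing a coarse continuous action, with a child actor/critic embedded in the actor part to supply a fine correction. The TD(0) critics with linear function approximation, Gaussian actors, and all class names and hyperparameters below are assumptions, not the authors' implementation.

```python
import numpy as np

class ActorCritic:
    """One actor/critic level: linear critic V(s) = v·s and a
    Gaussian actor a ~ N(w·s, sigma) over a continuous action."""
    def __init__(self, state_dim, sigma=0.1, alpha=0.01, beta=0.01, gamma=0.95):
        self.v = np.zeros(state_dim)   # critic weights
        self.w = np.zeros(state_dim)   # actor (policy mean) weights
        self.sigma, self.alpha, self.beta, self.gamma = sigma, alpha, beta, gamma

    def act(self, s, rng):
        # Sample a continuous action around the policy mean.
        return self.w @ s + self.sigma * rng.standard_normal()

    def update(self, s, a, r, s_next):
        # TD(0) error drives both the critic and the actor.
        td = r + self.gamma * (self.v @ s_next) - (self.v @ s)
        self.v += self.alpha * td * s                    # critic update
        self.w += self.beta * td * (a - self.w @ s) * s  # actor update
        return td

class NestedActorCritic:
    """Parent actor/critic whose action is refined by a child
    actor/critic nested inside the actor part."""
    def __init__(self, state_dim):
        self.parent = ActorCritic(state_dim, sigma=0.3)   # coarse search
        self.child = ActorCritic(state_dim, sigma=0.05)   # fine refinement

    def act(self, s, rng):
        coarse = self.parent.act(s, rng)
        fine = self.child.act(s, rng)
        return coarse + fine, coarse, fine

    def update(self, s, coarse, fine, r, s_next):
        # Both levels learn from the same reward signal.
        self.parent.update(s, coarse, r, s_next)
        self.child.update(s, fine, r, s_next)
```

The intent of the nesting is that the parent explores with a wide search unit while the child provides the subtle control that a single coarse discretization cannot, without enlarging a discrete search space.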
Original language | English
---|---
Title of host publication | ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 2610-2614
Number of pages | 5
Volume | 5
ISBN (Electronic) | 9810475241, 9789810475246
DOI | 
Publication status | Published - 2002
Event | 9th International Conference on Neural Information Processing, ICONIP 2002 - Singapore, Singapore. Duration: 18 Nov 2002 → 22 Nov 2002
Other

Other | 9th International Conference on Neural Information Processing, ICONIP 2002
---|---
Country/Territory | Singapore
City | Singapore
Period | 02/11/18 → 02/11/22
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing