Stable Deep Reinforcement Learning Method by Predicting Uncertainty in Rewards as a Subtask

Kanata Suzuki, Tetsuya Ogata*

*Corresponding author of this work

Research output: Conference contribution

Abstract

In recent years, a variety of tasks have been accomplished by deep reinforcement learning (DRL). However, when applying DRL to tasks in a real-world environment, designing an appropriate reward is difficult. Rewards obtained via actual hardware sensors may include noise, misinterpretation, or failed observations. The learning instability caused by these unstable signals is a problem that remains to be solved in DRL. In this work, we propose an approach that extends existing DRL models by adding a subtask to directly estimate the variance contained in the reward signal. The model then takes the feature map learned by the subtask in a critic network and sends it to the actor network. This enables stable learning that is robust to the effects of potential noise. The results of experiments in the Atari game domain with unstable reward signals show that our method stabilizes training convergence. We also discuss the extensibility of the model by visualizing feature maps. This approach has the potential to make DRL more practical for use in noisy, real-world scenarios.
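The core idea described in the abstract, learning to predict the variance of the reward signal so that noisy samples are down-weighted, is commonly realized with a Gaussian heteroscedastic loss. The sketch below is illustrative only and is not taken from the paper; the function name and interface are assumptions:

```python
import numpy as np

def heteroscedastic_loss(reward, pred_mean, pred_log_var):
    """Negative log-likelihood of a Gaussian with predicted variance.

    The subtask head predicts log-variance; samples it judges noisy
    (high variance) contribute less to the squared-error term, which
    is the mechanism that damps unstable reward signals.
    """
    var = np.exp(pred_log_var)
    return 0.5 * pred_log_var + 0.5 * (reward - pred_mean) ** 2 / var

# The same prediction error is penalized less when the predicted
# variance is high (noisy reward) than when it is low (clean reward):
noisy = heteroscedastic_loss(reward=0.0, pred_mean=1.0, pred_log_var=2.0)
clean = heteroscedastic_loss(reward=0.0, pred_mean=1.0, pred_log_var=-2.0)
```

Minimizing this loss jointly trains the mean and variance heads; the log-variance parameterization keeps the predicted variance positive without an explicit constraint.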

Original language: English
Title of host publication: Neural Information Processing - 27th International Conference, ICONIP 2020, Proceedings
Editors: Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, Irwin King
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 651-662
Number of pages: 12
ISBN (Print): 9783030638320
DOI
Publication status: Published - 2020
Event: 27th International Conference on Neural Information Processing, ICONIP 2020 - Bangkok, Thailand
Duration: Nov 18, 2020 – Nov 22, 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12533 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 27th International Conference on Neural Information Processing, ICONIP 2020
Country/Territory: Thailand
City: Bangkok
Period: 20/11/18 – 20/11/22

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
