Coordinated behavior of cooperative agents using deep reinforcement learning

Elhadji Amadou Oury Diallo, Ayumi Sugiyama, Toshiharu Sugawara

Research output: Article

1 Citation (Scopus)

Abstract

In this work, we focus on an environment where multiple agents with complementary capabilities cooperate to generate non-conflicting joint actions that achieve a specific target. The central problem addressed is how several agents can collectively learn to coordinate their actions so that they complete a given task together without conflicts. Sequential decision-making under uncertainty, however, is one of the most challenging issues for intelligent cooperative systems. To address this, we propose a multi-agent concurrent learning framework in which agents learn coordinated behaviors in order to divide their areas of responsibility. The proposed framework extends recent deep reinforcement learning algorithms, namely DQN, double DQN, and the dueling network architecture. We then investigate how the learned behaviors change with the dynamics of the environment, the reward scheme, and the network structures. Next, we show how agents behave and choose their actions such that the resulting joint actions are optimal. Finally, we show that our method leads to stable solutions in our specific environment.
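For concreteness, the sketch below illustrates the two building blocks the abstract names: a dueling Q-network and a double-DQN target, used by concurrent (independent) learners that each treat the other agents as part of the environment. This is a minimal illustration, not the authors' implementation; the observation encoding, layer sizes, number of agents, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch implementation) of a dueling Q-network and
# a double-DQN target for concurrent, independently learning agents.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.feature(obs)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online, target, reward, next_obs, done, gamma=0.99):
    """Double DQN: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        next_action = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q


if __name__ == "__main__":
    # Two concurrent learners, each with its own online/target networks over a
    # local observation; shapes and random data here are placeholders.
    obs_dim, n_actions, batch = 16, 5, 32
    agents = [(DuelingQNet(obs_dim, n_actions), DuelingQNet(obs_dim, n_actions))
              for _ in range(2)]
    for online, target in agents:
        target.load_state_dict(online.state_dict())
        obs, next_obs = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
        reward, done = torch.zeros(batch), torch.zeros(batch)
        action = torch.randint(0, n_actions, (batch, 1))
        q = online(obs).gather(1, action).squeeze(1)
        y = double_dqn_target(online, target, reward, next_obs, done)
        loss = nn.functional.smooth_l1_loss(q, y)  # per-agent TD loss
```

In this independent-learner setup, each agent minimizes its own TD loss against experience drawn from the shared environment, so coordination must emerge through the reward scheme rather than through a centralized controller.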

Original language: English
Journal: Neurocomputing
DOI
Publication status: Published - 1 Jan 2019

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence

