Distributed Multi-Agent Deep Reinforcement Learning for Robust Coordination against Noise

Yoshinari Motokawa, Toshiharu Sugawara

Research output: Conference contribution

Abstract

In multi-agent systems, noise reduction techniques are essential for improving overall system reliability, because agents must rely on limited environmental information to develop cooperative and coordinated behaviors with the surrounding agents. However, previous studies have often applied centralized noise reduction methods to build robust and versatile coordination in noisy multi-agent environments, whereas distributed and decentralized autonomous agents are more plausible for real-world applications. In this paper, we introduce a distributed attentional actor architecture model for a multi-agent system (DA3-X), with which we demonstrate that agents can selectively learn in a noisy environment and behave cooperatively. We experimentally evaluate the effectiveness of DA3-X by comparing learning methods with and without it, and show that agents with DA3-X achieve better performance than baseline agents. Furthermore, we visualize heatmaps of attentional weights from DA3-X to analyze how noise influences the decision-making process and coordinated behavior.

Original language: English
Host publication title: 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9781728186719
DOI
Publication status: Published - 2022
Event: 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Padua, Italy
Duration: 18 Jul 2022 - 23 Jul 2022

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2022-July

Conference

Conference: 2022 International Joint Conference on Neural Networks, IJCNN 2022
Country/Territory: Italy
City: Padua
Period: 22/7/18 - 22/7/23

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
