TY - GEN
T1 - Visualization of topographical internal representation of learning robots
AU - Kuramoto, Shiori
AU - Sawada, Hideyuki
AU - Hartono, Pitoyo
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
N2 - The objective of this study is to understand the learned strategies of neural-network-controlled robots in relation to their physical learning environments by visualizing the internal layer of the neural network. During the past few years, neural-network-controlled robots that are able to learn in physical environments have become more common. While they can autonomously acquire strategies without human supervision, it is becoming difficult to understand their strategies, especially when the robots, their environments, and their tasks are complicated. In critical fields that involve human safety, such as self-driving vehicles or medical robots, it is important for humans to understand the strategies of the robots. In this preliminary study, we propose a hierarchical neural network with a two-dimensional topographical internal representation for training robots in physical environments. The 2D representation can then be visualized and analyzed to allow us to intuitively understand the input-output strategy of the robots in the context of their learning environments. In this paper, we explain the learning dynamics of the neural network and present a visual analysis of several physical experiments.
AB - The objective of this study is to understand the learned strategies of neural-network-controlled robots in relation to their physical learning environments by visualizing the internal layer of the neural network. During the past few years, neural-network-controlled robots that are able to learn in physical environments have become more common. While they can autonomously acquire strategies without human supervision, it is becoming difficult to understand their strategies, especially when the robots, their environments, and their tasks are complicated. In critical fields that involve human safety, such as self-driving vehicles or medical robots, it is important for humans to understand the strategies of the robots. In this preliminary study, we propose a hierarchical neural network with a two-dimensional topographical internal representation for training robots in physical environments. The 2D representation can then be visualized and analyzed to allow us to intuitively understand the input-output strategy of the robots in the context of their learning environments. In this paper, we explain the learning dynamics of the neural network and present a visual analysis of several physical experiments.
KW - autonomous robots
KW - explainable AI
KW - hierarchical neural networks
KW - reinforcement learning
KW - self-organizing map
UR - http://www.scopus.com/inward/record.url?scp=85093872646&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093872646&partnerID=8YFLogxK
U2 - 10.1109/IJCNN48605.2020.9206675
DO - 10.1109/IJCNN48605.2020.9206675
M3 - Conference contribution
AN - SCOPUS:85093872646
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 International Joint Conference on Neural Networks, IJCNN 2020
Y2 - 19 July 2020 through 24 July 2020
ER -