Abstract
This paper introduces an imitative model that enables a robot to acquire viewpoints of the self and of others from its own sensory-motor experiences. This capability is important for recognizing and imitating actions observed from various directions. Existing methods require coordinate transformations supplied by human designers, or complex learning modules, to acquire a viewpoint. In the proposed model, several neurons dedicated to generated actions and to the viewpoints of the self and others are added to a dynamic neural network model referred to as a continuous-time recurrent neural network (CTRNN). The training data are labeled with types of actions and viewpoints, and each label is linked to an internal state. We implemented this model on a robot and trained the model to perform object-manipulation actions. Representations of behavior and viewpoint were formed in the internal states of the CTRNN. In addition, we analyzed the initial values of the internal states that represent the viewpoint information, and confirmed that the distinction between observational perspectives of others' actions was self-organized in the space of these initial values. By combining the initial values of the internal states that describe the behavior and the viewpoint, the system can generate patterns that were not included in the training data.
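The abstract describes a CTRNN in which label-linked initial internal states encode the action type and the viewpoint, and rolling the dynamics forward from those initial values generates the corresponding action sequence. The sketch below illustrates that general idea in NumPy; the layer sizes, time constant, discrete-time Euler update, and the `pb_init` values are illustrative assumptions for this sketch, not the authors' implementation (in the paper these initial values are obtained through training).

```python
import numpy as np

# Minimal CTRNN sketch: a few internal-state units double as
# "parametric" units whose initial values encode action and viewpoint.
# All names and sizes here are illustrative assumptions.

class CTRNN:
    def __init__(self, n_in, n_hidden, n_out, n_pb, tau=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.tau = tau                               # time constant of the internal dynamics
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b = np.zeros(n_hidden)
        self.n_pb = n_pb                             # units whose *initial* state carries
                                                     # the action/viewpoint labels

    def run(self, inputs, pb_init):
        """Roll the network forward; pb_init sets the initial internal
        state of the dedicated action/viewpoint units."""
        u = np.zeros(self.W_rec.shape[0])            # internal states
        u[:self.n_pb] = pb_init                      # label-linked initial values
        outputs = []
        for x in inputs:
            y = np.tanh(u)
            # discrete-time Euler update of the CTRNN internal state
            u = u + (1.0 / self.tau) * (-u + self.W_rec @ y + self.W_in @ x + self.b)
            outputs.append(self.W_out @ np.tanh(u))
        return np.array(outputs)

# Hypothetical usage: generate a 50-step trajectory for one
# (action, viewpoint) combination given by the initial values.
net = CTRNN(n_in=4, n_hidden=20, n_out=4, n_pb=4)
pb = np.array([0.5, -0.5, 0.8, -0.2])                # hypothetical learned initial values
traj = net.run(np.zeros((50, 4)), pb)
print(traj.shape)                                    # (50, 4)
```

Under this reading, generating an unlearned action–viewpoint combination amounts to recombining initial values learned separately for each label, with no change to the trained weights.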
| Original language | English |
| --- | --- |
| Title of host publication | 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 326-331 |
| Number of pages | 6 |
| ISBN (Print) | 9781467393201 |
| DOI | |
| Publication status | Published - 2 Dec 2015 |
| Event | 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015 - Providence, United States. Duration: 13 Aug 2015 → 16 Aug 2015 |
Other

| Other | 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015 |
| --- | --- |
| Country/Territory | United States |
| City | Providence |
| Period | 13/8/15 → 16/8/15 |
ASJC Scopus subject areas
- Artificial Intelligence