Acquisition of viewpoint transformation and action mappings via sequence to sequence imitative learning by deep neural networks

Ryoichi Nakajo, Shingo Murata, Hiroaki Arie, Tetsuya Ogata*

*Corresponding author for this work

Research output: Article › peer-review

1 citation (Scopus)

Abstract

We propose an imitative learning model that allows a robot to acquire positional relations between the demonstrator and the robot, and to transform observed actions into robotic actions. Providing robots with imitative capabilities allows us to teach them novel actions without resorting to trial-and-error approaches. Existing methods for imitative robotic learning require mathematical formulations or conversion modules to translate positional relations between demonstrators and robots. The proposed model uses two neural networks: a convolutional autoencoder (CAE) and a multiple timescale recurrent neural network (MTRNN). The CAE is trained to extract visual features from raw images captured by a camera, and the MTRNN is trained to integrate sensory-motor information and to predict next states. We implemented this model on a robot and conducted sequence-to-sequence learning that allows the robot to transform demonstrator actions into robot actions. Through training of the proposed model, representations of actions, manipulated objects, and positional relations are formed in the hierarchical structure of the MTRNN. After training, we confirmed the model's capability to generate unlearned imitative patterns.
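The abstract describes a two-stage architecture: a CAE compresses camera frames into visual features, and an MTRNN with fast and slow context units predicts the next sensory-motor state. Below is a minimal, hypothetical sketch of how such a pipeline could be wired up; it is not the authors' implementation, and all layer sizes, time constants, joint dimensions, and names are illustrative assumptions.

```python
# Hypothetical CAE + MTRNN sketch (PyTorch); sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder: the encoder output serves as the visual feature."""
    def __init__(self, feat_dim=30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, feat_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        feat = self.encoder(img)
        return feat, self.decoder(feat)

class MTRNN(nn.Module):
    """Continuous-time RNN with fast and slow context units (different time constants)."""
    def __init__(self, in_dim, fast_dim=60, slow_dim=20, tau_fast=2.0, tau_slow=30.0):
        super().__init__()
        self.tau_fast, self.tau_slow = tau_fast, tau_slow
        self.fast_dim, self.slow_dim = fast_dim, slow_dim
        # Fast units receive the input and both contexts; slow units receive contexts only.
        self.w_fast = nn.Linear(in_dim + fast_dim + slow_dim, fast_dim)
        self.w_slow = nn.Linear(fast_dim + slow_dim, slow_dim)
        self.readout = nn.Linear(fast_dim, in_dim)  # predicts the next sensory-motor state

    def step(self, x, u_fast, u_slow):
        c_fast, c_slow = torch.tanh(u_fast), torch.tanh(u_slow)
        # Leaky (continuous-time) update: a larger tau means slower dynamics.
        u_fast = (1 - 1 / self.tau_fast) * u_fast + \
                 self.w_fast(torch.cat([x, c_fast, c_slow], dim=-1)) / self.tau_fast
        u_slow = (1 - 1 / self.tau_slow) * u_slow + \
                 self.w_slow(torch.cat([c_fast, c_slow], dim=-1)) / self.tau_slow
        return self.readout(torch.tanh(u_fast)), u_fast, u_slow

# Usage: encode one observed frame, then predict the next sensory-motor state.
cae, mtrnn = CAE(), MTRNN(in_dim=30 + 8)   # 30 visual features + 8 joint angles (assumed)
img = torch.rand(1, 3, 64, 64)
joints = torch.zeros(1, 8)
feat, _ = cae(img)
u_f, u_s = torch.zeros(1, mtrnn.fast_dim), torch.zeros(1, mtrnn.slow_dim)
pred, u_f, u_s = mtrnn.step(torch.cat([feat, joints], dim=-1), u_f, u_s)
```

In a sequence-to-sequence setup of this kind, the step function would be unrolled over an observed demonstrator sequence and then over the corresponding robot action sequence, with the slow context units carrying information across the two phases.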

Original language: English
Article number: 46
Journal: Frontiers in Neurorobotics
Volume: 12
DOI
Publication status: Published - 2018

ASJC Scopus subject areas

  • Biomedical Engineering
  • Artificial Intelligence

