Leveraging motor babbling for efficient robot learning

Kei Kase*, Noboru Matsumoto, Tetsuya Ogata

*Corresponding author for this work

Research output: Article, peer-reviewed

1 Citation (Scopus)

Abstract

Deep robotic learning through learning from demonstration allows robots to mimic a given demonstration and generalize their performance to unknown task setups. However, this generalization ability depends heavily on the number of demonstrations, which can be costly to generate manually. Without sufficient demonstrations, robots tend to overfit to the available demonstrations and lose the robustness offered by deep learning. Applying the concept of motor babbling, a process similar to the one by which human infants move their bodies randomly to acquire proprioception, is also effective for enhancing a robot's generalization ability, and babbling data are simpler to generate than task-oriented demonstrations. Previous studies have used motor babbling for pre-training followed by fine-tuning, but with that scheme the babbling data are overwritten by the task data. In this work, we propose an RNN-based robot-control framework that leverages targetless babbling data to help the robot acquire proprioception and increase the generalization ability gained from the learned task data, by training on babbling and task data simultaneously. Through simultaneous learning, our framework can use the dynamics obtained from the babbling data to learn the target task efficiently. In the experiment, we prepare demonstrations of a block-picking task together with aimless babbling data. With our framework, the robot learns tasks faster and shows greater generalization ability when blocks are at unknown positions or move during execution.
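The central idea, training on babbling and task data simultaneously instead of pre-training on babbling and then fine-tuning on the task, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal sketch assuming a PyTorch LSTM, and the names (make_dataset, RNNPolicy) and placeholder random data are hypothetical, chosen only to show how mixing both data sources in every batch keeps the babbling dynamics from being overwritten by the much smaller set of task demonstrations.

# Minimal sketch (not the paper's code): an LSTM trained on a mixture of
# aimless babbling sequences and task demonstrations in every batch.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_dataset(num_seqs, seq_len, dim):
    # Hypothetical placeholder data: random state sequences; the target at each
    # step is the next state (the last step wraps around, fine for a toy example).
    x = torch.randn(num_seqs, seq_len, dim)
    y = torch.roll(x, shifts=-1, dims=1)
    return TensorDataset(x, y)

babbling_data = make_dataset(num_seqs=200, seq_len=50, dim=8)  # targetless babbling
task_data     = make_dataset(num_seqs=20,  seq_len=50, dim=8)  # block-picking demos

# Both datasets go into one loader, so every batch can contain babbling and task
# sequences, unlike sequential pre-training followed by fine-tuning.
loader = DataLoader(ConcatDataset([babbling_data, task_data]),
                    batch_size=16, shuffle=True)

class RNNPolicy(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

model = RNNPolicy(dim=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # next-state prediction on mixed batches
        loss.backward()
        optimizer.step()

In this sketch the shared recurrent dynamics are shaped by both data sources at every update, which is one plausible way to realize the simultaneous-learning idea described in the abstract.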

Original language: English
Pages (from-to): 1063-1074
Number of pages: 12
Journal: Journal of Robotics and Mechatronics
Volume: 33
Issue number: 5
DOI
Publication status: Published - Oct 2021

ASJC Scopus subject areas

  • Computer Science (General)
  • Electrical and Electronic Engineering
