Detecting features of tools, objects, and actions from effects in a robot using deep learning

Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a tool-use model that can detect the features of tools, target objects, and actions from the effects of object manipulation. Taking infant learning as a guiding concept, we construct a model that enables a robot to manipulate objects with tools. To realize this, we train a deep learning model on sensory-motor data recorded while the robot performs a tool-use task. The task involves four factors that the model considers simultaneously: (1) tools, (2) objects, (3) actions, and (4) effects. For evaluation, the robot generates predicted images and motions given information about the effects of using unknown tools and objects. We confirm that the robot can detect the features of tools, objects, and actions by learning the effects and executing the task.
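The abstract describes a model that, given a desired effect, predicts the robot's motion. As a minimal sketch of this idea (an assumption for illustration, not the authors' architecture; all dimensions, weights, and names here are hypothetical), a recurrent network conditioned on an effect vector can unroll a sequence of predicted motor commands:

```python
import numpy as np

rng = np.random.default_rng(0)

EFFECT_DIM = 4    # hypothetical encoding of the manipulation effect
MOTOR_DIM = 6     # hypothetical motor command, e.g., arm joint angles
HIDDEN_DIM = 16
SEQ_LEN = 10

# Randomly initialized weights stand in for trained parameters.
W_in = rng.standard_normal((HIDDEN_DIM, EFFECT_DIM + MOTOR_DIM)) * 0.1
W_h = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_out = rng.standard_normal((MOTOR_DIM, HIDDEN_DIM)) * 0.1

def predict_motion(effect, seq_len=SEQ_LEN):
    """Unroll a motor-command sequence conditioned on a target effect."""
    h = np.zeros(HIDDEN_DIM)
    motor = np.zeros(MOTOR_DIM)
    trajectory = []
    for _ in range(seq_len):
        # Each step sees the desired effect and the previous motor command.
        x = np.concatenate([effect, motor])
        h = np.tanh(W_in @ x + W_h @ h)
        motor = np.tanh(W_out @ h)  # next predicted motor command
        trajectory.append(motor)
    return np.stack(trajectory)

effect = rng.standard_normal(EFFECT_DIM)
traj = predict_motion(effect)
print(traj.shape)  # (10, 6): seq_len x motor_dim
```

In the paper's setting, such a predictor would be trained on the recorded sensory-motor sequences so that effect information alone suffices to generate an appropriate action; the sketch above only shows the conditioned-unrolling structure.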

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Sep 23

Keywords

  • Cognitive robotics
  • Development of infants
  • Neural network
  • Tool-use

ASJC Scopus subject areas

  • General

