Wiping 3D-objects using deep learning model based on image/force/joint information

Namiko Saito, Danyang Wang, Tetsuya Ogata, Hiroki Mori, Shigeki Sugano

Research output: Conference contribution

Abstract

We propose a deep learning model that enables a robot to wipe 3D objects. Wiping a 3D object requires recognizing its shape and planning the motor angle adjustments needed to trace its surface. Unlike previous research, our learning model does not require pre-designed computational models of the target objects. The robot wipes the objects placed before it using image, force, and arm joint information. We evaluate the generalization ability of the model by confirming that the robot can handle untrained cube- and bowl-shaped objects. By comparing changes in the sensor data fed to the model, we also find that both image and force information are necessary to recognize object shapes and wipe 3D objects consistently. To our knowledge, this is the first work enabling a robot to trace various unknown 3D shapes using learned sensorimotor information alone.
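The abstract describes a model that fuses image, force, and arm joint inputs and predicts the next motor command. As a rough illustration of that idea (not the paper's actual architecture), the following is a minimal recurrent sensorimotor cell in NumPy; all dimensions, weight shapes, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions, chosen for illustration only.
IMG_FEAT, FORCE_DIM, JOINT_DIM, HIDDEN = 16, 6, 7, 32

# Randomly initialised weights for a single recurrent fusion cell.
W_in = rng.normal(0, 0.1, (HIDDEN, IMG_FEAT + FORCE_DIM + JOINT_DIM))
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (JOINT_DIM, HIDDEN))

def step(img_feat, force, joints, h):
    """Fuse one time step of image/force/joint input and predict
    the joint angles for the next step (sensorimotor prediction)."""
    x = np.concatenate([img_feat, force, joints])
    h_next = np.tanh(W_in @ x + W_h @ h)
    next_joints = W_out @ h_next
    return next_joints, h_next

# Roll the cell over a short dummy trajectory.
h = np.zeros(HIDDEN)
for t in range(5):
    img = rng.normal(size=IMG_FEAT)   # e.g. encoded camera image features
    frc = rng.normal(size=FORCE_DIM)  # wrist force/torque reading
    jnt = rng.normal(size=JOINT_DIM)  # current arm joint angles
    pred, h = step(img, frc, jnt, h)

print(pred.shape)  # (7,) — a joint command for the next step
```

The key point this sketch captures is that shape recognition and motion planning are not separate modules: the recurrent state accumulates multimodal sensor history, and the motor output is read directly from it.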

Original language: English
Title of host publication: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 10152-10157
Number of pages: 6
ISBN (electronic): 9781728162126
DOI
Publication status: Published - Oct 24, 2020
Event: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020 - Las Vegas, United States
Duration: Oct 24, 2020 to Jan 24, 2021

Publication series

Name: IEEE International Conference on Intelligent Robots and Systems
ISSN (print): 2153-0858
ISSN (electronic): 2153-0866

Conference

Conference: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Country/Territory: United States
City: Las Vegas
Period: 20/10/24 to 21/1/24

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

