Abstract

Detecting the onset or continuation of slip, i.e., whether a grasped object is slipping or will slip from the gripper while being lifted, is crucial for robotic manipulation. Conventionally, this is regarded as a tactile sensing problem. Recently, however, multi-modal robot learning has become popular and is expected to boost performance. In this paper, we propose a novel CNN-TCN model that fuses tactile and visual information to detect the onset or continuation of slip. In our experiments, two uSkin tactile sensors and one RealSense D435i camera are used. Data are collected by randomly grasping and lifting 35 daily objects, 1050 times in total. Furthermore, we compare our CNN-TCN model with the widely used CNN-LSTM model. Our proposed model achieves an 88.75% detection accuracy and outperforms the CNN-LSTM model combined with different pretrained vision networks.
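The abstract describes a CNN-TCN architecture: per-frame CNN features from the tactile and visual streams are fused and fed to a temporal convolutional network (TCN), whose core operation is a causal dilated 1D convolution. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of that core operation, assuming hypothetical pre-extracted per-frame feature vectors for each modality.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1D convolution, the building block of a TCN.

    x: (T, C) sequence of fused per-frame features
    w: (K, C) kernel; the output at time t depends only on
       x[t], x[t - dilation], x[t - 2*dilation], ... (no future frames)
    Returns a (T,) array of scalar responses per time step.
    """
    T, _ = x.shape
    K = w.shape[0]
    y = np.zeros(T)
    for t in range(T):
        for k in range(K):
            idx = t - k * dilation
            if idx >= 0:  # skip taps that would reach before the sequence start
                y[t] += np.dot(w[k], x[idx])
    return y

# Hypothetical fused features (names and sizes are illustrative, not from the paper):
rng = np.random.default_rng(0)
tactile_feat = rng.normal(size=(20, 8))   # e.g. CNN embedding of uSkin readings
visual_feat = rng.normal(size=(20, 8))    # e.g. CNN embedding of camera frames
fused = np.concatenate([tactile_feat, visual_feat], axis=1)  # (T=20, C=16)

kernel = rng.normal(size=(3, 16))
response = causal_dilated_conv1d(fused, kernel, dilation=2)
```

Causality is what makes such a model usable for online slip detection: the response at frame t never depends on frames after t, unlike a bidirectional recurrent model.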

Original language: English
Title of host publication: 2022 IEEE International Conference on Robotics and Automation, ICRA 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3537-3543
Number of pages: 7
ISBN (Electronic): 9781728196817
DOIs
Publication status: Published - 2022
Event: 39th IEEE International Conference on Robotics and Automation, ICRA 2022 - Philadelphia, United States
Duration: 23 May 2022 - 27 May 2022

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Conference

Conference: 39th IEEE International Conference on Robotics and Automation, ICRA 2022
Country/Territory: United States
City: Philadelphia
Period: 22/5/23 - 22/5/27

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Title: Detection of Slip from Vision and Touch