Morphology-specific convolutional neural networks for tactile object recognition with a multi-fingered hand

Satoshi Funabashi, Gang Yan, Andreas Geier, Alexander Schmitz, Tetsuya Ogata, Shigeki Sugano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Distributed tactile sensors on multi-fingered hands can provide high-dimensional information for grasping objects, but it is not clear how to optimally process such abundant tactile information. The current paper explores the possibility of using a morphology-specific convolutional neural network (MS-CNN). uSkin tactile sensors are mounted on an Allegro Hand, providing 720 force measurements (15 patches of uSkin modules with 16 triaxial force sensors each) in addition to 16 joint angle measurements. Successive layers of the CNN receive input from progressively larger regions of the hand: part of one finger segment, then one finger, and finally the whole hand. Since each sensor provides a 3D (x, y, z) force vector, the first layer uses 3-channel inputs, analogous to the RGB channels of camera images. The layers are combined such that a tactile map is built up according to the relative positions of the tactile sensors on the hand. Seven combination variants were evaluated, and an object recognition rate of over 95% on 20 objects was achieved, even though the input was only a single, randomly chosen time step from a repeated squeezing motion on an object in an unknown pose within the hand.
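The hierarchical structure described in the abstract lends itself to a compact implementation. Below is a minimal sketch in PyTorch of the idea; the 4x4 taxel layout per patch, the grouping of the 15 patches into five regions of three, and all layer sizes are illustrative assumptions for this sketch, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

N_PATCHES = 15   # uSkin modules mounted on the hand
N_GROUPS = 5     # assumed grouping: five regions of three patches each
N_JOINTS = 16    # Allegro Hand joint angles
N_CLASSES = 20   # objects to recognize


class MorphologySpecificCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1 (finger segment): convolve each 4x4 patch, treating the
        # (x, y, z) force components as 3 input channels, the way RGB
        # channels are treated for camera images.
        self.segment_conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Stage 2 (finger): merge the patch features of one region.
        self.finger_fc = nn.Sequential(
            nn.Linear(3 * 16 * 4 * 4, 64),
            nn.ReLU(),
        )
        # Stage 3 (whole hand): concatenate all region features with the
        # joint angles and classify.
        self.hand_fc = nn.Sequential(
            nn.Linear(N_GROUPS * 64 + N_JOINTS, 128),
            nn.ReLU(),
            nn.Linear(128, N_CLASSES),
        )

    def forward(self, tactile, joints):
        # tactile: (batch, 15, 3, 4, 4) triaxial force maps
        # joints:  (batch, 16) joint angles
        b = tactile.shape[0]
        x = self.segment_conv(tactile.reshape(b * N_PATCHES, 3, 4, 4))
        x = x.reshape(b, N_GROUPS, 3 * 16 * 4 * 4)  # 3 patches per group
        fingers = self.finger_fc(x)                  # (batch, 5, 64)
        hand = torch.cat([fingers.reshape(b, -1), joints], dim=1)
        return self.hand_fc(hand)                    # (batch, 20) logits


# One random time step of a squeezing motion, for a batch of 8 grasps:
model = MorphologySpecificCNN()
logits = model(torch.randn(8, 15, 3, 4, 4), torch.randn(8, 16))
```

The point of the design is that weight sharing follows the physical layout of the skin: taxels are fused within a patch first, then within a finger, then across the hand, rather than feeding all 736 measurements (720 force values plus 16 joint angles) into one flat layer.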

Original language: English
Title of host publication: 2019 International Conference on Robotics and Automation, ICRA 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 57-63
Number of pages: 7
ISBN (Electronic): 9781538660263
DOIs: 10.1109/ICRA.2019.8793901
Publication status: Published - 2019 May 1
Event: 2019 International Conference on Robotics and Automation, ICRA 2019 - Montreal, Canada
Duration: 2019 May 20 - 2019 May 24

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2019-May
ISSN (Print): 1050-4729

Conference

Conference: 2019 International Conference on Robotics and Automation, ICRA 2019
Country: Canada
City: Montreal
Period: 19/5/20 - 19/5/24

Fingerprint

  • Object recognition
  • Neural networks
  • Sensors
  • Force measurement
  • Angle measurement
  • Cameras

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

Funabashi, S., Yan, G., Geier, A., Schmitz, A., Ogata, T., & Sugano, S. (2019). Morphology-specific convolutional neural networks for tactile object recognition with a multi-fingered hand. In 2019 International Conference on Robotics and Automation, ICRA 2019 (pp. 57-63). [8793901] (Proceedings - IEEE International Conference on Robotics and Automation; Vol. 2019-May). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2019.8793901
