Tactile object recognition using deep learning and dropout

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    33 Citations (Scopus)

    Abstract

    Recognizing grasped objects with tactile sensors is beneficial in many situations, as other sensor information, such as vision, is not always reliable. In this paper, we aim for multimodal object recognition by power-grasping objects whose orientation and position relative to the hand are unknown. Few robots have the tactile sensors necessary to reliably recognize objects; in this study the multifingered hand of TWENDY-ONE is used, which has distributed skin sensors covering most of the hand, has 6-axis F/T sensors in each fingertip, and provides information about the joint angles. Moreover, the hand is compliant. When using tactile sensors, it is not clear what kinds of features are useful for object recognition. Recently, deep learning has shown promising results. Nevertheless, deep learning has rarely been used in robotics and, to the best of our knowledge, never for tactile sensing, probably because it is difficult to gather many samples with tactile sensors. Our results show a clear improvement when using a denoising autoencoder with dropout compared to traditional neural networks. Nevertheless, a higher number of layers did not prove to be beneficial.
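
    The core technique named in the abstract, a denoising autoencoder trained with dropout, can be sketched as follows. This is a minimal, self-contained NumPy illustration of the general method, not the paper's implementation: the layer sizes, corruption and dropout rates, learning rate, and toy contact-pattern data are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """Single-layer denoising autoencoder with tied weights and
    dropout on the hidden units (hyperparameters are illustrative)."""

    def __init__(self, n_in, n_hidden, corruption=0.3, dropout=0.5, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # tied encoder/decoder weights
        self.b = np.zeros(n_hidden)                      # hidden bias
        self.c = np.zeros(n_in)                          # visible bias
        self.corruption, self.dropout, self.lr = corruption, dropout, lr

    def reconstruct(self, x):
        """Clean forward pass (no corruption, no dropout) for evaluation."""
        h = sigmoid(x @ self.W + self.b)
        return sigmoid(h @ self.W.T + self.c)

    def train_step(self, x):
        """One gradient step on the denoising reconstruction loss."""
        # Denoising criterion: corrupt the input, reconstruct the CLEAN input.
        x_tilde = x * (rng.random(x.shape) > self.corruption)
        h = sigmoid(x_tilde @ self.W + self.b)
        # Inverted dropout: randomly silence hidden units during training.
        mask = (rng.random(h.shape) > self.dropout) / (1.0 - self.dropout)
        h_drop = h * mask
        x_hat = sigmoid(h_drop @ self.W.T + self.c)
        # Backprop of the squared error; W appears in encoder and decoder.
        d_out = (x_hat - x) * x_hat * (1.0 - x_hat)        # (B, n_in)
        d_hid = (d_out @ self.W) * mask * h * (1.0 - h)    # (B, n_hidden)
        gW = x_tilde.T @ d_hid + d_out.T @ h_drop          # both uses of W
        n = x.shape[0]
        self.W -= self.lr * gW / n
        self.b -= self.lr * d_hid.sum(0) / n
        self.c -= self.lr * d_out.sum(0) / n
        return float(np.mean((x_hat - x) ** 2))

# Hypothetical "tactile" data: noisy copies of a few binary contact patterns.
prototypes = np.array([[1, 1, 0, 0, 0, 0, 1, 1],
                       [0, 0, 1, 1, 1, 1, 0, 0],
                       [1, 0, 1, 0, 1, 0, 1, 0]], dtype=float)
X = np.repeat(prototypes, 20, axis=0)
X = np.clip(X + rng.normal(0.0, 0.05, X.shape), 0.0, 1.0)

dae = DenoisingAutoencoder(n_in=8, n_hidden=16)
loss_before = float(np.mean((dae.reconstruct(X) - X) ** 2))
for _ in range(500):
    dae.train_step(X)
loss_after = float(np.mean((dae.reconstruct(X) - X) ** 2))
print(loss_before, loss_after)  # reconstruction error should drop
```

    The learned hidden activations (rather than raw sensor values) would then serve as features for a classifier; the corruption step forces robustness to missing taxel readings, and dropout regularizes the small dataset, which is the combination the abstract credits for the improvement.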

    Original language: English
    Title of host publication: IEEE-RAS International Conference on Humanoid Robots
    Publisher: IEEE Computer Society
    Pages: 1044-1050
    Number of pages: 7
    Volume: 2015-February
    ISBN (Print): 9781479971749
    DOI: 10.1109/HUMANOIDS.2014.7041493
    Publication status: Published - 2015 Feb 12
    Event: 2014 14th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2014 - Madrid, Spain
    Duration: 2014 Nov 18 - 2014 Nov 20

    Fingerprint

    Object recognition
    Sensors
    End effectors
    Deep learning
    Skin
    Robotics
    Robots
    Neural networks

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Computer Vision and Pattern Recognition
    • Hardware and Architecture
    • Human-Computer Interaction
    • Electrical and Electronic Engineering

    Cite this

    Schmitz, A., Bansho, Y., Noda, K., Iwata, H., Ogata, T., & Sugano, S. (2015). Tactile object recognition using deep learning and dropout. In IEEE-RAS International Conference on Humanoid Robots (Vol. 2015-February, pp. 1044-1050). [7041493] IEEE Computer Society. https://doi.org/10.1109/HUMANOIDS.2014.7041493

