Dynamic perception after visually guided grasping by a human-like autonomous robot

Mototaka Suzuki, Kuniaki Noda, Yuki Suga, Tetsuya Ogata, Shigeki Sugano

    Research output: Contribution to journal › Article

    Abstract

    We explore dynamic perception following the visually guided grasping of several objects by a human-like autonomous robot, a competency that serves object categorization. Physical interaction with the hand-held object provides the robot's neural network with rich, coherent and multi-modal sensory input. We design multi-layered self-organizing maps and examine them under static and dynamic conditions. Tests under the static condition show that the network categorizes robustly in the presence of noise and outperforms a single-layered map. Under the dynamic condition we focus on shaking behavior produced by moving only the robot's forearm. For some combinations of grasping style and shaking radius the network categorizes two objects robustly; its capability to achieve the task thus depends largely on how the objects are grasped and how they are moved. Together with a preliminary simulation, these results are a promising step toward the self-organization of highly autonomous dynamic object categorization.
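
    To make the architecture concrete, the sketch below shows one way a two-layer self-organizing map (SOM) can fuse multi-modal sensor streams for categorization. It is a minimal illustration, not the paper's implementation: the grid sizes, the choice of tactile and joint-torque modalities, the input dimensions and the fusion of best-matching-unit (BMU) coordinates are all assumptions made for this example.

    import numpy as np

    class SOM:
        """A basic Kohonen self-organizing map on a 2-D grid."""

        def __init__(self, rows, cols, dim, seed=0):
            rng = np.random.default_rng(seed)
            self.weights = rng.normal(size=(rows * cols, dim))
            # Grid coordinates of every unit, used by the neighborhood function.
            self.grid = np.array([(r, c) for r in range(rows)
                                  for c in range(cols)], dtype=float)

        def bmu(self, x):
            """Grid coordinates of the best-matching unit for input x."""
            return self.grid[np.argmin(np.linalg.norm(self.weights - x, axis=1))]

        def train_step(self, x, lr, sigma):
            """Online update: pull units toward x, weighted by distance to the BMU."""
            d2 = np.sum((self.grid - self.bmu(x)) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian neighborhood
            self.weights += lr * h[:, None] * (x - self.weights)

    # Hypothetical modalities and sizes, chosen only for illustration.
    tactile_som = SOM(8, 8, dim=16)   # e.g. 16 tactile sensor channels
    torque_som = SOM(8, 8, dim=6)     # e.g. 6 joint-torque channels
    top_som = SOM(6, 6, dim=4)        # fuses the two 2-D BMU coordinates

    def train_multilayer(samples, epochs=50):
        """samples: list of (tactile_vec, torque_vec) pairs from shaking trials."""
        for t in range(epochs):
            lr = 0.5 * (1.0 - t / epochs)               # decaying learning rate
            sigma = 0.5 + 3.0 * (1.0 - t / epochs)      # shrinking neighborhood
            for tac, tor in samples:
                tactile_som.train_step(tac, lr, sigma)
                torque_som.train_step(tor, lr, sigma)
                # The second layer sees the concatenated first-layer BMU
                # coordinates: a compressed, modality-fused code of the input.
                fused = np.concatenate([tactile_som.bmu(tac), torque_som.bmu(tor)])
                top_som.train_step(fused, lr, sigma)

    After training, clusters of top-layer BMUs for repeated trials with the same object would indicate a stable category; how well such clusters separate is the kind of grasp- and motion-dependent effect the abstract reports.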

    Original language: English
    Pages (from-to): 233-254
    Number of pages: 22
    Journal: Advanced Robotics
    Volume: 20
    Issue number: 2
    DOI: 10.1163/156855306775525785
    Publication status: Published - 2006

    Keywords

    • DYNAMIC PERCEPTION
    • GRASPING
    • MANIPULATION
    • OBJECT CATEGORIZATION
    • SELF-ORGANIZING MAPS

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Software
    • Human-Computer Interaction
    • Hardware and Architecture
    • Computer Science Applications

    Cite this

    Suzuki, Mototaka; Noda, Kuniaki; Suga, Yuki; Ogata, Tetsuya; Sugano, Shigeki. Dynamic perception after visually guided grasping by a human-like autonomous robot. In: Advanced Robotics, Vol. 20, No. 2, 2006, p. 233-254. https://doi.org/10.1163/156855306775525785

    @article{1ac2872b388f4011b1f90baf2f5c9f0a,
    title = "Dynamic perception after visually guided grasping by a human-like autonomous robot",
    abstract = "We explore dynamic perception following the visually guided grasping of several objects by a human-like autonomous robot, a competency that serves object categorization. Physical interaction with the hand-held object provides the robot's neural network with rich, coherent and multi-modal sensory input. We design multi-layered self-organizing maps and examine them under static and dynamic conditions. Tests under the static condition show that the network categorizes robustly in the presence of noise and outperforms a single-layered map. Under the dynamic condition we focus on shaking behavior produced by moving only the robot's forearm. For some combinations of grasping style and shaking radius the network categorizes two objects robustly; its capability to achieve the task thus depends largely on how the objects are grasped and how they are moved. Together with a preliminary simulation, these results are a promising step toward the self-organization of highly autonomous dynamic object categorization.",
    keywords = "DYNAMIC PERCEPTION, GRASPING, MANIPULATION, OBJECT CATEGORIZATION, SELF-ORGANIZING MAPS",
    author = "Mototaka Suzuki and Kuniaki Noda and Yuki Suga and Tetsuya Ogata and Shigeki Sugano",
    year = "2006",
    doi = "10.1163/156855306775525785",
    language = "English",
    volume = "20",
    pages = "233--254",
    journal = "Advanced Robotics",
    issn = "0169-1864",
    publisher = "Taylor and Francis Ltd.",
    number = "2",

    }

    TY - JOUR

    T1 - Dynamic perception after visually guided grasping by a human-like autonomous robot

    AU - Suzuki, Mototaka

    AU - Noda, Kuniaki

    AU - Suga, Yuki

    AU - Ogata, Tetsuya

    AU - Sugano, Shigeki

    PY - 2006

    Y1 - 2006

    AB - We explore dynamic perception following the visually guided grasping of several objects by a human-like autonomous robot, a competency that serves object categorization. Physical interaction with the hand-held object provides the robot's neural network with rich, coherent and multi-modal sensory input. We design multi-layered self-organizing maps and examine them under static and dynamic conditions. Tests under the static condition show that the network categorizes robustly in the presence of noise and outperforms a single-layered map. Under the dynamic condition we focus on shaking behavior produced by moving only the robot's forearm. For some combinations of grasping style and shaking radius the network categorizes two objects robustly; its capability to achieve the task thus depends largely on how the objects are grasped and how they are moved. Together with a preliminary simulation, these results are a promising step toward the self-organization of highly autonomous dynamic object categorization.

    KW - DYNAMIC PERCEPTION

    KW - GRASPING

    KW - MANIPULATION

    KW - OBJECT CATEGORIZATION

    KW - SELF-ORGANIZING MAPS

    UR - http://www.scopus.com/inward/record.url?scp=85024214428&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=85024214428&partnerID=8YFLogxK

    U2 - 10.1163/156855306775525785

    DO - 10.1163/156855306775525785

    M3 - Article

    VL - 20

    SP - 233

    EP - 254

    JO - Advanced Robotics

    JF - Advanced Robotics

    SN - 0169-1864

    IS - 2

    ER -