Autonomous online generation of a motor representation of the workspace for intelligent whole-body reaching

Lorenzo Jamone, Martim Brandao, Lorenzo Natale, Kenji Hashimoto, Giulio Sandini, Atsuo Takanishi

    Research output: Contribution to journal › Article

    15 Citations (Scopus)

    Abstract

    We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace: we call this representation the Reachable Space Map. Interestingly, the robot can use this map to: (i) estimate the reachability of a visually detected object (i.e. judge whether and how well the object can be reached, according to some performance metric) and (ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous works in three aspects: the robot workspace is described using a gaze-centered motor representation, the map is built incrementally during the execution of goal-directed actions, and learning is autonomous and online. We implement our strategy on the 48-DOF humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole body (i.e. head, eyes, arm, waist, legs).
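
    The abstract describes the Reachable Space Map only at a conceptual level. As a rough illustrative sketch (not the authors' implementation), one could picture a gaze-centered grid indexed by the fixation configuration (here assumed to be azimuth, elevation, and vergence angles) in which each cell holds a reachability score that is updated online after every goal-directed reaching attempt. The class name, grid resolution, angle ranges, running-average update, and score metric below are all assumptions made for illustration.

```python
import numpy as np


class ReachableSpaceMap:
    """Illustrative gaze-centered reachability map (a sketch, not the paper's code).

    The workspace is discretized over gaze coordinates (azimuth, elevation,
    vergence); each cell stores a reachability score in [0, 1] that is updated
    incrementally after every goal-directed reaching attempt.
    """

    def __init__(self, az_bins=36, el_bins=18, vg_bins=10, learning_rate=0.2):
        self.scores = np.zeros((az_bins, el_bins, vg_bins))  # reachability estimates
        self.visits = np.zeros_like(self.scores, dtype=int)  # attempts per cell
        self.lr = learning_rate
        self.az_bins, self.el_bins, self.vg_bins = az_bins, el_bins, vg_bins

    def _cell(self, azimuth_deg, elevation_deg, vergence_deg):
        """Map continuous gaze angles (degrees) to a discrete grid cell."""
        az = int((azimuth_deg + 180.0) / 360.0 * self.az_bins) % self.az_bins
        el = min(self.el_bins - 1, max(0, int((elevation_deg + 90.0) / 180.0 * self.el_bins)))
        vg = min(self.vg_bins - 1, max(0, int(vergence_deg / 45.0 * self.vg_bins)))
        return az, el, vg

    def update(self, gaze, reach_score):
        """Online update after one reaching attempt.

        `gaze` is (azimuth, elevation, vergence) in degrees while fixating the
        target; `reach_score` is an assumed performance metric in [0, 1], e.g.
        derived from the residual hand-target error.
        """
        c = self._cell(*gaze)
        self.visits[c] += 1
        self.scores[c] += self.lr * (reach_score - self.scores[c])  # running average

    def reachability(self, gaze):
        """Estimated reachability of a fixated target (0 = unreachable)."""
        return float(self.scores[self._cell(*gaze)])


# Usage sketch: store the outcome of each reach, then query the map before a
# new reach to decide whether to reach directly or first adjust the posture.
rsm = ReachableSpaceMap()
for outcome in (0.9, 0.8, 1.0):  # three attempts at the same fixation point
    rsm.update(gaze=(30.0, -10.0, 12.0), reach_score=outcome)
if rsm.reachability((30.0, -10.0, 12.0)) < 0.5:
    print("target likely hard to reach: bend waist / step closer first")
```

    Whether the actual map stores a scalar score per gaze cell or a richer structure is not specified in the abstract; the grid-plus-running-average choice above is only one plausible way to realize an incremental, gaze-centered representation.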

    Original language: English
    Pages (from-to): 556-567
    Number of pages: 12
    Journal: Robotics and Autonomous Systems
    Volume: 62
    Issue number: 4
    DOIs
    Publication status: Published - Apr 2014

    Keywords

    • Bio-inspired robotics
    • Humanoid robots
    • Kinematic workspace
    • Online sensorimotor learning
    • Whole-body reaching

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Computer Science Applications
    • Software
    • Mathematics (all)