TY - JOUR
T1 - Autonomous online generation of a motor representation of the workspace for intelligent whole-body reaching
AU - Jamone, Lorenzo
AU - Brandao, Martim
AU - Natale, Lorenzo
AU - Hashimoto, Kenji
AU - Sandini, Giulio
AU - Takanishi, Atsuo
N1 - Funding Information:
Kenji Hashimoto is an Assistant Professor of the Research Institute for Science and Engineering, Waseda University, Japan. He received the B.E. and M.E. degrees in Mechanical Engineering from Waseda University, Japan, in 2004 and 2006, respectively. He received the Ph.D. degree in Integrative Bioscience and Biomedical Engineering from Waseda University, Japan, in 2009. While a Ph.D. candidate, he was funded by the Japan Society for the Promotion of Science as a Research Fellow. He was a Postdoctoral Researcher at the Laboratoire de Physiologie de la Perception et de l'Action, UMR 7152 Collège de France-CNRS, France, from 2012 to 2013. His research interests include walking systems, biped robots, and humanoid robots.
PY - 2014/4
Y1 - 2014/4
AB - We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace; we call this representation the Reachable Space Map. Interestingly, the robot can use this map to: (i) estimate the Reachability of a visually detected object (i.e. judge whether the object can be reached for, and how well, according to some performance metric) and (ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous work in three aspects: the robot workspace is described using a gaze-centered motor representation; the map is built incrementally during the execution of goal-directed actions; learning is autonomous and online. We implement our strategy on the 48-DOF humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole body (i.e. head, eyes, arm, waist, legs).
KW - Bio-inspired robotics
KW - Humanoid robots
KW - Kinematic workspace
KW - Online sensorimotor learning
KW - Whole-body reaching
UR - http://www.scopus.com/inward/record.url?scp=84897640150&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84897640150&partnerID=8YFLogxK
U2 - 10.1016/j.robot.2013.12.011
DO - 10.1016/j.robot.2013.12.011
M3 - Article
AN - SCOPUS:84897640150
SN - 0921-8890
VL - 62
SP - 556
EP - 567
JO - Robotics and Autonomous Systems
JF - Robotics and Autonomous Systems
IS - 4
ER -