TY - GEN
T1 - In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning
AU - Suzuki, Kanata
AU - Kanamura, Momomi
AU - Suga, Yuki
AU - Mori, Hiroki
AU - Ogata, Tetsuya
N1 - Funding Information:
This work was based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This work was also supported by JST, ACT-X Grant Number JPMJAX190I, Japan.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - In this study, we report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning. Owing to its flexibility, the state of the rope was in constant flux during the operation of the robot. This required the robot control system to dynamically respond to the state of the object at all times. However, a manual description of appropriate robot motions corresponding to all object states is difficult to prepare in advance. To resolve this issue, we constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained using data gathered from its sensorimotor experiences, including visual and proximity sensor data. The resultant model was verified to be capable of predicting the appropriate robot motions based on the sensory information available online. In addition, we designed certain task motions based on the Ian knot method using the dual-arm two-finger robot. The designed knotting motions do not require a dedicated workbench or robot hand, thereby enhancing the versatility of the proposed method. Finally, experiments were performed to evaluate the knotting performance and success rate of the real robot while executing overhand knots and bowknots on rope. The experimental results established the effectiveness and high performance of the proposed method.
AB - In this study, we report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning. Owing to its flexibility, the state of the rope was in constant flux during the operation of the robot. This required the robot control system to dynamically respond to the state of the object at all times. However, a manual description of appropriate robot motions corresponding to all object states is difficult to prepare in advance. To resolve this issue, we constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained using data gathered from its sensorimotor experiences, including visual and proximity sensor data. The resultant model was verified to be capable of predicting the appropriate robot motions based on the sensory information available online. In addition, we designed certain task motions based on the Ian knot method using the dual-arm two-finger robot. The designed knotting motions do not require a dedicated workbench or robot hand, thereby enhancing the versatility of the proposed method. Finally, experiments were performed to evaluate the knotting performance and success rate of the real robot while executing overhand knots and bowknots on rope. The experimental results established the effectiveness and high performance of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=85121835026&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85121835026&partnerID=8YFLogxK
U2 - 10.1109/IROS51168.2021.9635954
DO - 10.1109/IROS51168.2021.9635954
M3 - Conference contribution
AN - SCOPUS:85121835026
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 6724
EP - 6731
BT - IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021
Y2 - 27 September 2021 through 1 October 2021
ER -