Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning

Pin Chu Yang, Kazuma Sasaki, Kanata Suzuki, Kei Kase, Shigeki Sugano, Tetsuya Ogata

Research output: Contribution to journal › Article

25 Citations (Scopus)

Abstract

We propose a practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production-line worker. The proposed approach provides an intuitive way to collect data and exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability. The approach uses a real-time user interface with a monitor and provides a first-person perspective through a head-mounted display. Through this interface, teleoperation is used to collect task operation data, especially for tasks that are difficult to program with conventional methods. A two-phase deep learning model is also employed: a deep convolutional autoencoder extracts image features and reconstructs images, and a fully connected deep time delay neural network learns the dynamics of the robot's task process from the extracted image features and motion angle signals. The 'Nextage Open' humanoid robot is used as an experimental platform to evaluate the proposed model. The object folding task is evaluated with 35 trained and 5 untrained sensorimotor sequences. Testing the trained model with online generation demonstrates a 77.8% success rate for the object folding task.
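For readers who want a concrete picture of the two-phase model described in the abstract, the following is a minimal sketch, assuming PyTorch: a convolutional autoencoder compresses camera images into a low-dimensional feature vector, and a fully connected time delay network predicts the next image features and joint angles from a short window of past steps, which can then be run in a closed loop for online generation. All layer sizes, the window length, and the joint count are illustrative placeholders, not values from the paper.

# Minimal sketch of a two-phase model (convolutional autoencoder + fully
# connected time delay network), assuming PyTorch. Sizes are illustrative.

import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Phase 1: learn compact image features by reconstruction."""

    def __init__(self, feat_dim: int = 20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, feat_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, img):
        feat = self.encoder(img)
        return self.decoder(feat), feat


class TimeDelayNet(nn.Module):
    """Phase 2: predict the next (image feature, joint angle) vector from a
    fixed window of past steps with a fully connected network."""

    def __init__(self, feat_dim: int = 20, joint_dim: int = 8, window: int = 5):
        super().__init__()
        step_dim = feat_dim + joint_dim
        self.net = nn.Sequential(
            nn.Linear(window * step_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, step_dim),
        )

    def forward(self, history):                  # history: (B, window, step_dim)
        return self.net(history.flatten(1))      # next step: (B, step_dim)


# Closed-loop (online generation) step: encode the current camera frame,
# append it to the history window together with the measured joint angles,
# and read out the predicted joint angles as the next robot command.
if __name__ == "__main__":
    cae, tdn = ConvAutoencoder(), TimeDelayNet()
    history = torch.zeros(1, 5, 20 + 8)          # warm-started from real data in practice
    img = torch.rand(1, 3, 64, 64)               # placeholder camera frame
    joints = torch.zeros(1, 8)                   # placeholder measured joint angles
    with torch.no_grad():
        _, feat = cae(img)
        step = torch.cat([feat, joints], dim=1)
        history = torch.cat([history[:, 1:], step.unsqueeze(1)], dim=1)
        next_step = tdn(history)
        next_joints = next_step[:, 20:]          # candidate command for the robot
    print(next_joints.shape)                     # torch.Size([1, 8])

In practice the autoencoder would be trained first on the recorded images, its encoder frozen, and the time delay network then trained on the resulting feature-plus-angle sequences; the closed-loop step above only illustrates how online generation could consume such a model.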

Original language: English
Article number: 7762066
Pages (from-to): 397-403
Number of pages: 7
Journal: IEEE Robotics and Automation Letters
Volume: 2
Issue number: 2
DOI: 10.1109/LRA.2016.2633383
Publication status: Published - 2017 Apr 1

Keywords

  • Humanoid robots
  • learning and adaptive systems
  • motion control of manipulators
  • neurorobotics

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Human-Computer Interaction
  • Biomedical Engineering
  • Mechanical Engineering
  • Control and Optimization
  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition

Cite this

Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning. / Yang, Pin Chu; Sasaki, Kazuma; Suzuki, Kanata; Kase, Kei; Sugano, Shigeki; Ogata, Tetsuya.

In: IEEE Robotics and Automation Letters, Vol. 2, No. 2, 7762066, 01.04.2017, p. 397-403.

@article{c486614472b84602bc93f7ba964e9eee,
title = "Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning",
abstract = "We propose a practical state-of-the-art method to develop a machine-learning-based humanoid robot that can work as a production line worker. The proposed approach provides an intuitive way to collect data and exhibits the following characteristics: task performing capability, task reiteration ability, generalizability, and easy applicability. The proposed approach utilizes a real-time user interface with a monitor and provides a first-person perspective using a head-mounted display. Through this interface, teleoperation is used for collecting task operating data, especially for tasks that are difficult to be applied with a conventional method. A two-phase deep learning model is also utilized in the proposed approach. A deep convolutional autoencoder extracts images features and reconstructs images, and a fully connected deep time delay neural network learns the dynamics of a robot task process from the extracted image features and motion angle signals. The 'Nextage Open' humanoid robot is used as an experimental platform to evaluate the proposed model. The object folding task utilizing with 35 trained and 5 untrained sensory motor sequences for test. Testing the trained model with online generation demonstrates a 77.8{\%} success rate for the object folding task.",
keywords = "Humanoid robots, learning and adaptive systems, motion control of manipulators, neurorobotics",
author = "Yang, {Pin Chu} and Kazuma Sasaki and Kanata Suzuki and Kei Kase and Shigeki Sugano and Tetsuya Ogata",
year = "2017",
month = "4",
day = "1",
doi = "10.1109/LRA.2016.2633383",
language = "English",
volume = "2",
pages = "397--403",
journal = "IEEE Robotics and Automation Letters",
issn = "2377-3766",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",

}
