Put-in-Box Task Generated from Multiple Discrete Tasks by a Humanoid Robot Using Deep Learning

Kei Kase, Kanata Suzuki, Pin Chu Yang, Hiroki Mori, Tetsuya Ogata

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

1 Citation (Scopus)

Abstract

For robots to have a wide range of applications, they must be able to execute numerous tasks. However, recent studies into robot manipulation using deep neural networks (DNN) have primarily focused on single tasks. Therefore, we investigate a robot manipulation model that uses DNNs and can execute long sequential dynamic tasks by performing multiple short sequential tasks at appropriate times. To generate compound tasks, we propose a model comprising two DNNs: a convolutional autoencoder that extracts image features and a multiple timescale recurrent neural network (MTRNN) to generate motion. The internal state of the MTRNN is constrained to have similar values at the initial and final motion steps; thus, motions can be differentiated based on the initial image input. As an example compound task, we demonstrate that the robot can generate a 'Put-In-Box' task that is divided into three subtasks: open the box, grasp the object and put it into the box, and close the box. The subtasks were trained as discrete tasks, and the connections between each subtask were not trained. With the proposed model, the robot could perform the Put-In-Box task by switching among subtasks and could skip or repeat subtasks depending on the situation.
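The abstract's motion generator is a multiple timescale recurrent neural network (MTRNN), in which "fast" and "slow" context units integrate their inputs with different time constants. The following is a minimal NumPy sketch of that leaky-integrator update rule only; it is an illustration of the general MTRNN dynamics, not the authors' implementation, and all unit counts, time constants, and weight values are arbitrary assumptions.

```python
import numpy as np

# Hypothetical sizes and time constants (not from the paper).
rng = np.random.default_rng(0)
n_fast, n_slow = 8, 4
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),    # fast units: small tau
                      np.full(n_slow, 30.0)])  # slow units: large tau
W = rng.normal(scale=0.1, size=(n, n))         # recurrent weight matrix

def mtrnn_step(u, x_in):
    """One leaky-integrator update: units with a large tau change slowly,
    so slow context units can carry task-level state across many steps."""
    y = np.tanh(u)                 # unit activations
    drive = W @ y + x_in           # recurrent drive plus external input
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * drive

# Roll the dynamics forward from a zero internal state.
u = np.zeros(n)
for t in range(50):
    u = mtrnn_step(u, x_in=np.zeros(n))
```

Under this update, a given input perturbs the fast units much more than the slow ones in a single step, which is the mechanism that lets one network hold both short-horizon motion detail and longer-horizon task context.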

Original language: English
Title of host publication: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6447-6452
Number of pages: 6
ISBN (Electronic): 9781538630815
DOI: 10.1109/ICRA.2018.8460623
Publication status: Published - 2018 Sep 10
Event: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018 - Brisbane, Australia
Duration: 2018 May 21 - 2018 May 25

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Conference

Conference: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Country: Australia
City: Brisbane
Period: 18/5/21 - 18/5/25

Fingerprint

Robots
Recurrent neural networks
Deep learning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

Kase, K., Suzuki, K., Yang, P. C., Mori, H., & Ogata, T. (2018). Put-in-Box Task Generated from Multiple Discrete Tasks by a Humanoid Robot Using Deep Learning. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018 (pp. 6447-6452). [8460623] (Proceedings - IEEE International Conference on Robotics and Automation). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2018.8460623
