Visibility Enhancement using Autonomous Multicamera Controls with Situational Role Assignment for Teleoperated Work Machines

Mitsuhiro Kamezaki, Junjie Yang, Hiroyasu Iwata, Shigeki Sugano

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

The aim of this study is to provide a machine operator with enhanced visibility and more adaptive visual information suited to the work situation, particularly in advanced unmanned construction. Toward that end, we propose a method for autonomously controlling multiple environmental cameras. Situations in which the yaw, pitch, and zoom of the cameras should be controlled are analyzed. Additionally, we define imaging objects, including the machine, manipulators, and end points; and imaging modes, including tracking, zoom, posture, and trajectory modes. To control each camera simply and effectively, we define four practical camera roles with different combinations of the imaging objects and modes: overview machine, enlarge end point, posture-manipulator, and trajectory-manipulator. A real-time role-assignment system is described that assigns the four camera roles to four of the six cameras according to the work situation (e.g., reaching, grasping, transport, and releasing) on the basis of assignment-priority rules. To test this system, debris-removal tasks were performed in a virtual reality simulation to compare performance among fixed-camera, manually controlled camera, and autonomously controlled camera systems. The results showed that the autonomous system was the best of the three at decreasing the number of grasping misses and erroneous contacts while simultaneously increasing subjective usability and time efficiency.
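
The role-assignment step described in the abstract can be pictured as a small rule table plus a greedy matching over the available cameras. The sketch below is only an illustration of that reading, not the authors' implementation: the four role names and the four work situations are taken from the abstract, while the priority orderings, the per-camera suitability scores, and all function and variable names are hypothetical.

```python
# Illustrative sketch of situation-based camera role assignment.
# Roles and situations come from the paper's abstract; the priority
# rules and suitability scores here are invented placeholders.
from dataclasses import dataclass
from typing import Dict, List

ROLES = ["overview_machine", "enlarge_end_point",
         "posture_manipulator", "trajectory_manipulator"]

# Hypothetical assignment-priority rules: which role matters most in
# each work situation (reaching, grasping, transport, releasing).
PRIORITY_RULES: Dict[str, List[str]] = {
    "reaching":  ["trajectory_manipulator", "overview_machine",
                  "posture_manipulator", "enlarge_end_point"],
    "grasping":  ["enlarge_end_point", "posture_manipulator",
                  "overview_machine", "trajectory_manipulator"],
    "transport": ["overview_machine", "trajectory_manipulator",
                  "posture_manipulator", "enlarge_end_point"],
    "releasing": ["enlarge_end_point", "overview_machine",
                  "posture_manipulator", "trajectory_manipulator"],
}

@dataclass
class Camera:
    name: str
    # Hypothetical per-role view-quality score (e.g., derived from the
    # camera's pose relative to the machine and the end point).
    suitability: Dict[str, float]

def assign_roles(situation: str, cameras: List[Camera]) -> Dict[str, str]:
    """Greedily assign the four roles, in priority order, to the four
    most suitable cameras; the remaining two cameras stay unassigned."""
    assignment: Dict[str, str] = {}
    free = list(cameras)
    for role in PRIORITY_RULES[situation]:
        best = max(free, key=lambda cam: cam.suitability.get(role, 0.0))
        assignment[role] = best.name
        free.remove(best)
    return assignment

if __name__ == "__main__":
    # Six environmental cameras with arbitrary demo scores.
    cams = [Camera(f"cam{i}", {r: ((i * 7 + len(r)) % 5) / 4 for r in ROLES})
            for i in range(1, 7)]
    print(assign_roles("grasping", cams))
```

Whatever the concrete rules are, the point emphasized in the abstract is that this mapping from work situation to camera roles is recomputed in real time, so the operator is not left steering six cameras by hand during the task.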

Original language: English
Journal: Journal of Field Robotics
DOI: 10.1002/rob.21580
Publication status: Accepted/In press - 2015

Fingerprint

Visibility
Cameras
Manipulators
Imaging techniques
Trajectories
Manual control
Debris
Virtual reality

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications

Cite this

@article{fba3908a254041e7a67d13b741224d39,
title = "Visibility Enhancement using Autonomous Multicamera Controls with Situational Role Assignment for Teleoperated Work Machines",
abstract = "The aim of this study is to provide a machine operator with enhanced visibility and more adaptive visual information suited to the work situation, particularly advanced unmanned construction. Toward that end, we propose a method for autonomously controlling multiple environmental cameras. Situations in which the yaw, pitch, and zoom of cameras should be controlled are analyzed. Additionally, we define imaging objects, including the machine, manipulators, and end points; and imaging modes, including tracking, zoom, posture, and trajectory modes. To control each camera simply and effectively, four practical camera roles with different combinations of the imaging objects and modes were defined: overview machine, enlarge end point, posture-manipulator, and trajectory-manipulator. A real-time role assignment system is described for assigning the four camera roles to four out of six cameras suitable for the work situation (e.g., reaching, grasping, transport, and releasing) on the basis of the assignment-priority rules. To test this system, debris-removal tasks were performed in a virtual reality simulation to compare performance among fixed camera, manual control camera, and autonomous control camera systems. The results showed that the autonomous system was the best of the three at decreasing the number of grasping misses and erroneous contacts and simultaneously increasing the subjective usability and time efficiency.",
author = "Mitsuhiro Kamezaki and Junjie Yang and Hiroyasu Iwata and Shigeki Sugano",
year = "2015",
doi = "10.1002/rob.21580",
language = "English",
journal = "Journal of Field Robotics",
issn = "1556-4959",
publisher = "John Wiley and Sons Inc.",

}
