Analysis of effective environmental-camera images using virtual environment for advanced unmanned construction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Unmanned construction machines are used after disasters. Compared with manned construction, their time efficiency is lower because of incomplete visual information, communication delay, and a lack of tactile experience. Visual information is the most fundamental input for planning and judgment; however, in current vision systems, even the posture and zoom of the cameras are not adjusted. To improve the operator's visibility, these parameters must be adjusted in accordance with the work situation. The purpose of this study is thus to analyze effective camera images through comparison experiments, as a fundamental study of advanced visual support. We first developed a virtual-reality simulator to make experimental conditions easier to modify. To effectively derive the required images, we then conducted experiments with two different camera positions and two camera systems (fixed cameras and manually controllable cameras). The results indicate that enlarged views showing the manipulator are needed in object grasping, and tracking images showing the movement direction of the manipulator are needed in large end-point movements. The results also confirm that, compared with the fixed system, the manual system increases operational accuracy and decreases the blind-spot rate.

Original language: English
Title of host publication: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 664-669
Number of pages: 6
ISBN (Print): 9781479957361
DOI: 10.1109/AIM.2014.6878155
Publication status: Published - 2014
Event: 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2014 - Besancon
Duration: 2014 Jul 8 - 2014 Jul 11


Fingerprint

  • Virtual reality
  • Cameras
  • Manipulators
  • Visibility
  • Disasters
  • Simulators
  • Experiments
  • Planning
  • Communication

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications
  • Software

Cite this

Yang, J., Kamezaki, M., Iwata, H., & Sugano, S. (2014). Analysis of effective environmental-camera images using virtual environment for advanced unmanned construction. In IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM (pp. 664-669). [6878155] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/AIM.2014.6878155

