The aim of this study is to provide a machine operator with enhanced visibility and visual information that adapts to the work situation, particularly in advanced unmanned construction. To that end, we propose a method for autonomously controlling multiple environmental cameras. Situations in which the yaw, pitch, and zoom of cameras should be controlled are analyzed. Additionally, we define imaging objects, including the machine, manipulators, and end points, and imaging modes, including tracking, zoom, posture, and trajectory modes. To control each camera simply and effectively, four practical camera roles with different combinations of the imaging objects and modes are defined: overview machine, enlarge end point, posture-manipulator, and trajectory-manipulator. A real-time role assignment system is described that assigns the four camera roles to the four of six cameras best suited to the work situation (e.g., reaching, grasping, transport, and releasing) on the basis of assignment-priority rules. To test this system, debris-removal tasks were performed in a virtual reality simulation to compare performance among fixed-camera, manually controlled camera, and autonomously controlled camera systems. The results showed that the autonomous system was the best of the three at decreasing the number of grasping misses and erroneous contacts while simultaneously increasing subjective usability and time efficiency.
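The role assignment described above can be sketched as a greedy priority-ordered matching: each of the four roles is given, in priority order, to the most suitable still-unassigned camera. This is a minimal illustrative sketch, not the paper's implementation; the role order, the `suitability` score table, and the greedy rule are all assumptions.

```python
# Hypothetical sketch of priority-based camera-role assignment.
# Role names follow the abstract; scores and priority order are invented.

ROLES = ["overview_machine", "enlarge_end_point",
         "posture_manipulator", "trajectory_manipulator"]

def assign_roles(suitability, n_cameras=6):
    """Assign each role (in priority order) to the most suitable free camera.

    suitability: list indexed by camera, mapping role name -> score
    returns: dict mapping role name -> camera index
    """
    assignment = {}
    free = set(range(n_cameras))
    for role in ROLES:  # ROLES is ordered by assignment priority
        best = max(free, key=lambda cam: suitability[cam].get(role, 0.0))
        assignment[role] = best
        free.remove(best)
    return assignment

# Example: six cameras with invented scores for one work situation
# (e.g., grasping); two cameras end up unassigned, as in the paper.
scores = [
    {"overview_machine": 0.9},
    {"enlarge_end_point": 0.8},
    {"posture_manipulator": 0.7},
    {"trajectory_manipulator": 0.6},
    {"overview_machine": 0.5},
    {},
]
print(assign_roles(scores))
# → {'overview_machine': 0, 'enlarge_end_point': 1,
#    'posture_manipulator': 2, 'trajectory_manipulator': 3}
```

In practice the suitability scores would be recomputed each frame from the current work situation, so the assignment can change as the task moves from reaching to grasping to transport to releasing.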
ASJC Scopus subject areas
- Computer Science Applications