Unmanned construction machines are used in post-disaster environments. Compared with manned construction, their time efficiency is lower because of incomplete visual information, communication delay, and the lack of tactile feedback. Visual information is the most fundamental input for planning and judgment; however, in current vision systems, even the posture and zoom of the cameras are not adjusted. To improve the operator's visibility, these parameters must be adjusted in accordance with the work situation. The purpose of this study is therefore to analyze effective camera images through comparison experiments, as a fundamental study of advanced visual support. We first developed a virtual reality simulator that allows experimental conditions to be modified easily. To derive the required images effectively, experiments were then conducted with two different camera positions and two camera systems (fixed cameras and manually controllable cameras). The results indicate that enlarged views showing the manipulator are needed for object grasping, and that tracking images showing the movement direction of the manipulator are needed for large end-point movements. The results also confirm that, compared with the fixed system, the manually controllable system increases operational accuracy and decreases the blind-spot rate.