A method is proposed for autonomously controlling multiple environmental cameras, which are currently fixed, to provide adaptive visual information suited to the work situation in advanced unmanned construction. Situations in which the yaw, pitch, and zoom of the cameras should be controlled were analyzed; imaging objects (the machine, manipulator, and end-point) and imaging modes (tracking, zoom, posture, and trajectory) were defined. To control each camera simply and effectively, four practical camera roles combining the imaging objects and modes were defined: overview-machine, enlarge-end-point, posture-manipulator, and trajectory-manipulator. A role assignment system was then developed to assign the four camera roles, in real time, to four of the six cameras according to the work situation (e.g., reaching, grasping, transport, and releasing) on the basis of assignment priority rules. Debris removal tasks were performed in a VR simulator to compare the fixed-camera, manually controlled, and autonomous systems. The results showed that the autonomous system was the best of the three at decreasing the number of grasping misses and error contacts and increasing subjective usability while improving time efficiency.
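The priority-based assignment of the four camera roles to four of the six cameras could be sketched as follows. This is an illustrative assumption, not the paper's actual rules: the greedy strategy, the suitability scores, and the camera names are all hypothetical, standing in for the situation-dependent assignment priority rules described above.

```python
# Illustrative sketch (not the authors' implementation): greedily assign the
# four camera roles from the abstract to four of six cameras by priority.
# The suitability scores below are hypothetical placeholders for the
# situation-dependent priority rules (e.g., for the "grasping" situation).

CAMERA_ROLES = ["overview-machine", "enlarge-end-point",
                "posture-manipulator", "trajectory-manipulator"]

# Hypothetical suitability (0-5) of each camera for each role; higher is better.
SUITABILITY = {
    "cam1": {"overview-machine": 5, "enlarge-end-point": 1,
             "posture-manipulator": 2, "trajectory-manipulator": 3},
    "cam2": {"overview-machine": 2, "enlarge-end-point": 5,
             "posture-manipulator": 1, "trajectory-manipulator": 2},
    "cam3": {"overview-machine": 3, "enlarge-end-point": 2,
             "posture-manipulator": 5, "trajectory-manipulator": 1},
    "cam4": {"overview-machine": 1, "enlarge-end-point": 3,
             "posture-manipulator": 2, "trajectory-manipulator": 5},
    "cam5": {"overview-machine": 4, "enlarge-end-point": 2,
             "posture-manipulator": 3, "trajectory-manipulator": 2},
    "cam6": {"overview-machine": 2, "enlarge-end-point": 4,
             "posture-manipulator": 1, "trajectory-manipulator": 3},
}

def assign_roles(suitability):
    """Assign each role to the unassigned camera with the highest
    suitability score, visiting roles in priority order."""
    assignment = {}
    used = set()
    for role in CAMERA_ROLES:  # list order acts as the role priority
        best = max((cam for cam in suitability if cam not in used),
                   key=lambda cam: suitability[cam][role])
        assignment[role] = best
        used.add(best)
    return assignment

print(assign_roles(SUITABILITY))
```

In a real-time system, this assignment would be re-evaluated whenever the work situation changes (reaching, grasping, transport, releasing), with the suitability scores recomputed from the current geometry between each camera and the machine.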