Toward a fully automated valve-manipulation system for the disaster response robot WAREC-1, this paper proposes methods for (1) detecting a valve that is far away from the robot, and (2) estimating the position and orientation needed for the robot to grasp the valve after moving closer. Neither method requires any prior information about the valve. In addition, the grasp estimation provides the information WAREC-1 needs to rotate the valve autonomously. Method (1) takes as input an RGB image and point cloud data captured by a MultiSense SL sensor and estimates the position and orientation of a valve far from the robot. Method (2) takes as input RGB and depth images captured by a Kinect V2 and estimates the information needed for grasping the valve. Experiments were conducted with the real disaster response robot, and the results show that the estimation errors of the two methods are small enough to achieve fully automated detection and rotation of a valve by WAREC-1.
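As a rough illustration of the kind of pose estimate method (1) must produce, the sketch below fits a plane to a point cloud of a circular valve rim and reports its center (position) and normal (orientation axis). This is a generic PCA-based fit on synthetic data, not the paper's algorithm; the function name and data are hypothetical.

```python
import numpy as np

def estimate_valve_pose(points):
    """Estimate the center and axis of a roughly circular rim point cloud.

    points: (N, 3) array of 3D points lying near the valve rim.
    Returns (center, normal): the centroid and the unit plane normal,
    found as the smallest-variance principal direction (plain PCA,
    NOT the method proposed in the paper).
    """
    center = points.mean(axis=0)
    # SVD of the centered points: the last right-singular vector is the
    # direction of least variance, i.e. the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(points - center)
    normal = vt[-1]
    # Fix the sign so the normal points toward +z for reproducibility.
    if normal[2] < 0:
        normal = -normal
    return center, normal

# Synthetic rim: a 0.15 m radius circle in the x-y plane, 1.0 m in front.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
rim = np.stack([0.15 * np.cos(theta),
                0.15 * np.sin(theta),
                np.full_like(theta, 1.0)], axis=1)
center, normal = estimate_valve_pose(rim)
```

For the synthetic rim above, the recovered center is (0, 0, 1.0) and the normal is the z-axis, i.e. the axis the robot would rotate the valve about.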