Gaze recognition for a conversation robot is realized and its effectiveness is confirmed. In human conversation, visual information plays an important role in addition to speech. In particular, gaze direction is a useful cue for turn-taking: when a speaker finishes an utterance while looking at the listener, he expects the listener to speak next, whereas a speaker who looks away from the listener intends to keep his turn. Most conventional spoken dialogue systems detect the end of the user's turn by speech recognition alone; such systems cannot recognize that the user intends to keep his turn, so they wrongly begin speaking and interrupt the user's remaining utterance. In this study, we implement gaze recognition using images of the user captured by a camera mounted in the robot's eye, and apply the recognition results to decide who should speak next. As the feature for gaze recognition, we use a sub-image of the user's eye region extracted with an Active Appearance Model. Classification of this feature with the subspace method achieved a recognition rate of 70%. Finally, the effectiveness of the gaze recognition is confirmed through a subjective experiment based on actual conversations between the conversation robot and human subjects.
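
The following is a minimal sketch of how subspace-method (CLAFIC-style) classification of eye-region features could look. The patch size, subspace dimension, class labels, and all identifiers below are illustrative assumptions, not details taken from the paper; the actual system extracts the eye-region sub-image with an Active Appearance Model rather than the synthetic stand-ins used here.

```python
# Sketch of subspace-method gaze classification over flattened eye-region
# sub-images. Assumed values: 20x40-pixel patches (800-dim vectors), a
# 10-dimensional subspace per class, and two gaze classes.
import numpy as np

class SubspaceClassifier:
    def __init__(self, n_components=10):
        self.n_components = n_components  # assumed subspace dimension
        self.bases = {}                   # class label -> orthonormal basis

    def fit(self, features, labels):
        """features: (n_samples, d) flattened eye-region sub-images."""
        for label in np.unique(labels):
            X = features[labels == label]
            # The leading right singular vectors of the class's sample matrix
            # span its subspace (CLAFIC: no mean subtraction before PCA).
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            self.bases[label] = vt[:self.n_components].T  # shape (d, k)

    def predict(self, feature):
        """Assign the class whose subspace yields the largest projection norm."""
        scores = {label: np.linalg.norm(basis.T @ feature)
                  for label, basis in self.bases.items()}
        return max(scores, key=scores.get)

# Usage with random stand-ins for AAM-extracted eye-region sub-images.
rng = np.random.default_rng(0)
train_x = rng.random((100, 800))
train_y = rng.integers(0, 2, 100)  # 0 = "looking at robot", 1 = "looking away"
clf = SubspaceClassifier(n_components=10)
clf.fit(train_x, train_y)
print(clf.predict(rng.random(800)))
```

The per-class projection norm makes the decision rule cheap at runtime: once the bases are learned offline, classifying a new eye-region patch costs one small matrix-vector product per class, which suits a robot that must decide turn-taking in real time.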