Conversation robot with the function of gaze recognition

Shinya Fujie, Toshihiko Yamahata, Tetsunori Kobayashi

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    5 Citations (Scopus)

    Abstract

    Gaze recognition for a conversation robot is realized and its effectiveness is confirmed. In human conversation, visual information plays an important role in addition to speech. In particular, gaze direction is a very useful cue for turn-taking. For example, if the speaker looks at the listener when finishing an utterance, he expects the listener to speak; if the speaker does not look at the listener, he is trying to keep his turn. Most conventional spoken dialogue systems detect the end of the user's turn by speech recognition alone. Such systems cannot tell that the user is trying to keep the turn, so they wrongly begin speaking and cut off the user's remaining utterance. In this study, we implement gaze recognition using images of the user captured by a camera mounted in the robot's eye, and apply the recognition results to decide who should speak next. As the feature for gaze recognition, we use the sub-image of the user's eye region extracted with an Active Appearance Model. Recognition with the subspace method using this feature achieved a 70% recognition rate. Finally, the effectiveness of the gaze recognition is confirmed through a subjective experiment consisting of actual conversations between the conversation robot and the subjects.
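
    The classification step described in the abstract can be sketched as follows: eye-region sub-images (in the paper, extracted with an Active Appearance Model) are flattened into feature vectors, a PCA subspace is learned per gaze class, and a test image is assigned to the class whose subspace captures most of its energy (the classic CLAFIC-style subspace method). This is a minimal illustrative sketch, not the authors' implementation; all names, dimensions, and the synthetic data are assumptions.

```python
import numpy as np

def fit_subspaces(features_by_class, n_components=8):
    """Learn an orthonormal basis for each gaze class.

    features_by_class: dict mapping class label -> (n_samples, dim) array
    of flattened eye-region sub-images.
    """
    bases = {}
    for label, X in features_by_class.items():
        # SVD of the (uncentered) sample matrix gives the principal
        # directions used by the CLAFIC subspace method.
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        bases[label] = vt[:n_components]          # (n_components, dim)
    return bases

def classify(x, bases):
    """Assign x to the class whose subspace best reconstructs it."""
    scores = {label: float(np.sum((B @ x) ** 2)) for label, B in bases.items()}
    return max(scores, key=scores.get)

# Toy usage with synthetic 16x8 "eye region" images for two gaze classes:
# "toward" (looking at the robot) lights up the first half of the pixels,
# "away" the second half.
rng = np.random.default_rng(0)
dim = 16 * 8
toward = rng.normal(0.0, 1.0, (40, dim))
toward[:, :dim // 2] += 3.0
away = rng.normal(0.0, 1.0, (40, dim))
away[:, dim // 2:] += 3.0
bases = fit_subspaces({"toward": toward, "away": away})

x = np.zeros(dim)
x[:dim // 2] = 3.0                                # a clean "toward" pattern
print(classify(x, bases))                         # prints "toward"
```

    In a turn-taking application, the predicted label would then feed the dialogue manager's decision of who speaks next, as the paper does with its recognition results.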

    Original language: English
    Title of host publication: Proceedings of the 2006 6th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS
    Pages: 364-369
    Number of pages: 6
    DOI: 10.1109/ICHR.2006.321298
    ISBN: 142440200X, 9781424402007
    Publication status: Published - 2006
    Event: 2006 6th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS - Genoa
    Duration: 2006 Dec 4 - 2006 Dec 6


    ASJC Scopus subject areas

    • Human-Computer Interaction
    • Electrical and Electronic Engineering

    Cite this

    Fujie, S., Yamahata, T., & Kobayashi, T. (2006). Conversation robot with the function of gaze recognition. In Proceedings of the 2006 6th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS (pp. 364-369). [4115628] https://doi.org/10.1109/ICHR.2006.321298
