Multi-modal integration for personalized conversation: Towards a humanoid in daily life

Shinya Fujie, Daichi Watanabe, Yuhi Ichikawa, Hikaru Taniyama, Kosuke Hosoya, Yoichi Matsuyama, Tetsunori Kobayashi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

A humanoid robot with spoken-language communication ability is proposed and developed. For a humanoid to live with people, spoken-language communication is fundamental, because it is how we communicate every day. However, the difficulties of speech recognition itself and of its implementation on a robot have so far prevented the development of a robot with this ability. In this study, we propose a robot that implements techniques to overcome these problems. The proposed system has three key features: image processing, sound source separation, and turn-taking timing control. Processing images captured by the cameras mounted in the robot's eyes enables it to find and identify the person it should talk to. Sound source separation enables distant speech recognition, so people need no special device such as a headset microphone. Turn-taking timing control is often lacking in conventional spoken dialogue systems, but it is essential because conversation proceeds in real time. Experiments demonstrate the effectiveness of these elements and show an example conversation.
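The abstract does not detail how the sound source separation component works. As a generic illustration only (not the authors' method), one common way to enable distant speech capture with a microphone array is delay-and-sum beamforming: each channel's arrival delay for the target direction is removed and the aligned channels are averaged, so the target stays coherent while uncorrelated noise partially cancels. The function name `delay_and_sum` and all simulation parameters below are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(channels, sample_delays):
    """Steer a microphone array toward a source: undo each channel's
    arrival delay (in whole samples), then average the aligned channels.
    Averaging keeps the target coherent while uncorrelated noise cancels."""
    n = min(len(c) - d for c, d in zip(channels, sample_delays))
    aligned = [c[d:d + n] for c, d in zip(channels, sample_delays)]
    return np.mean(aligned, axis=0)

# Synthetic demo: one tonal "speech" source reaching a 4-mic array
# with a different delay at each microphone, plus independent noise.
rng = np.random.default_rng(0)
N = 2000
target = np.sin(2 * np.pi * 0.01 * np.arange(N))
delays = [0, 2, 4, 6]
channels = [np.concatenate([np.zeros(d), target])[:N]
            + 0.5 * rng.standard_normal(N)
            for d in delays]

# The beamformed estimate tracks the target more closely than any single mic.
est = delay_and_sum(channels, delays)
```

With four microphones and independent noise, averaging reduces the noise standard deviation by roughly a factor of two, which is the kind of gain that makes distant speech recognition feasible without a headset microphone.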

Original language: English
Title of host publication: 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008
Pages: 617-622
Number of pages: 6
DOIs: 10.1109/ICHR.2008.4756014
Publication status: Published - 2008 Dec 1
Event: 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008 - Daejeon, Korea, Republic of
Duration: 2008 Dec 1 - 2008 Dec 3

Publication series

Name: 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008

Conference

Conference: 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008
Country: Korea, Republic of
City: Daejeon
Period: 08/12/1 - 08/12/3

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction


  • Cite this

    Fujie, S., Watanabe, D., Ichikawa, Y., Taniyama, H., Hosoya, K., Matsuyama, Y., & Kobayashi, T. (2008). Multi-modal integration for personalized conversation: Towards a humanoid in daily life. In 2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008 (pp. 617-622). [4756014] (2008 8th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2008). https://doi.org/10.1109/ICHR.2008.4756014