Multi-person Conversation via Multi-modal Interface - A Robot who Communicate with Multi-user -

Yosuke Matsusaka*, Tsuyoshi Tojo, Sentaro Kubota, Kenji Furukawa, Daisuke Tamiya, Keisuke Hayata, Yuichiro Nakano, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

72 Citations (Scopus)

Abstract

This paper describes a robot that converses with multiple people using its multi-modal interface. Multi-person conversation raises many new problems that do not arise in conventional one-to-one conversation: information-flow problems (recognizing who is speaking and to whom, and conveying to whom the system is speaking), the spatial-information-sharing problem, and the turn-holder estimation problem (estimating who the next speaker will be). We solved these problems by utilizing a multi-modal interface: face-direction recognition, gesture recognition, sound-direction recognition, speech recognition, and gestural expression. The systematic combination of these functions realized a human-friendly multi-person conversation system.
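The combination of cues described in the abstract can be illustrated with a minimal sketch: sound-direction recognition identifies the current speaker, and face-direction recognition identifies the addressee. All names, angles, and the nearest-angle matching scheme below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of multi-modal cue fusion for multi-person
# conversation: sound direction -> who is speaking; the speaker's
# face direction -> to whom they are speaking.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    position_deg: float   # angular position of the person, robot-centric
    facing_deg: float     # direction this person's face is pointing

def closest(participants, angle_deg):
    """Return the participant whose position best matches an angle."""
    return min(participants, key=lambda p: abs(p.position_deg - angle_deg))

def estimate_speaker(participants, sound_direction_deg):
    # Sound-direction recognition: the speaker is whoever sits
    # nearest to the direction the voice came from.
    return closest(participants, sound_direction_deg)

def estimate_addressee(participants, speaker):
    # Face-direction recognition: the addressee is whoever the
    # speaker is looking toward (excluding the speaker).
    others = [p for p in participants if p is not speaker]
    return closest(others, speaker.facing_deg)

people = [
    Participant("A", position_deg=-45.0, facing_deg=40.0),
    Participant("B", position_deg=45.0, facing_deg=-50.0),
]
speaker = estimate_speaker(people, sound_direction_deg=-40.0)
addressee = estimate_addressee(people, speaker)
```

In this toy scene the voice arrives from roughly A's position, so A is taken as the speaker, and A's face points toward B, so B is taken as the addressee; a real system would combine noisier evidence (speech recognition, gesture) rather than a single nearest-angle match.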

Original language: English
Pages: 1723-1726
Number of pages: 4
Publication status: Published - 1999
Event: 6th European Conference on Speech Communication and Technology, EUROSPEECH 1999 - Budapest, Hungary
Duration: 1999 Sep 5 – 1999 Sep 9

Conference

Conference: 6th European Conference on Speech Communication and Technology, EUROSPEECH 1999
Country/Territory: Hungary
City: Budapest
Period: 99/9/5 – 99/9/9

ASJC Scopus subject areas

  • Computer Science Applications
  • Software
  • Linguistics and Language
  • Communication
