Multi-person Conversation via Multi-modal Interface - A Robot who Communicate with Multi-user -

Yosuke Matsusaka*, Tsuyoshi Tojo, Sentaro Kubota, Kenji Furukawa, Daisuke Tamiya, Keisuke Hayata, Yuichiro Nakano, Tetsunori Kobayashi

*Corresponding author of this work

Research output: Paper (peer-reviewed)

73 citations (Scopus)

Abstract

This paper describes a robot that converses with multiple people using its multi-modal interface. Multi-person conversation introduces many new problems that do not arise in conventional one-to-one conversation: information-flow problems (recognizing who is speaking and to whom, and indicating to whom the system is speaking), the spatial-information-sharing problem, and the turn-holder estimation problem (estimating who the next speaker will be). We solved these problems with a multi-modal interface: face-direction recognition, gesture recognition, sound-direction recognition, speech recognition, and gestural expression. The systematic combination of these functions realizes a human-friendly multi-person conversation system.
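The fusion of modality cues described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the participant angles, the Gaussian angular similarity, and the weighted combination of sound-direction and face-direction cues are all assumptions chosen for illustration.

```python
import math

def estimate_speaker(participants, sound_angle, face_angles,
                     w_sound=0.6, w_face=0.4, sigma=15.0):
    """Score each participant by how well the observed sound direction
    and the listeners' face directions point at them; return the
    best-scoring participant as the estimated current speaker."""

    def closeness(a, b):
        # Gaussian similarity between two directions, in degrees.
        d = abs(a - b)
        d = min(d, 360 - d)  # wrap around the circle
        return math.exp(-(d / sigma) ** 2)

    scores = {}
    for name, angle in participants.items():
        s = closeness(sound_angle, angle)
        # Listeners tend to face the speaker, so average their cues.
        f = sum(closeness(fa, angle) for fa in face_angles) / max(len(face_angles), 1)
        scores[name] = w_sound * s + w_face * f
    return max(scores, key=scores.get)

# Three participants seated at known bearings (degrees) from the robot.
participants = {"A": 0.0, "B": 45.0, "C": 90.0}
speaker = estimate_speaker(participants, sound_angle=48.0,
                           face_angles=[44.0, 50.0])
print(speaker)  # → B
```

The same scoring could be reused for turn-holder prediction by feeding in face directions observed just before a pause, since listeners often orient toward the expected next speaker.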

Original language: English
Pages: 1723-1726
Number of pages: 4
Publication status: Published - 1999
Event: 6th European Conference on Speech Communication and Technology, EUROSPEECH 1999 - Budapest, Hungary
Duration: 5 Sep 1999 - 9 Sep 1999


ASJC Scopus subject areas

  • Computer Science Applications
  • Software
  • Language and Linguistics
  • Communication
