Robot motion control using listener's back-channels and head gesture information

Tsuyoshi Tasaki, Takeshi Yamaguchi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Conference contribution

Abstract

A novel method is described for controlling robot gestures and utterances during a dialogue based on the listener's understanding and interest, which are recognized from back-channels and head gestures. "Back-channels" are defined as sounds like 'uh-huh' uttered by a listener during a dialogue, and "head gestures" are defined as nod and tilt motions of the listener's head. The back-channels are recognized using sound features such as power and fundamental frequency. The head gestures are recognized using the movement of the skin-color area and optical flow data. Based on the estimated understanding and interest of the listener, the speed and size of robot motions are changed. The method was implemented in a humanoid robot called SIG2. Experiments with six participants demonstrated that the proposed method enabled the robot to increase the listener's level of interest in the dialogue.
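The abstract outlines a two-stage scheme: estimate the listener's understanding and interest from recognized back-channels and head gestures, then scale the speed and size of the robot's motions accordingly. The paper itself is not reproduced here, so the following Python sketch is purely illustrative: the function names, score ranges, and scaling factors are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the control idea described
# in the abstract: map recognized listener cues to coarse understanding and
# interest scores, then modulate gesture speed and size. All thresholds and
# scale factors below are hypothetical.

def estimate_listener_state(back_channel_detected: bool, head_gesture: str):
    """Map recognized cues to understanding/interest scores in [0, 1].

    back_channel_detected: True if an 'uh-huh'-like sound was recognized
    head_gesture: 'nod', 'tilt', or 'none'
    """
    understanding = 0.5
    interest = 0.5
    if back_channel_detected:
        interest += 0.3          # a back-channel suggests engagement
    if head_gesture == "nod":
        understanding += 0.4     # nodding suggests comprehension
    elif head_gesture == "tilt":
        understanding -= 0.3     # a head tilt suggests confusion
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return clamp(understanding), clamp(interest)


def adapt_motion(base_speed: float, base_size: float,
                 understanding: float, interest: float):
    """Scale gesture speed and size by the estimated listener state."""
    # Slow gestures down when understanding is low; make them larger
    # when interest is low, to draw the listener back in.
    speed = base_speed * (0.5 + understanding)
    size = base_size * (1.5 - interest)
    return speed, size
```

For example, a detected back-channel together with a nod would raise both scores, and the robot would then gesture faster and slightly smaller than its baseline.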

Original language: English
Title of host publication: 8th International Conference on Spoken Language Processing, ICSLP 2004
Publisher: International Speech Communication Association
Pages: 1033-1036
Number of pages: 4
Publication status: Published - 2004
Externally published: Yes
Event: 8th International Conference on Spoken Language Processing, ICSLP 2004 - Jeju, Jeju Island, Korea, Republic of
Duration: 4 October 2004 to 8 October 2004

Other

Other: 8th International Conference on Spoken Language Processing, ICSLP 2004
Country: Korea, Republic of
City: Jeju, Jeju Island
Period: 04/10/4 - 04/10/8

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

