Robot motion control using listener's back-channels and head gesture information

Tsuyoshi Tasaki, Takeshi Yamaguchi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Conference contribution

Abstract

A novel method is described for robot gestures and utterances during a dialogue based on the listener's understanding and interest, which are recognized from back-channels and head gestures. "Back-channels" are defined as sounds like 'uh-huh' uttered by a listener during a dialogue, and "head gestures" are defined as nod and tilt motions of the listener's head. The back-channels are recognized using sound features such as power and fundamental frequency. The head gestures are recognized using the movement of the skin-color area and the optical flow data. Based on the estimated understanding and interest of the listener, the speed and size of the robot's motions are changed. This method was implemented in a humanoid robot called SIG2. Experiments with six participants demonstrated that the proposed method enabled the robot to increase the listener's level of interest in the dialogue.
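The pipeline the abstract describes — classify the listener's state from back-channel sound features and head gestures, then scale the robot's motion speed and size — can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the function names, thresholds, and state-to-motion mapping are all hypothetical.

```python
# Illustrative sketch of the abstract's control loop. The thresholds and
# multipliers here are invented for demonstration; the paper derives the
# listener's state from recognized back-channels and head gestures.

def classify_listener_state(power, f0_slope, head_gesture):
    """Estimate the listener's understanding/interest from cues.

    power: loudness of the back-channel sound (arbitrary units)
    f0_slope: slope of the back-channel's fundamental frequency
    head_gesture: "nod", "tilt", or None
    """
    if head_gesture == "nod" or (power > 0.5 and f0_slope > 0):
        return "interested"    # energetic back-channel or a nod
    if head_gesture == "tilt":
        return "confused"      # a head tilt suggests low understanding
    return "neutral"

def motion_parameters(state):
    """Map the estimated state to (speed, size) multipliers for gestures."""
    return {
        "interested": (1.2, 1.2),  # larger, faster motions
        "neutral":    (1.0, 1.0),
        "confused":   (0.8, 0.8),  # slower, smaller motions
    }[state]

state = classify_listener_state(power=0.7, f0_slope=0.3, head_gesture=None)
speed, size = motion_parameters(state)
```

The key design point carried over from the abstract is that the robot does not change *what* it says or does, only *how* — the speed and size of its motions — in response to the listener's estimated state.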

Original language: English
Title of host publication: 8th International Conference on Spoken Language Processing, ICSLP 2004
Publisher: International Speech Communication Association
Pages: 1033-1036
Number of pages: 4
Publication status: Published - 2004
Externally published: Yes
Event: 8th International Conference on Spoken Language Processing, ICSLP 2004 - Jeju, Jeju Island, Korea, Republic of
Duration: Oct 4, 2004 - Oct 8, 2004



ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

Cite this

Tasaki, T., Yamaguchi, T., Komatani, K., Ogata, T., & Okuno, H. G. (2004). Robot motion control using listener's back-channels and head gesture information. In 8th International Conference on Spoken Language Processing, ICSLP 2004 (pp. 1033-1036). International Speech Communication Association.


Scopus record: http://www.scopus.com/inward/record.url?scp=85009084208&partnerID=8YFLogxK