Robot motion control using listener's back-channels and head gesture information

Tsuyoshi Tasaki, Takeshi Yamaguchi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A novel method is described for controlling a robot's gestures and utterances during a dialogue based on the listener's understanding and interest, which are recognized from back-channels and head gestures. "Back-channels" are defined as sounds like 'uh-huh' uttered by a listener during a dialogue, and "head gestures" are defined as nod and tilt motions of the listener's head. Back-channels are recognized using sound features such as power and fundamental frequency; head gestures are recognized using the movement of the skin-color area and optical flow data. Based on the estimated understanding and interest of the listener, the speed and size of the robot's motions are changed. The method was implemented in a humanoid robot called SIG2. Experiments with six participants demonstrated that the proposed method enabled the robot to increase the listener's level of interest in the dialogue.
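The control loop the abstract describes — recognize the listener's cues, estimate understanding/interest, then rescale the robot's motion speed and size — can be sketched roughly as follows. This is a minimal illustrative sketch: the state labels, cue interpretations, and scale factors are assumptions for exposition, not values from the paper.

```python
def estimate_state(backchannel, head_gesture):
    """Map recognized listener cues to a coarse dialogue state.

    backchannel: an 'uh-huh'-style sound detected from power/F0, or None
    head_gesture: "nod", "tilt", or None

    Illustrative reading: a head tilt suggests low understanding,
    while a nod or a back-channel suggests engagement.
    """
    if head_gesture == "tilt":
        return "confused"
    if head_gesture == "nod" or backchannel is not None:
        return "engaged"
    return "neutral"


def motion_params(state, base_speed=1.0, base_size=1.0):
    """Scale gesture speed and amplitude by the estimated state.

    Scale factors are hypothetical: larger, faster motions for an
    engaged listener; smaller, slower ones when confusion is sensed.
    """
    scale = {"engaged": 1.3, "neutral": 1.0, "confused": 0.7}[state]
    return base_speed * scale, base_size * scale


# Example: a nod plus an 'uh-huh' back-channel yields amplified motion.
speed, size = motion_params(estimate_state("uh-huh", "nod"))
```

In the paper itself the cue recognizers are signal-processing components (power/F0 analysis for back-channels; skin-color tracking and optical flow for head gestures); only the decision logic is sketched here.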

Original language: English
Title of host publication: 8th International Conference on Spoken Language Processing, ICSLP 2004
Publisher: International Speech Communication Association
Pages: 1033-1036
Number of pages: 4
Publication status: Published - 2004
Externally published: Yes
Event: 8th International Conference on Spoken Language Processing, ICSLP 2004 - Jeju, Jeju Island, Korea, Republic of
Duration: 4 Oct 2004 - 8 Oct 2004



ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

Cite this

Tasaki, T., Yamaguchi, T., Komatani, K., Ogata, T., & Okuno, H. G. (2004). Robot motion control using listener's back-channels and head gesture information. In 8th International Conference on Spoken Language Processing, ICSLP 2004 (pp. 1033-1036). International Speech Communication Association.

@inproceedings{d2e9b3a0423f4e278ba29cd760ce2f58,
  title     = "Robot motion control using listener's back-channels and head gesture information",
  author    = "Tsuyoshi Tasaki and Takeshi Yamaguchi and Kazunori Komatani and Tetsuya Ogata and Okuno, {Hiroshi G.}",
  year      = "2004",
  language  = "English",
  pages     = "1033--1036",
  booktitle = "8th International Conference on Spoken Language Processing, ICSLP 2004",
  publisher = "International Speech Communication Association",
}

TY  - GEN
T1  - Robot motion control using listener's back-channels and head gesture information
AU  - Tasaki, Tsuyoshi
AU  - Yamaguchi, Takeshi
AU  - Komatani, Kazunori
AU  - Ogata, Tetsuya
AU  - Okuno, Hiroshi G.
PY  - 2004
Y1  - 2004
UR  - http://www.scopus.com/inward/record.url?scp=85009084208&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85009084208&partnerID=8YFLogxK
M3  - Conference contribution
AN  - SCOPUS:85009084208
SP  - 1033
EP  - 1036
BT  - 8th International Conference on Spoken Language Processing, ICSLP 2004
PB  - International Speech Communication Association
ER  -