Automatic generation of gaze and gestures for dialogues between embodied conversational agents: System description and study on gaze behavior

Werner Breitfuss*, Helmut Prendinger, Mitsuru Ishizuka

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multi-modal behavior of animated life-like agents, including facial and body animation and synthetic speech. Our system makes it very easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be very cumbersome and time-consuming. In order to test the quality of gaze generation, we conducted an empirical study. The results showed that, using our system, the naturalness of the agents' behavior was not increased compared to randomly selected gaze behavior, but the quality of the communication between the two agents was perceived as significantly enhanced.
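The pipeline described above (plain dialogue script → script annotated with gaze and gesture behavior → agent markup) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the gaze pattern (look at the listener at turn boundaries, avert gaze mid-utterance) is a common simplification of face-to-face gaze findings, and the keyword-based gesture trigger stands in for the paper's linguistic analysis. All names and heuristics are hypothetical.

```python
# Hypothetical sketch of the annotation stage: enrich a two-agent dialogue
# script with gaze targets and gesture cues before conversion to an agent
# markup language such as MPML3D (conversion itself is omitted here).

BEAT_TRIGGERS = {"very", "significantly", "important"}  # illustrative trigger words

def annotate_turn(speaker: str, listener: str, text: str) -> dict:
    """Annotate one dialogue turn with simple gaze and gesture behavior."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    gestures = [w for w in words if w in BEAT_TRIGGERS]
    return {
        "speaker": speaker,
        "text": text,
        # Look at the listener at turn start and end, avert gaze mid-utterance.
        "gaze": [("start", listener), ("middle", "away"), ("end", listener)],
        "gestures": gestures,
    }

def annotate_dialogue(turns):
    """Annotate a list of (speaker, text) turns between two agents."""
    agents = list(dict.fromkeys(s for s, _ in turns))  # agents in order of appearance
    annotated = []
    for speaker, text in turns:
        listener = agents[1] if speaker == agents[0] else agents[0]
        annotated.append(annotate_turn(speaker, listener, text))
    return annotated
```

For example, `annotate_dialogue([("Ken", "This result is very important."), ("Yuki", "I agree.")])` tags Ken's turn with beat gestures on "very" and "important" and directs his gaze toward Yuki at the turn boundaries.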

Original language: English
Title of host publication: AISB 2008 Convention: Communication, Interaction and Social Intelligence - Proceedings of the AISB 2008 Symposium on Multimodal Output Generation, MOG 2008
Pages: 18-25
Number of pages: 8
Publication status: Published - 2008
Externally published: Yes
Event: AISB 2008 Symposium on Multimodal Output Generation, MOG 2008 - Aberdeen
Duration: 2008 Apr 1 - 2008 Apr 4

Other

Other: AISB 2008 Symposium on Multimodal Output Generation, MOG 2008
City: Aberdeen
Period: 08/4/1 - 08/4/4

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Human-Computer Interaction
