Automatic generation of conversational behavior for multiple embodied virtual characters: The rules and models behind our system

Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Abstract

In this paper we present the rules and algorithms we use to automatically generate non-verbal behavior, such as gestures and gaze, for two embodied virtual agents. They allow us to transform a dialogue in text format into an agent behavior script enriched with eye gaze and conversational gestures. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on an analysis of linguistic and contextual information in the input text. Since all behaviors are generated automatically, our system offers content creators a convenient method to compose multimodal presentations, a task that would otherwise be very cumbersome and time consuming.

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 472-473
Number of pages: 2
Volume: 5208 LNAI
DOIs
Publication status: Published - 2008
Externally published: Yes
Event: 8th International Conference on Intelligent Virtual Agents, IVA 2008 - Tokyo
Duration: 2008 Sep 1 - 2008 Sep 3

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5208 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 8th International Conference on Intelligent Virtual Agents, IVA 2008
City: Tokyo
Period: 08/9/1 - 08/9/3

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science
