Automated generation of non-verbal behavior for virtual embodied characters

Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

16 Citations (Scopus)

Abstract

In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched with eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information in the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multi-modal behavior of animated life-like agents, including facial and body animation and synthetic speech. Our system makes it very easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be cumbersome and time-consuming.
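The abstract describes a three-stage pipeline: a plain-text dialogue is annotated with gaze and gesture behavior, then serialized into an MPML3D behavior script. The sketch below illustrates that flow under loose assumptions; the annotation heuristics are toy stand-ins for the paper's linguistic analysis, and the XML element and attribute names are illustrative, not the actual MPML3D schema.

```python
# Hypothetical sketch of the pipeline from the abstract:
# text dialogue -> gaze/gesture annotation -> MPML3D-like XML script.
# Heuristics and tag names are illustrative assumptions only.
import xml.etree.ElementTree as ET

# Toy trigger list standing in for real linguistic analysis.
EMPHASIS_WORDS = {"very", "never", "all", "must"}

def annotate(turns):
    """Attach simple gaze/gesture annotations to (speaker, text) turns."""
    annotated = []
    for i, (speaker, text) in enumerate(turns):
        # Toy stand-in for the gaze model: alternate between looking
        # at the listener and averting gaze across turns.
        gaze = "at_listener" if i % 2 == 0 else "averted"
        # Toy stand-in for gesture selection: add a beat gesture when
        # an emphasis word appears in the utterance.
        gesture = "beat" if EMPHASIS_WORDS & set(text.lower().split()) else None
        annotated.append({"speaker": speaker, "text": text,
                          "gaze": gaze, "gesture": gesture})
    return annotated

def to_mpml3d(annotated):
    """Serialize annotations into an MPML3D-style XML string."""
    root = ET.Element("mpml3d")
    for turn in annotated:
        agent = ET.SubElement(root, "agent", name=turn["speaker"])
        ET.SubElement(agent, "gaze", target=turn["gaze"])
        if turn["gesture"]:
            ET.SubElement(agent, "gesture", type=turn["gesture"])
        ET.SubElement(agent, "speech").text = turn["text"]
    return ET.tostring(root, encoding="unicode")

dialogue = [("Ken", "Welcome to the lab."),
            ("Yuki", "This demo is very easy to follow.")]
script = to_mpml3d(annotate(dialogue))
```

The real system derives its annotations from gaze theory and linguistic/contextual analysis rather than word lists, but the overall transformation from dialogue text to an agent-controllable markup script follows this shape.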

Original language: English
Title of host publication: Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07
Pages: 319-322
Number of pages: 4
DOIs: https://doi.org/10.1145/1322192.1322247
Publication status: Published - 2007
Externally published: Yes
Event: 9th International Conference on Multimodal Interfaces, ICMI 2007 - Nagoya
Duration: 2007 Nov 12 - 2007 Nov 15



Keywords

  • Animation agent systems
  • Multi-modal presentation
  • Multimodal input and output interfaces
  • Processing of language and action patterns

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction

Cite this

Breitfuss, W., Prendinger, H., & Ishizuka, M. (2007). Automated generation of non-verbal behavior for virtual embodied characters. In Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07 (pp. 319-322) https://doi.org/10.1145/1322192.1322247

@inproceedings{cceda64ae7cf4a72b20b59633c72e586,
title = "Automated generation of non-verbal behavior for virtual embodied characters",
abstract = "In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multi-modal behavior of animated life-like agents, including facial and body animation and synthetic speech. Using our system makes it very easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be very cumbersome and time consuming.",
keywords = "Animation agent systems, Multi-modal presentation, Multimodal input and output interfaces, Processing of language and action patterns",
author = "Werner Breitfuss and Helmut Prendinger and Mitsuru Ishizuka",
year = "2007",
doi = "10.1145/1322192.1322247",
language = "English",
isbn = "9781595938176",
pages = "319--322",
booktitle = "Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07",

}
