Abstract
In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multimodal behavior of animated life-like agents, including facial and body animation and synthetic speech. Using our system makes it easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be cumbersome and time-consuming. To test the quality of gaze generation, we conducted an empirical study. The results showed that using our system did not increase the perceived naturalness of the agents' behavior compared to randomly selected gaze behavior, but the quality of the communication between the two agents was perceived as significantly enhanced.
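As a rough illustration of the pipeline the abstract describes (plain dialogue text in, behavior-annotated agent script out), here is a minimal sketch in Python. The element names, the agent names, and the trivial gaze and gesture rules are illustrative assumptions only; they are not the actual MPML3D schema or the authors' theory-informed generation rules.

```python
# Sketch of the described pipeline: a text dialogue between two agents
# is enriched with gaze and gesture annotations, then serialized as a
# markup script. All tag names and rules here are assumptions for
# illustration, not the real MPML3D format.

from xml.sax.saxutils import escape

def annotate_dialogue(turns):
    """turns: list of (speaker, utterance) pairs between two agents."""
    lines = ['<script>']
    for speaker, text in turns:
        # The other agent in a two-party dialogue is the listener.
        listener = next(s for s, _ in turns if s != speaker)
        lines.append(f'  <turn agent="{speaker}">')
        # Simplified stand-in for the theory-informed gaze model:
        # gaze at the listener when beginning to speak.
        lines.append(f'    <gaze target="{listener}"/>')
        # Simplified stand-in for the linguistic analysis: attach a
        # beat gesture to emphatic utterances.
        if text.endswith('!'):
            lines.append('    <gesture type="beat"/>')
        lines.append(f'    <speak>{escape(text)}</speak>')
        lines.append('  </turn>')
    lines.append('</script>')
    return '\n'.join(lines)

if __name__ == '__main__':
    dialogue = [('Ken', 'Have you seen the new exhibit?'),
                ('Yuki', 'Yes, it was amazing!')]
    print(annotate_dialogue(dialogue))
```

In the real system the gaze and gesture decisions are driven by models of human face-to-face gaze behavior and by linguistic and contextual analysis of the input text, rather than by the toy rules above.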
Original language | English |
---|---|
Title of host publication | AISB 2008 Convention: Communication, Interaction and Social Intelligence - Proceedings of the AISB 2008 Symposium on Multimodal Output Generation, MOG 2008 |
Pages | 18-25 |
Number of pages | 8 |
Publication status | Published - 2008 |
Externally published | Yes |
Event | AISB 2008 Symposium on Multimodal Output Generation, MOG 2008 - Aberdeen |
Duration | 2008 Apr 1 → 2008 Apr 4 |
Other
Other | AISB 2008 Symposium on Multimodal Output Generation, MOG 2008 |
---|---|
City | Aberdeen |
Period | 2008 Apr 1 → 2008 Apr 4 |
ASJC Scopus subject areas
- Computer Networks and Communications
- Human-Computer Interaction