Automatic generation of non-verbal behavior for agents in virtual worlds

A system for supporting multimodal conversations of bots and avatars

Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

This paper presents a system that automatically adds gestures to an embodied virtual character by processing information from simple text input. Gestures are generated based on an analysis of the linguistic and contextual information in the input text. The system is embedded in the virtual world Second Life and consists of an in-world object and an off-world server component that handles the analysis. Either a user-controlled avatar or a non-user-controlled character can display the gestures, which are timed to match speech output from a Text-to-Speech system, so the character shows non-verbal behavior without requiring the user to select it manually.
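As a rough illustration of the pipeline the abstract describes (text in, linguistically motivated gesture tags out, timed against speech), here is a minimal sketch. The rule table, gesture names, and constant speech rate below are invented for illustration only; the actual system analyzes richer linguistic and contextual features and takes its timings from the Text-to-Speech component.

```python
# Illustrative sketch of a text-to-gesture annotation step: map trigger
# words to gesture tags and time each cue against a crude constant speech
# rate (a real system would use word timestamps from the TTS engine).
# All rules, names, and timings here are assumptions, not the authors'
# actual implementation.

from dataclasses import dataclass

# Toy linguistic/contextual rules: trigger word -> gesture type.
GESTURE_RULES = {
    "you": "deictic_point",
    "big": "iconic_size",
    "hello": "wave",
}

@dataclass
class GestureCue:
    word: str
    gesture: str
    start: float  # seconds into the utterance

def annotate(text: str, words_per_second: float = 2.5) -> list[GestureCue]:
    """Attach gesture cues to trigger words, timed by a constant rate."""
    cues = []
    for i, raw in enumerate(text.split()):
        word = raw.strip(".,!?").lower()
        if word in GESTURE_RULES:
            cues.append(GestureCue(word, GESTURE_RULES[word], i / words_per_second))
    return cues

for cue in annotate("Hello, I built you a big house."):
    print(f"{cue.start:5.2f}s  {cue.gesture:14s}  ({cue.word})")
```

In the deployed system this analysis runs on the off-world server, which returns the annotated, timed script for the in-world object to play back on the avatar or bot.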

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 153-161
Number of pages: 9
Volume: 5621 LNCS
ISBN: 3642027733, 9783642027734
DOIs: 10.1007/978-3-642-02774-1_17
Publication status: Published - 2009
Externally published: Yes
Event: 3rd International Conference on Online Communities and Social Computing, OCSC 2009 (held as part of HCI International 2009) - San Diego, CA
Duration: 2009 Jul 19 - 2009 Jul 24

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5621 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Keywords

  • Animated Agent Systems
  • Embodied Virtual Characters
  • Multimodal Output Generation
  • Multimodal Presentations
  • Virtual Worlds

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Breitfuss, W., Prendinger, H., & Ishizuka, M. (2009). Automatic generation of non-verbal behavior for agents in virtual worlds: A system for supporting multimodal conversations of bots and avatars. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5621 LNCS, pp. 153-161). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 5621 LNCS). https://doi.org/10.1007/978-3-642-02774-1_17
