Describing and generating multimodal contents featuring affective lifelike agents with MPML

Mitsuru Ishizuka, Helmut Prendinger

Research output: Contribution to journal › Article

20 Citations (Scopus)

Abstract

In this paper, we provide an overview of our research on multimodal media and contents using embodied lifelike agents. In particular, we describe our research centered on MPML (Multimodal Presentation Markup Language). MPML allows people to write and produce multimodal contents easily, and serves as a core for integrating the various components and functionalities important for multimodal media. To demonstrate the benefits and usability of MPML in a variety of environments, including the animated Web, 3D VRML space, mobile phones, and the physical world with a humanoid robot, several versions of MPML have been developed while keeping its basic format. Since the emotional behavior of an agent is an important factor in making agents lifelike and in being accepted by people as an attractive and friendly style of human-computer interaction, emotion-related functions have been emphasized in MPML. To alleviate the workload of authoring contents, it is also necessary to endow the agents with a certain level of autonomy. We show some of our approaches toward this end.
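The abstract describes MPML as a markup language in which authors script agent presentations, with emotion-related tags emphasized. As a rough illustration of what such a script might look like, here is a hypothetical sketch; the element and attribute names below are illustrative guesses only, not the actual MPML tag set defined in the paper:

```xml
<!-- Hypothetical sketch of an MPML-style presentation script.
     Tag and attribute names are illustrative, not the published schema. -->
<mpml>
  <head>
    <!-- Declare a character agent to appear in the presentation -->
    <agent id="presenter" character="genie"/>
  </head>
  <body>
    <scene background="slide1.html">
      <!-- An emotion tag modulating the agent's speech and gestures -->
      <emotion type="happy" intensity="0.8">
        <speak agent="presenter">Welcome to our presentation!</speak>
      </emotion>
      <move agent="presenter" to="topRight"/>
      <speak agent="presenter">Let me explain how MPML works.</speak>
    </scene>
  </body>
</mpml>
```

The appeal of such a format, as the abstract notes, is that the same declarative script can be retargeted to different back ends (Web agents, VRML space, phones, or a robot) while the basic format stays fixed.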

Original language: English
Pages (from-to): 97-128
Number of pages: 32
Journal: New Generation Computing
Volume: 24
Issue number: 2
DOIs: 10.1007/BF03037295
Publication status: Published - 2006
Externally published: Yes

Keywords

  • Affective computing
  • Content description language
  • Emotion
  • Lifelike agent
  • Multimodal contents

ASJC Scopus subject areas

  • Hardware and Architecture
  • Theoretical Computer Science
  • Computational Theory and Mathematics

Cite this

Describing and generating multimodal contents featuring affective lifelike agents with MPML. / Ishizuka, Mitsuru; Prendinger, Helmut.

In: New Generation Computing, Vol. 24, No. 2, 2006, p. 97-128.

@article{864de9af5eb4431c93ee81c2b0aa40ff,
title = "Describing and generating multimodal contents featuring affective lifelike agents with MPML",
keywords = "Affective computing, Content description language, Emotion, Lifelike agent, Multimodal contents",
author = "Mitsuru Ishizuka and Helmut Prendinger",
year = "2006",
doi = "10.1007/BF03037295",
language = "English",
volume = "24",
pages = "97--128",
journal = "New Generation Computing",
issn = "0288-3635",
publisher = "Springer Japan",
number = "2",
}

TY - JOUR
T1 - Describing and generating multimodal contents featuring affective lifelike agents with MPML
AU - Ishizuka, Mitsuru
AU - Prendinger, Helmut
PY - 2006
Y1 - 2006
KW - Affective computing
KW - Content description language
KW - Emotion
KW - Lifelike agent
KW - Multimodal contents
UR - http://www.scopus.com/inward/record.url?scp=33749580833&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33749580833&partnerID=8YFLogxK
U2 - 10.1007/BF03037295
DO - 10.1007/BF03037295
M3 - Article
AN - SCOPUS:33749580833
VL - 24
SP - 97
EP - 128
JO - New Generation Computing
JF - New Generation Computing
SN - 0288-3635
IS - 2
ER -