Describing and generating multimodal contents featuring affective lifelike agents with MPML

Mitsuru Ishizuka*, Helmut Prendinger


Research output: Article › peer-review

20 citations (Scopus)


In this paper, we provide an overview of our research on multimodal media and content using embodied lifelike agents. In particular, we describe our research centered on MPML (Multimodal Presentation Markup Language). MPML allows people to write and produce multimodal content easily, and serves as a core for integrating the various components and functionalities important for multimodal media. To demonstrate the benefits and usability of MPML in a variety of environments, including the animated Web, 3D VRML space, mobile phones, and the physical world with a humanoid robot, several versions of MPML have been developed while keeping its basic format. Since the emotional behavior of an agent is an important factor in making agents lifelike, and in having them accepted as an attractive and friendly style of human-computer interaction, emotion-related functions have been emphasized in MPML. To alleviate the workload of authoring content, the agents must also be endowed with a certain level of autonomy. We show some of our approaches towards this end.
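The abstract does not reproduce any MPML syntax. As a rough illustration of the kind of authoring the abstract describes (an agent presentation annotated with emotion tags), a hypothetical MPML-style script might look like the following; the tag and attribute names here are illustrative assumptions, not the actual MPML specification:

```xml
<!-- Hypothetical sketch of an MPML-style presentation script.
     Element names (scene, agent, speak, emotion, act) are assumptions
     made for illustration; consult the MPML papers for real syntax. -->
<mpml>
  <head>
    <title>Product demo with a lifelike agent</title>
  </head>
  <body>
    <scene background="showroom.jpg">
      <agent id="presenter" character="genie"/>
      <!-- Emotion annotation influences voice, gesture, and expression -->
      <emotion type="joy" intensity="0.8">
        <speak agent="presenter">Welcome! Let me show you our new product.</speak>
      </emotion>
      <act agent="presenter" action="point" target="product-image"/>
    </scene>
  </body>
</mpml>
```

The appeal described in the abstract is that such a declarative script can be retargeted to different output environments (Web animation, VRML, mobile phones, a humanoid robot) while the authored structure stays the same.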

Journal: New Generation Computing
Publication status: Published - 2006

ASJC Scopus subject areas

  • Hardware and Architecture
  • Theoretical Computer Science
  • Computational Theory and Mathematics
