Internet communication using real-time facial expression analysis and synthesis

Naiwala P. Chandrasiri, Takeshi Naemura, Mitsuru Ishizuka, Hiroshi Harashima, István Barakonyi

Research output: Contribution to journal › Article

10 Citations (Scopus)

Abstract

A system is now available that animates 3D facial agents based on real-time facial expression analysis techniques and research on synthesizing facial expressions and text-to-speech capabilities. The system consists of three main modules: a real-time facial expression analysis component that calculates MPEG-4 facial animation parameters (FAPs), an affective 3D agent with facial expression synthesis and text-to-speech capabilities, and a communication module. Subjective evaluations involving graduate and undergraduate students confirm the communication system's effectiveness. Potential applications include virtual teleconferencing, entertainment, computer games, human-to-human communication training, and distance learning.
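The three-module pipeline the abstract describes can be illustrated with a minimal sketch: analysis extracts FAP values from a video frame, a communication module serializes them for the network, and a remote agent consumes them for synthesis. All names, parameter values, and the wire format below are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class FapFrame:
    """A single frame of MPEG-4 Facial Animation Parameter values (illustrative)."""
    faps: dict[str, float] = field(default_factory=dict)

def analyze_frame(frame_pixels) -> FapFrame:
    # Stand-in for the real-time analysis module; a real system would track
    # facial features in the pixels and map them to FAP displacements.
    return FapFrame(faps={"open_jaw": 0.2, "raise_l_cornerlip": 0.5})

def encode(frame: FapFrame) -> bytes:
    # Communication module: serialize FAPs compactly for transmission.
    return ",".join(f"{k}={v}" for k, v in sorted(frame.faps.items())).encode()

def decode(payload: bytes) -> FapFrame:
    pairs = (item.split("=") for item in payload.decode().split(","))
    return FapFrame(faps={k: float(v) for k, v in pairs})

def animate(frame: FapFrame) -> str:
    # Synthesis module: a real agent would deform a 3D face mesh and speak;
    # here we only report which parameters would drive the animation.
    return f"animating {len(frame.faps)} FAPs"

sent = encode(analyze_frame(frame_pixels=None))
received = decode(sent)
print(animate(received))  # -> animating 2 FAPs
```

Keeping the wire format down to a handful of FAP values per frame (rather than video) is what makes this kind of system practical over ordinary Internet links.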

Original language: English
Pages (from-to): 20-29
Number of pages: 10
Journal: IEEE Multimedia
Volume: 11
Issue number: 3
DOI: 10.1109/MMUL.2004.10
Publication status: Published - 2004 Jul
Externally published: Yes

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Computer Graphics and Computer-Aided Design
  • Software
  • Theoretical Computer Science
  • Computational Theory and Mathematics


Cite this

Chandrasiri, N. P., Naemura, T., Ishizuka, M., Harashima, H., & Barakonyi, I. (2004). Internet communication using real-time facial expression analysis and synthesis. IEEE Multimedia, 11(3), 20-29. https://doi.org/10.1109/MMUL.2004.10