Real-Time 3-D Facial Image Reconstruction for Virtual Space Teleconferencing

Kazuyuki Ebihara, Noriko Suzuki, Jun Ohya, Fumio Kishino

Research output: Contribution to journal › Article

Abstract

This paper proposes a new method for building 3-D facial image models that allow faithful reconstruction of facial images in virtual space teleconferencing, using 3-D measurement to detect various facial expressions. In the proposed method, many dots are first painted on the face. Then, for a set of facial expressions (eight in this study) chosen according to the actions of the major facial muscles, the 3-D displacement vector from the normal (neutral) face is measured at each dot and recorded in the facial image plane as a reference vector. When a facial image is reconstructed, the 2-D displacement of each marker previously placed on the face is detected by tracking and represented as the sum of the two enclosing reference vectors. Based on these data, the vertices of a 3-D wireframe face model (WFM) are moved appropriately, and the facial expression is reconstructed.
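The decomposition step described above can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: a tracked 2-D marker displacement is expressed as a weighted sum of two 2-D reference vectors, and the same weights are then applied to the corresponding 3-D reference displacements to obtain the 3-D displacement of a wireframe-model vertex. All names and numeric values are hypothetical.

```python
import numpy as np

def decompose_2d(d, r1, r2):
    """Solve d = a*r1 + b*r2 for the weights (a, b).

    d, r1, r2 are 2-D vectors; r1 and r2 are the reference
    displacement vectors enclosing d in the image plane.
    """
    A = np.column_stack([r1, r2])   # 2x2 matrix of reference vectors
    return np.linalg.solve(A, d)    # weights (a, b)

def reconstruct_3d(weights, R1, R2):
    """Apply the 2-D weights to the 3-D reference displacements."""
    a, b = weights
    return a * R1 + b * R2

# Illustrative 2-D reference vectors for two basic expressions at one dot
r1 = np.array([1.0, 0.0])
r2 = np.array([0.0, 1.0])
# Their measured 3-D counterparts (made-up values)
R1 = np.array([1.0, 0.0, 0.5])
R2 = np.array([0.0, 1.0, -0.2])

d = np.array([0.4, 0.6])            # tracked 2-D marker displacement
w = decompose_2d(d, r1, r2)         # -> weights [0.4, 0.6]
v = reconstruct_3d(w, R1, R2)       # 3-D displacement applied to the WFM vertex
```

In this sketch the 2-D solve recovers the per-expression weights, and the vertex displacement is their linear combination of the stored 3-D reference displacements; the real system would also map dot measurements to WFM vertices and constrain the weights to the enclosing pair.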

Keywords

  • 3-D model
  • Facial image
  • Real-time processing
  • Reconstruction
  • Virtual space teleconferencing

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
