Abstract
This paper reports on the synthesis of a virtual human, or avatar, with a realistic texture-mapped face that generates facial expressions and actions controlled by a multimodal input signal. It covers a face-fitting tool that builds a 3-D face model from multiview camera images, and the use of the voice signal to determine the mouth shape when the avatar is speaking.
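The abstract gives no implementation details, so the following is only an illustrative sketch of the general idea of driving a mouth shape from a voice signal. It assumes a simple short-time-energy mapping; the paper's actual voice analysis is not reproduced here.

```python
import numpy as np

def mouth_openness_from_audio(samples, sample_rate, frame_ms=20):
    """Map short-time audio energy to a normalized mouth-openness value per frame.

    Illustrative stand-in for voice-driven mouth-shape control; it is not the
    method described in the paper.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    openness = np.empty(n_frames)
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        openness[i] = np.sqrt(np.mean(frame ** 2))  # RMS energy of the frame
    # Normalize to [0, 1] so the value could drive a mouth blend-shape weight.
    peak = openness.max() if n_frames > 0 else 0.0
    return openness / peak if peak > 0 else openness
```

A value near 0 would correspond to a closed mouth and a value near 1 to a fully open one; a real system would instead derive phoneme- or viseme-level features from the voice signal.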
Original language | English
---|---
Pages (from-to) | 26-34
Number of pages | 9
Journal | IEEE Signal Processing Magazine
Volume | 18
Issue number | 3
DOIs |
Publication status | Published - May 2001
Externally published | Yes
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering
- Applied Mathematics