This report describes the synthesis of a virtual human, or avatar, with a realistic texture-mapped face that generates facial expressions and actions controlled by multimodal input signals. It covers a face-fitting tool that builds a 3-D face model from multiview camera images, and the use of the voice signal to determine the mouth-shape features while the avatar is speaking.
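The abstract does not specify how the voice signal drives the mouth shape; a common minimal approach is to map the short-time energy of each audio frame to a mouth-openness parameter. The sketch below illustrates that idea only; the function name, the `max_rms` normalization constant, and the frame format are assumptions, not the paper's method.

```python
import math

def mouth_openness(frame, max_rms=0.3):
    """Map one audio frame (floats in [-1, 1]) to a mouth-open value in [0, 1].

    Uses the frame's RMS energy, normalized by an assumed calibration
    constant max_rms, as a crude proxy for how wide the mouth should open.
    """
    if not frame:
        return 0.0
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return min(rms / max_rms, 1.0)

# Silence keeps the mouth closed; a louder vowel-like tone opens it partway.
silence = [0.0] * 160
vowel = [0.25 * math.sin(2 * math.pi * 220 * n / 16000) for n in range(160)]
print(mouth_openness(silence))  # -> 0.0
print(mouth_openness(vowel))
```

A real system would refine this with spectral features (e.g. formants) to select among several mouth shapes rather than a single openness value.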
ASJC Scopus subject areas
- Electrical and Electronic Engineering
- Signal Processing