In this paper, we describe recent research results on generating an avatar's face in real time that exactly copies a real person's face. To synthesize a realistic avatar, it is essential to precisely reproduce the emotion and impression contained in the original face image and voice. A face-fitting tool based on multi-angle camera images is introduced to build a realistic 3D face model whose texture and geometry closely match the original. When the avatar speaks, the voice signal is essential for determining the mouth shape, so a real-time mouth shape control mechanism is proposed that converts speech parameters into lip shape parameters using a multilayered neural network. For dynamic modeling of facial expression, a muscle structure constraint is introduced to generate natural facial expressions with only a few parameters. We also attempted to obtain automatically the muscle parameters that determine an expression, from local motion vectors on the face computed by optical flow over a video sequence. Finally, we present an approach that enables the modeling of emotions appearing on faces. A system based on this approach helps to analyze, synthesize, and code face images at the emotional level.
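The speech-to-lip conversion described above can be sketched as a small feed-forward network that maps per-frame speech parameters to lip shape parameters. This is a minimal illustration, not the paper's implementation: the feature dimensions, layer sizes, and random (untrained) weights standing in for learned ones are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (illustrative, not from the paper):
N_SPEECH = 12   # speech parameters per audio frame (e.g. cepstral coefficients)
N_HIDDEN = 16   # hidden layer size
N_LIP = 4       # lip shape parameters (e.g. mouth width, height, protrusion)

# Random weights stand in for weights a real system would learn
# by backpropagation on paired speech/lip-shape data.
W1 = rng.standard_normal((N_HIDDEN, N_SPEECH)) * 0.1
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal((N_LIP, N_HIDDEN)) * 0.1
b2 = np.zeros(N_LIP)

def speech_to_lip(speech_frame):
    """One forward pass: speech parameters -> lip shape parameters."""
    h = np.tanh(W1 @ speech_frame + b1)  # hidden layer with tanh activation
    return W2 @ h + b2                   # linear output layer

# Per-frame conversion, as a real-time system would run it frame by frame.
frame = rng.standard_normal(N_SPEECH)
lip = speech_to_lip(frame)
print(lip.shape)
```

Running the forward pass once per audio frame is what makes the mapping usable in real time: each frame costs only two small matrix-vector products.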
|Publication status||Published - 2000 12 1|
|Event||10th IEEE Workshop on Neural Networks for Signal Processing (NNSP2000) - Sydney, Australia|
Duration: 2000 12 11 → 2000 12 13
|Other||10th IEEE Workshop on Neural Networks for Signal Processing (NNSP2000)|
|Period||00/12/11 → 00/12/13|