This paper proposes a real-time method for realistically reproducing facial expressions in 3D face models, grounded in anatomy for artists. To reproduce facial expressions in a face model, the detected expressions must be converted into data for deforming the model. In the proposed method, an artist trained in anatomy for artists creates arbitrary facial expressions in the 3D face model by mixing reference expressions chosen by the artist, so that the synthesized expressions realistically represent the corresponding expressions displayed by real persons. The parameters obtained through these manual operations are used to construct equations that convert the expression features obtained by the detection module into displacement vectors for the vertices of the face model. During human communication through face models, these equations reproduce the detected expressions in real time. The effectiveness and robustness of the proposed method were demonstrated through experimental results and demonstration systems.
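The pipeline described in the abstract (detected expression features mapped, via equations fitted from the artist's manual mixes, to per-vertex displacement vectors built from reference expressions) can be sketched as a linear model. This is only an illustrative reconstruction, not the paper's actual implementation; the array sizes, the matrix `A`, and the function `deform` are hypothetical names and assume the mapping and the mixing are both linear.

```python
import numpy as np

# Toy sketch, assuming a linear model: each artist-chosen reference
# expression k contributes a displacement field D[k] (V x 3), and the
# detected feature vector f is mapped to mixing weights w = A @ f,
# where A stands in for the equations fitted from the artist's mixes.
rng = np.random.default_rng(0)

V = 5  # number of face-model vertices (toy size)
K = 3  # number of reference expressions chosen by the artist
F = 4  # number of expression features from the detection module

D = rng.normal(size=(K, V, 3))  # artist-authored reference displacement fields
A = rng.normal(size=(K, F))     # hypothetical fitted feature-to-weight mapping

def deform(features):
    """Map detected expression features to per-vertex displacement vectors."""
    w = A @ features                   # mixing weight for each reference expression
    return np.tensordot(w, D, axes=1)  # weighted sum of fields -> (V, 3) array

f = rng.normal(size=F)     # one frame of detected expression features
offsets = deform(f)        # displacement vector for every vertex
print(offsets.shape)       # (5, 3)
```

Because each frame reduces to one small matrix-vector product and a weighted sum, such a mapping is cheap enough for the real-time reproduction the paper targets.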
Publication status: Published - 1 Jan 1999
Event: Proceedings of the 1999 6th International Conference on Multimedia Computing and Systems - IEEE ICMCS'99 - Florence, Italy
Duration: 7 Jun 1999 → 11 Jun 1999
ASJC Scopus subject areas
- Computer Science (General)