Artistic anatomy based, real-time reproduction of facial expressions in 3D face models

Jun Ohya, Kazuyuki Ebihara, Jun Kurumisawa

Research output: Chapter

3 Citations (Scopus)

Abstract

This paper proposes a new real-time method for reproducing facial expressions in 3D face models realistically, based on anatomy for artists. To reproduce facial expressions in a face model, the detected expressions need to be converted to the data for deforming the face model. In the proposed method, an artist who has learned anatomy for artists creates arbitrary facial expressions in the 3D face model by mixing the reference expressions chosen by the artist, so that the synthesized expressions realistically represent the respective expressions displayed by real persons. The parameters obtained by these manual operations are used to construct the equations that convert the expression features obtained by the detection module into the displacement vectors of the vertices of the face model. During human communication through face models, the equations are used to reproduce the detected expressions in real time. The effectiveness and robustness of the proposed method were demonstrated by experimental results and demonstration systems.
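The abstract describes a linear pipeline: the artist's manual mixing of reference expressions yields parameters that map detected expression features to blend weights, and each frame's vertex displacements are a weighted sum of the reference displacement fields. The following is only a minimal sketch of that idea under assumed shapes and names (the paper's actual equations, feature set, and fitting procedure are not given here); all identifiers are hypothetical.

```python
# Hypothetical sketch: convert detected expression features to per-vertex
# displacement vectors by mixing artist-chosen reference expressions.
import numpy as np


class ExpressionReproducer:
    def __init__(self, reference_displacements, feature_to_weight_matrix):
        # reference_displacements: (K, V, 3) array, one displacement field per
        # artist-chosen reference expression (K expressions, V vertices).
        self.D = np.asarray(reference_displacements, dtype=float)
        # feature_to_weight_matrix: (K, F) matrix assumed to be fitted from the
        # artist's manual mixing parameters (F detected expression features).
        self.W = np.asarray(feature_to_weight_matrix, dtype=float)

    def displacements(self, features):
        """Map detected expression features (length F) to per-vertex
        displacement vectors (V, 3) by mixing the reference expressions."""
        weights = self.W @ np.asarray(features, dtype=float)  # (K,) blend weights
        return np.tensordot(weights, self.D, axes=1)          # (V, 3) displacements


if __name__ == "__main__":
    # Toy example: 3 reference expressions, 4 vertices, 2 detected features.
    rng = np.random.default_rng(0)
    reproducer = ExpressionReproducer(
        reference_displacements=rng.normal(size=(3, 4, 3)),
        feature_to_weight_matrix=rng.normal(size=(3, 2)),
    )
    print(reproducer.displacements([0.8, 0.1]).shape)  # -> (4, 3)
```

Because the per-frame work reduces to two small matrix products, this kind of formulation is consistent with the real-time requirement stated in the abstract.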

Original language: English
Title of host publication: International Conference on Multimedia Computing and Systems - Proceedings
Place of publication: Los Alamitos, CA, United States
Publisher: IEEE
Pages: 684-689
Number of pages: 6
Volume: 2
Publication status: Published - 1999
Externally published: Yes
Event: Proceedings of the 1999 6th International Conference on Multimedia Computing and Systems - IEEE ICMCS'99 - Florence, Italy
Duration: 7 Jun 1999 → 11 Jun 1999

Other

Other: Proceedings of the 1999 6th International Conference on Multimedia Computing and Systems - IEEE ICMCS'99
Florence, Italy
Period: 99/6/7 → 99/6/11

Fingerprint

  • Demonstrations
  • Communication

ASJC Scopus subject areas

  • Computer Science (all)
  • Engineering (all)

Cite this

Ohya, J., Ebihara, K., & Kurumisawa, J. (1999). Artistic anatomy based, real-time reproduction of facial expressions in 3D face models. In International Conference on Multimedia Computing and Systems - Proceedings (Vol. 2, pp. 684-689). Los Alamitos, CA, United States: IEEE.

Artistic anatomy based, real-time reproduction of facial expressions in 3D face models. / Ohya, Jun; Ebihara, Kazuyuki; Kurumisawa, Jun.

International Conference on Multimedia Computing and Systems - Proceedings. Vol. 2. Los Alamitos, CA, United States: IEEE, 1999. p. 684-689.

Research output: Chapter

Ohya, J, Ebihara, K & Kurumisawa, J 1999, Artistic anatomy based, real-time reproduction of facial expressions in 3D face models. in International Conference on Multimedia Computing and Systems - Proceedings. vol. 2, IEEE, Los Alamitos, CA, United States, pp. 684-689, Proceedings of the 1999 6th International Conference on Multimedia Computing and Systems - IEEE ICMCS'99, Florence, Italy, 99/6/7.
Ohya J, Ebihara K, Kurumisawa J. Artistic anatomy based, real-time reproduction of facial expressions in 3D face models. In International Conference on Multimedia Computing and Systems - Proceedings. Vol. 2. Los Alamitos, CA, United States: IEEE. 1999. p. 684-689
Ohya, Jun ; Ebihara, Kazuyuki ; Kurumisawa, Jun. / Artistic anatomy based, real-time reproduction of facial expressions in 3D face models. International Conference on Multimedia Computing and Systems - Proceedings. Vol. 2 Los Alamitos, CA, United States : IEEE, 1999. pp. 684-689
@inbook{b4349f4b443b4126b7a77b1340651cba,
title = "Artistic anatomy based, real-time reproduction of facial expressions in 3D face models",
abstract = "This paper proposes a new real-time method for reproducing facial expressions in 3D face models realistically based on anatomy for artists. To reproduce facial expressions in a face model, the detected expressions need to be converted to the data for deforming the face model. In the proposed method, an artist who has learned anatomy for artists creates arbitrary facial expressions in the 3D face model by mixing the reference expressions chosen by the artist so that the synthesized expressions realistically represent the respective expressions displayed by real persons. The parameters obtained by this manual operations are used to construct the equations that convert the expression features obtained by the detection module to the displacement vectors of the vertices of the face model. During human communications through face models, the equations are used to reproduce the detected expressions in real-time. The effectiveness and robustness of the proposed method were demonstrated by experimental results and demonstration systems.",
author = "Jun Ohya and Kazuyuki Ebihara and Jun Kurumisawa",
year = "1999",
language = "English",
volume = "2",
pages = "684--689",
booktitle = "International Conference on Multimedia Computing and Systems -Proceedings",
publisher = "IEEE",

}

TY - CHAP

T1 - Artistic anatomy based, real-time reproduction of facial expressions in 3D face models

AU - Ohya, Jun

AU - Ebihara, Kazuyuki

AU - Kurumisawa, Jun

PY - 1999

Y1 - 1999

N2 - This paper proposes a new real-time method for reproducing facial expressions in 3D face models realistically, based on anatomy for artists. To reproduce facial expressions in a face model, the detected expressions need to be converted to the data for deforming the face model. In the proposed method, an artist who has learned anatomy for artists creates arbitrary facial expressions in the 3D face model by mixing the reference expressions chosen by the artist, so that the synthesized expressions realistically represent the respective expressions displayed by real persons. The parameters obtained by these manual operations are used to construct the equations that convert the expression features obtained by the detection module into the displacement vectors of the vertices of the face model. During human communication through face models, the equations are used to reproduce the detected expressions in real time. The effectiveness and robustness of the proposed method were demonstrated by experimental results and demonstration systems.

AB - This paper proposes a new real-time method for reproducing facial expressions in 3D face models realistically, based on anatomy for artists. To reproduce facial expressions in a face model, the detected expressions need to be converted to the data for deforming the face model. In the proposed method, an artist who has learned anatomy for artists creates arbitrary facial expressions in the 3D face model by mixing the reference expressions chosen by the artist, so that the synthesized expressions realistically represent the respective expressions displayed by real persons. The parameters obtained by these manual operations are used to construct the equations that convert the expression features obtained by the detection module into the displacement vectors of the vertices of the face model. During human communication through face models, the equations are used to reproduce the detected expressions in real time. The effectiveness and robustness of the proposed method were demonstrated by experimental results and demonstration systems.

UR - http://www.scopus.com/inward/record.url?scp=0032640796&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0032640796&partnerID=8YFLogxK

M3 - Chapter

VL - 2

SP - 684

EP - 689

BT - International Conference on Multimedia Computing and Systems -Proceedings

PB - IEEE

CY - Los Alamitos, CA, United States

ER -