Facial image synthesis by hierarchical wire frame model

Yasuichi Kitamura, Yoshio Nagashima, Jun Ohya, Fumio Kishino

Research output: Conference contribution

2 Citations (Scopus)

Abstract

We have studied the generation of realistic computer-graphics facial actions synchronized with a person's actual facial actions. This paper describes a method of extracting facial feature points and reproducing facial actions for a virtual space teleconferencing system that achieves a realistic virtual presence. First, an individual facial wire frame model is needed; it is built with a 3D digitizer or from front and side images of the face. Second, the feature points, located around the eyes and the mouth, are traced. For this purpose, the eye and mouth regions are monitored: when they move, the image intensity changes, which allows the eyes and the mouth to be located. From facial action images alone, the deformation of the facial skin cannot be extracted; tracing the eye and mouth regions in the front view of the face yields only the 2D movement of those regions. We therefore propose a new hierarchical wire frame model that can represent facial actions, including wrinkles. The lower layer of the wire frame moves according to the movement of the feature points; the upper layer slides over the lower layer and is deformed based on the lower layer's movement. By applying this method to a telecommunication system, we confirmed very realistic facial action in virtual space.
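The two-layer deformation described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the paper does not specify how non-feature vertices follow the tracked points, so the inverse-distance weighting for the lower layer and the nearest-vertex coupling for the upper layer are assumptions made here for illustration.

```python
# Sketch of a hierarchical two-layer wire frame update (assumed scheme, not
# the paper's actual algorithm). Lower layer: tracked feature points snap to
# their new 2D positions and other vertices follow by inverse-distance
# weighting. Upper layer: slides over the lower layer, inheriting the
# displacement of its nearest lower-layer vertex.
import numpy as np

def deform_lower(lower, feature_idx, feature_targets):
    """Move lower-layer vertices (N x 2) given tracked feature points.

    lower           : (N, 2) vertex positions
    feature_idx     : (K,) indices of feature vertices in `lower`
    feature_targets : (K, 2) new 2D positions of the feature points
    """
    displaced = lower.copy()
    disp = feature_targets - lower[feature_idx]  # feature displacements
    for i, v in enumerate(lower):
        d = np.linalg.norm(lower[feature_idx] - v, axis=1) + 1e-6
        w = (1.0 / d) / np.sum(1.0 / d)          # normalized inverse-distance weights
        displaced[i] = v + w @ disp              # weighted displacement
    return displaced

def slide_upper(upper, lower_before, lower_after):
    """Deform the upper layer from the lower layer's motion: each upper
    vertex inherits the displacement of its nearest lower-layer vertex."""
    out = upper.copy()
    for i, v in enumerate(upper):
        j = np.argmin(np.linalg.norm(lower_before - v, axis=1))
        out[i] = v + (lower_after[j] - lower_before[j])
    return out
```

Because the upper layer is driven only indirectly, through the lower layer's motion, it can fold and bunch relative to the layer beneath it, which is what lets a layered model of this kind represent effects such as wrinkles.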

Original language: English
Host publication title: Proceedings of SPIE - The International Society for Optical Engineering
Place of publication: Bellingham, WA, United States
Publisher: Publ by Int Soc for Optical Engineering
Pages: 1358-1365
Number of pages: 8
Volume: 1818
Edition: pt 3
ISBN (print): 0819410187
Publication status: Published - 1992
Published externally: Yes
Event: Visual Communications and Image Processing '92 - Boston, MA, USA
Duration: 18 Nov 1992 - 20 Nov 1992



ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Condensed Matter Physics

Cite this

Kitamura, Y., Nagashima, Y., Ohya, J., & Kishino, F. (1992). Facial image synthesis by hierarchical wire frame model. In Proceedings of SPIE - The International Society for Optical Engineering (pt 3 ed., Vol. 1818, pp. 1358-1365). Bellingham, WA, United States: Publ by Int Soc for Optical Engineering.
