TY - GEN
T1 - Facial image synthesis by hierarchical wire frame model
AU - Kitamura, Yasuichi
AU - Nagashima, Yoshio
AU - Ohya, Jun
AU - Kishino, Fumio
PY - 1992
Y1 - 1992
N2 - We have studied the generation of realistic computer graphics facial actions synchronized with actual facial actions. This paper describes a method of extracting facial feature points and reproducing facial actions for a virtual space teleconferencing system that achieves a realistic virtual presence. First, an individual facial wire frame model is needed; it is obtained with a 3D digitizer or from front and side images of the face. Second, the feature points around the eyes and the mouth are traced. For this purpose, the eye and mouth regions are monitored: when they move, the image intensity changes, allowing the eyes and the mouth to be located. Deformation of the facial skin cannot be extracted from facial action images; only the 2D movement of the eye and mouth regions can be extracted by tracing them in the front view of the face. We propose a new hierarchical wire frame model that can represent facial actions, including wrinkles. The lower layer of the wire frame moves according to the movement of the feature points; the upper layer slides over the lower layer and is deformed based on the lower layer's movement. By applying this method to a telecommunication system, we confirm very realistic facial action in virtual space.
UR - http://www.scopus.com/inward/record.url?scp=0026973503&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0026973503&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:0026973503
SN - 0819410187
T3 - Proceedings of SPIE - The International Society for Optical Engineering
SP - 1358
EP - 1365
BT - Proceedings of SPIE - The International Society for Optical Engineering
PB - International Society for Optical Engineering
T2 - Visual Communications and Image Processing '92
Y2 - 18 November 1992 through 20 November 1992
ER -