Face and gesture capturing and cloning for life-like agent

Research output: Contribution to conferencePaper

Abstract

Face and gesture cloning is essential to make a life-like agent more believable and to give it the personality and character of a target person. To realize cloning, accurate face capture and motion capture are indispensable for collecting corpus data of facial expressions, speaking scenes, and gestures. In this paper, our recent approach to capturing the personal features of face and gesture is presented. For face capturing, the face location and angles are estimated from a video sequence using a personal 3D face model, and synthetic face model data are then superimposed onto the frames to realize an automatic stand-in system or a multimodal translation system. A stand-in is a common technique for dubbing movies and TV programs into foreign languages. Current stand-ins, which substitute only the voice channel, result in an awkward mismatch with the mouth motion. Videophones with automatic voice translation are expected to be widely used in the near future, and they may face the same problem without lip-synchronized translation of the speaking face image. In this paper, we introduce a method that tracks the motion of the face in the video image and then replaces the face, or only the mouth, with a synthesized one synchronized with a synthetic or spoken voice. This is one of the key technologies not only for speaking-image translation and communication systems but also for interactive entertainment systems. An interactive movie system is also introduced as an entertainment application. Capturing and copying facial expressions based on physics-based facial muscle constraints has already been presented [6], so that part is not described in this paper. For gesture capturing, commercially available motion capture products give fairly precise movements of the human body segments but do not measure enough information to define the skeletal posture in its entirety.
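The abstract only sketches the face-tracking step. As an illustration of the kind of model-based estimation involved, and not the authors' actual algorithm, the following is a minimal sketch that recovers head rotation and scale by aligning a personal 3D face model's feature points to tracked 2D image points under a weak-perspective camera; the function name and point arrays are hypothetical:

```python
import numpy as np

def estimate_head_pose(model_pts, image_pts):
    """Estimate head rotation and scale from tracked 2D feature points.

    model_pts: Nx3 feature points of the personal 3D face model.
    image_pts: Nx2 corresponding points tracked in the video frame.
    Fits a weak-perspective projection by least squares and
    orthonormalizes the result into an approximate rotation.
    """
    X = model_pts - model_pts.mean(axis=0)   # centered 3D model points
    x = image_pts - image_pts.mean(axis=0)   # centered 2D image points
    # Solve x ~= X @ P.T for the 2x3 projection P (scaled rotation rows).
    P, _, _, _ = np.linalg.lstsq(X, x, rcond=None)
    P = P.T                                   # 2x3 projection matrix
    s = (np.linalg.norm(P[0]) + np.linalg.norm(P[1])) / 2.0  # scale
    r1 = P[0] / np.linalg.norm(P[0])          # first rotation row
    r2 = P[1] / np.linalg.norm(P[1])          # second rotation row
    r3 = np.cross(r1, r2)                     # third row by orthogonality
    R = np.vstack([r1, r2, r3])               # approximate head rotation
    t = image_pts.mean(axis=0)                # 2D translation (centroid)
    return R, s, t
```

With the pose known, a synthesized face or mouth region can be rendered in the same orientation and composited over the frame, which is the substitution the abstract describes.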
This paper describes how to obtain the complete posture of the skeletal structure with the help of marker locations relative to the bones, derived from MRI data sets.
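The marker-to-skeleton step can be illustrated with a standard rigid-body fit; this is a sketch of one common technique (the Kabsch / orthogonal Procrustes alignment), not the paper's specific method: given a bone's marker offsets in the bone frame (as derived from MRI) and their measured world positions from motion capture, the bone's rigid pose follows in closed form.

```python
import numpy as np

def bone_pose_from_markers(local_offsets, world_markers):
    """Recover a bone's rigid pose from surface markers.

    local_offsets: Nx3 marker positions in the bone frame (from MRI).
    world_markers: Nx3 measured marker positions from motion capture.
    Returns rotation R and translation t such that
    world_i ~= R @ local_i + t (Kabsch alignment).
    """
    a = local_offsets - local_offsets.mean(axis=0)
    b = world_markers - world_markers.mean(axis=0)
    H = a.T @ b                               # 3x3 covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                        # optimal proper rotation
    t = world_markers.mean(axis=0) - R @ local_offsets.mean(axis=0)
    return R, t
```

Repeating this fit per bone yields joint orientations as well as positions, which is the extra information, beyond raw marker trajectories, needed to define the skeletal posture in its entirety.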

Original language: English
Pages: 171-176
Number of pages: 6
Publication status: Published - 2004 Dec 1
Event: RO-MAN 2004 - 13th IEEE International Workshop on Robot and Human Interactive Communication - Okayama, Japan
Duration: 2004 Sep 20 – 2004 Sep 22

Conference

Conference: RO-MAN 2004 - 13th IEEE International Workshop on Robot and Human Interactive Communication
Country: Japan
City: Okayama
Period: 04/9/20 – 04/9/22

ASJC Scopus subject areas

  • Engineering (all)


Cite this

    Morishima, S. (2004). Face and gesture capturing and cloning for life-like agent. 171-176. Paper presented at RO-MAN 2004 - 13th IEEE International Workshop on Robot and Human Interactive Communication, Okayama, Japan.