Face and gesture capturing and cloning for life-like agent

Research output: Paper › peer review

Abstract

Face and gesture cloning is essential to make a life-like agent more believable and to give it the personality and character of a target person. To realize cloning, accurate face capture and motion capture are indispensable for obtaining corpus data on facial expressions, speaking scenes, and gestures. In this paper, our recent approach to capturing the personal features of face and gesture is presented. For face capturing, the face location and angles are estimated from a video sequence with a personal 3D face model, and synthetic face model data are then imposed onto the frames to realize an automatic stand-in system or a multimodal translation system. Stand-ins are a common technique for movies and TV programs in foreign languages. The current stand-in, which substitutes only the voice channel, results in an awkward mismatch with the mouth motion. Videophones with automatic voice translation are expected to be widely used in the near future, and they may face the same problem without lip-synchronized translation of the speaking face image. In this paper, we introduce a method to track the motion of the face from the video image and then replace the face, or only the mouth, with a synthesized one that is synchronized with a synthetic or spoken voice. This is one of the key technologies not only for speaking-image translation and communication systems, but also for interactive entertainment systems. An interactive movie system is also introduced as an entertainment application. Capturing and copying a facial expression based on physics-based facial muscle constraints has already been presented [6], so that part is not described in this paper. For gesture capturing, commercially available motion capture products give us fairly precise movements of human body segments but do not measure enough information to define the skeletal posture in its entirety.
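The abstract does not detail how the estimated head pose is used to impose the synthetic face model onto the video frames. As a rough illustrative sketch only (not the paper's actual pipeline), the geometry amounts to transforming the model vertices by the estimated rotation and translation and projecting them through a pinhole camera; the function name `project_points` and the intrinsics `f`, `cx`, `cy` are assumptions for this example:

```python
import numpy as np

def project_points(points_3d, R, t, f, cx, cy):
    """Project 3D face-model vertices into the image plane.

    points_3d: (N, 3) vertices in the model frame.
    R, t: estimated head rotation (3x3) and translation (3,).
    f: focal length in pixels; (cx, cy): principal point.
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t            # model frame -> camera frame
    u = f * cam[:, 0] / cam[:, 2] + cx   # perspective division, horizontal
    v = f * cam[:, 1] / cam[:, 2] + cy   # perspective division, vertical
    return np.stack([u, v], axis=1)

# A vertex on the optical axis lands at the principal point.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])            # head 2 units in front of the camera
uv = project_points(np.array([[0.0, 0.0, 0.0],
                              [0.1, 0.0, 0.0]]),
                    R, t, f=500.0, cx=320.0, cy=240.0)
# uv[0] == (320., 240.); uv[1] == (345., 240.)
```

Once the model vertices are mapped to pixel coordinates this way, the synthesized face or mouth region can be composited over the corresponding area of the frame.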
This paper describes how to obtain the complete posture of the skeletal structure with the help of marker locations relative to the bones, derived from MRI data sets.
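The abstract does not spell out the fitting step, but once the MRI data sets give each marker's offset in its bone's local frame, recovering that bone's rigid pose from the captured world-space marker positions is a least-squares rigid alignment. A minimal sketch using the standard Kabsch/SVD method (the function name `fit_bone_pose` is an assumption for this example, not from the paper):

```python
import numpy as np

def fit_bone_pose(local_markers, world_markers):
    """Least-squares rigid fit (Kabsch/SVD): find R, t such that
    world ≈ local @ R.T + t, given marker offsets in the bone frame
    (e.g. derived from MRI) and their captured world positions.
    Both inputs are (N, 3) with N >= 3 non-collinear markers.
    """
    lc = local_markers.mean(axis=0)
    wc = world_markers.mean(axis=0)
    H = (local_markers - lc).T @ (world_markers - wc)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ lc
    return R, t

# Synthetic check: markers on a bone, rotated 90 degrees about z and translated.
local = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
world = local @ R_true.T + t_true
R_est, t_est = fit_bone_pose(local, world)
```

Fitting every bone this way yields full orientations rather than only marker trajectories, which is what "complete posture of the skeletal structure" requires; joint postures then follow from the relative rotations of adjacent bones.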

Original language: English
Pages: 171-176
Number of pages: 6
Publication status: Published - 1 Dec 2004
Event: RO-MAN 2004 - 13th IEEE International Workshop on Robot and Human Interactive Communication - Okayama, Japan
Duration: 20 Sep 2004 - 22 Sep 2004

Conference

Conference: RO-MAN 2004 - 13th IEEE International Workshop on Robot and Human Interactive Communication
Country: Japan
City: Okayama
Period: 04/9/20 - 04/9/22

ASJC Scopus subject areas

  • Engineering (all)

