Construction of audio-visual speech corpus using motion-capture system and corpus based facial animation

Tatsuo Yotsukura*, Shigeo Morishima, Satoshi Nakamura

*Corresponding author of this work

Research output: Article, peer-reviewed

4 Citations (Scopus)

Abstract

An accurate audio-visual speech corpus is indispensable for talking-heads research. This paper presents our audio-visual speech corpus collection and proposes a head-movement normalization method and a facial motion generation method. The corpus contains speech data, facial video data, and the positions and movements of facial organs, and consists of Japanese phoneme-balanced sentences uttered by a female native speaker. Accurate facial capture is realized with an optical motion-capture system: we captured high-resolution 3D data by arranging many markers on the speaker's face. In addition, we propose a method of acquiring facial movements while removing head movements, using an affine transformation to compute the displacements of the facial organs alone. Finally, to easily create facial animation from this motion data, we propose a technique for assigning the captured data to a facial polygon model. Evaluation results demonstrate the effectiveness of the proposed facial motion generation method and show the relationship between the number of markers and the error.
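The head-movement normalization described in the abstract can be sketched as follows: fit an affine transformation from a set of rigid head-reference markers at a rest pose to the same markers in the current frame, then apply its inverse to all face markers so that only the pure facial-organ displacements remain. This is a minimal illustration, not the authors' exact formulation; the marker layout, the choice of reference markers, and the function names here are assumptions.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of dst ≈ src @ A.T + t for (N, 3) marker arrays."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])          # homogeneous coords, (N, 4)
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)    # P is (4, 3): [A.T; t]
    return P[:3].T, P[3]                           # A (3x3), t (3,)

def remove_head_motion(ref_rest, ref_frame, markers_frame):
    """Map this frame's markers back into the rest-pose head coordinates.

    ref_rest    : (M, 3) rigid head-reference markers at the rest pose
    ref_frame   : (M, 3) the same markers in the current frame
    markers_frame: (N, 3) all face markers in the current frame
    """
    A, t = estimate_affine(ref_rest, ref_frame)    # head pose change
    A_inv = np.linalg.inv(A)
    return (markers_frame - t) @ A_inv.T           # undo the head motion
```

With at least four non-coplanar reference markers the affine fit is well determined; applying its inverse leaves only the non-rigid deformation of the facial surface, which is the signal stored in the corpus.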

Original language: English
Pages (from-to): 2477-2483
Number of pages: 7
Journal: IEICE Transactions on Information and Systems
Volume: E88-D
Issue number: 11
DOI
Publication status: Published - Nov 2005

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Artificial Intelligence
