Data-driven speech animation synthesis focusing on realistic inside of the mouth

Masahide Kawai, Tomoyori Iwao, Daisuke Mima, Akinobu Maejima, Shigeo Morishima

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Speech animation synthesis remains a challenging topic in the field of computer graphics. Despite much progress, a detailed appearance of the inner mouth, such as the tip of the tongue nipped between the teeth or the visible back of the tongue, has not been achieved in the resulting animations. To solve this problem, we propose a data-driven speech animation synthesis method that focuses on the inside of the mouth. First, we classify the inner mouth into teeth, labeled by the teeth-opening distance, and a tongue, labeled according to phoneme information. We then insert these into an existing speech animation based on the teeth-opening distance and the phoneme information. Finally, we apply a patch-based texture synthesis technique, using a database of 2,213 images created from 7 subjects, to the resulting animation. With the proposed method, a speech animation with a realistic inner mouth can be generated automatically from an existing speech animation created by previous methods.
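The abstract describes a retrieval step in which inner-mouth images, labeled by phoneme and teeth-opening distance, are matched to each animation frame. The sketch below illustrates one plausible form of that selection step; all names and the toy database are hypothetical stand-ins, not the authors' actual code or data.

```python
from dataclasses import dataclass

@dataclass
class MouthSample:
    phoneme: str    # phoneme label of the captured frame
    opening: float  # teeth-opening distance (illustrative units)
    image_id: int   # index into the captured image database

# Toy stand-in for the 2,213-image database built from 7 subjects.
DATABASE = [
    MouthSample("a", 12.0, 0),
    MouthSample("a", 20.0, 1),
    MouthSample("i", 4.0, 2),
    MouthSample("u", 6.0, 3),
]

def select_inner_mouth(phoneme: str, opening: float) -> MouthSample:
    """Pick the stored sample with the same phoneme whose
    teeth-opening distance is closest to the current frame's."""
    candidates = [s for s in DATABASE if s.phoneme == phoneme] or DATABASE
    return min(candidates, key=lambda s: abs(s.opening - opening))

# Example: an "a" frame with opening distance 18 retrieves the
# stored "a" sample whose opening (20.0) is nearest.
best = select_inner_mouth("a", 18.0)
```

In the paper's full pipeline, the retrieved image would then be composited into the mouth region and refined with patch-based texture synthesis; that stage is omitted here.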

Original language: English
Pages (from-to): 401-409
Number of pages: 9
Journal: Journal of Information Processing
Volume: 22
Issue number: 2
Publication status: Published - 2014

Keywords

  • Detailization
  • Inner mouth
  • Phoneme combination
  • Skull bone
  • Speech animation

ASJC Scopus subject areas

  • Computer Science (all)
