Automatic facial animation generation system of dancing characters considering emotion in dance and music

Wakana Asahina, Narumi Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, Shigeo Morishima

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    1 Citation (Scopus)

    Abstract

    In recent years, many 3D character dance animations have been created by amateur users with 3DCG animation editing tools (e.g., MikuMikuDance), but most of them are produced manually. An automatic facial animation system for dancing characters would therefore be useful for creating dance movies and visualizing impressions effectively. We address the challenging problem of estimating a dancing character's emotion (which we call "dance emotion"). Among previous work considering music features, DiPaola et al. [2006] proposed a music-driven, emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and succeeded in generating facial animation that matches the music's emotion. However, their model cannot express subtleties between two emotions, because the input music is divided sharply into a few moods using a Gaussian mixture model. In addition, they determine more detailed moods based on psychological rules that use score information, so their method requires MIDI data. In this paper, we propose a "dance emotion model" that visualizes a dancing character's emotion as facial expression. Our model is built frame by frame from coordinate information on an emotional space, obtained through perceptual experiments on a music and dance motion database, without MIDI data. Moreover, by considering displacement on the emotional space, we can express not only a single emotion but also subtleties between emotions. As a result, our system achieved higher accuracy than the previous work. A facial expression result can be created quickly by inputting audio data and synchronized motion. Its utility is shown through the comparison with previous work in Figure 1.
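    The abstract contrasts hard mood classification with a continuous emotional space in which a point between two emotions yields a mixture. A minimal sketch of that idea is below, assuming a Thayer-style valence-arousal plane with four hypothetical anchor emotions and inverse-distance blend weights; the paper's actual model is built from perceptual experiments, so the coordinates and weighting here are illustrative only.

    ```python
    import math

    # Hypothetical anchor emotions on a valence-arousal plane
    # (illustrative positions, not the paper's learned coordinates).
    EMOTIONS = {
        "exuberant": (1.0, 1.0),    # positive valence, high arousal
        "anxious":   (-1.0, 1.0),   # negative valence, high arousal
        "depressed": (-1.0, -1.0),  # negative valence, low arousal
        "content":   (1.0, -1.0),   # positive valence, low arousal
    }

    def blend_weights(valence: float, arousal: float) -> dict[str, float]:
        """Soft blend over anchor emotions by inverse distance, so a point
        between two anchors yields a mixture of expressions rather than a
        single hard mood label."""
        raw = {}
        for name, (v, a) in EMOTIONS.items():
            d = math.hypot(valence - v, arousal - a)
            raw[name] = 1.0 / (d + 1e-6)  # epsilon avoids division by zero
        total = sum(raw.values())
        return {name: w / total for name, w in raw.items()}
    ```

    A point at the center of the plane blends all four emotions equally, while a point at an anchor is dominated by that emotion; per-frame displacement of the point over time would then drive gradual transitions between expressions.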

    Original language: English
    Title of host publication: SIGGRAPH Asia 2015 Posters, SA 2015
    Publisher: Association for Computing Machinery, Inc
    ISBN (Print): 9781450339261
    DOI: 10.1145/2820926.2820935
    Publication status: Published - 2015 Nov 2
    Event: SIGGRAPH Asia, SA 2015 - Kobe, Japan
    Duration: 2015 Nov 2 – 2015 Nov 6

    Other

    Other: SIGGRAPH Asia, SA 2015
    Country: Japan
    City: Kobe
    Period: 15/11/2 – 15/11/6

    ASJC Scopus subject areas

    • Human-Computer Interaction
    • Computer Graphics and Computer-Aided Design
    • Computer Vision and Pattern Recognition


  • Cite this

    Asahina, W., Okada, N., Iwamoto, N., Masuda, T., Fukusato, T., & Morishima, S. (2015). Automatic facial animation generation system of dancing characters considering emotion in dance and music. In SIGGRAPH Asia 2015 Posters, SA 2015 [a11]. Association for Computing Machinery, Inc. https://doi.org/10.1145/2820926.2820935