Automatic facial animation generation system of dancing characters considering emotion in dance and music

Wakana Asahina, Narumi Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, Shigeo Morishima

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    1 Citation (Scopus)

    Abstract

    In recent years, many 3D character dance animation videos have been created by amateur users with 3DCG animation editing tools (e.g., MikuMikuDance). However, most of them are created manually, so an automatic facial animation system for dancing characters would be useful for creating dance videos and visualizing impressions effectively. We therefore address the challenging problem of estimating a dancing character's emotions (which we call "dance emotion"). Among previous work considering music features, DiPaola et al. [2006] proposed a music-driven emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation that matches the music's emotion. However, their model cannot express subtle emotions lying between two moods, because the input music is divided sharply into a few moods using a Gaussian mixture model. In addition, they determine more detailed moods based on psychological rules that use score information, so their method requires MIDI data. In this paper, we propose a "dance emotion model" to visualize a dancing character's emotion as facial expressions. Our model is built from frame-by-frame coordinates in an emotional space, obtained through a perceptual experiment on a music and dance motion database, without MIDI data. Moreover, by considering displacement in the emotional space, we can express not only a single emotion but also subtle blends of emotions. As a result, our system achieved higher accuracy than the previous work. Facial expressions can be generated immediately from input audio data and synchronized motion. The utility is shown through the comparison with previous work in Figure 1.
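    The abstract only sketches the method, and no implementation is published with the poster. As a rough, non-authoritative illustration of the core idea (per-frame coordinates in an emotional space mapped to continuously blended facial expressions, rather than hard mood classes), the Python sketch below assumes a 2D arousal-valence space in the spirit of the Thayer model cited above; the basis-emotion anchors, softmax blending, and smoothing scheme are hypothetical choices for illustration, not the authors' actual method.

    import numpy as np

    # Hypothetical anchor positions of four basis emotions in a 2D
    # arousal-valence space (Thayer-style), roughly one per quadrant.
    # The actual coordinates and emotion set used by the authors are
    # not given in the abstract; these values are illustrative only.
    BASIS_EMOTIONS = {
        "joy":     np.array([ 0.7,  0.7]),   # high arousal, positive valence
        "anger":   np.array([ 0.7, -0.7]),   # high arousal, negative valence
        "sadness": np.array([-0.7, -0.7]),   # low arousal, negative valence
        "calm":    np.array([-0.7,  0.7]),   # low arousal, positive valence
    }

    def blend_weights(point, temperature=0.5):
        """Convert one frame's (arousal, valence) coordinate into soft
        blend weights over the basis expressions.

        Unlike a hard mood label, the weights vary continuously with the
        coordinate, so a point lying between two anchors yields a mixture
        of the corresponding expressions.
        """
        names = list(BASIS_EMOTIONS)
        dists = np.array([np.linalg.norm(point - BASIS_EMOTIONS[n]) for n in names])
        # Softmax over negative distance: closer anchors get larger weights.
        logits = -dists / temperature
        w = np.exp(logits - logits.max())
        w /= w.sum()
        return dict(zip(names, w))

    def expression_track(emotion_coords, smoothing=0.8):
        """Turn a per-frame trajectory in the emotional space into a
        smoothed sequence of blend-weight dictionaries.

        Exponential smoothing keeps expression changes gradual as the
        estimated emotion coordinate moves from frame to frame.
        """
        track, prev = [], None
        for point in emotion_coords:
            w = blend_weights(np.asarray(point, dtype=float))
            if prev is not None:
                w = {k: smoothing * prev[k] + (1 - smoothing) * w[k] for k in w}
            track.append(w)
            prev = w
        return track

    if __name__ == "__main__":
        # A toy trajectory drifting from a calm mood toward joy.
        coords = [(-0.5, 0.5), (-0.2, 0.55), (0.2, 0.6), (0.6, 0.65)]
        for frame, weights in enumerate(expression_track(coords)):
            top = max(weights, key=weights.get)
            print(f"frame {frame}: dominant={top}  " +
                  "  ".join(f"{k}={v:.2f}" for k, v in weights.items()))

    Because the weights change continuously with the coordinate, intermediate points produce mixtures of neighboring expressions, which mirrors the abstract's claim about expressing subtleties between emotions rather than a single hard-classified mood.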

    Original language: English
    Title of host publication: SIGGRAPH Asia 2015 Posters, SA 2015
    Publisher: Association for Computing Machinery, Inc
    ISBN (Print): 9781450339261
    DOIs: https://doi.org/10.1145/2820926.2820935
    Publication status: Published - 2015 Nov 2
    Event: SIGGRAPH Asia, SA 2015 - Kobe, Japan
    Duration: 2015 Nov 2 - 2015 Nov 6

    Other

    Other: SIGGRAPH Asia, SA 2015
    Country: Japan
    City: Kobe
    Period: 15/11/2 - 15/11/6

    Fingerprint

    Animation
    Information use
    Experiments

    ASJC Scopus subject areas

    • Human-Computer Interaction
    • Computer Graphics and Computer-Aided Design
    • Computer Vision and Pattern Recognition

    Cite this

    Asahina, W., Okada, N., Iwamoto, N., Masuda, T., Fukusato, T., & Morishima, S. (2015). Automatic facial animation generation system of dancing characters considering emotion in dance and music. In SIGGRAPH Asia 2015 Posters, SA 2015 (Article a11). Association for Computing Machinery, Inc. https://doi.org/10.1145/2820926.2820935
