Weakly-Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation

Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya Ogata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Synthesizing human movements such as dancing is a flourishing research field with several applications in computer graphics. Recent studies have demonstrated the advantages of deep neural networks (DNNs) for achieving remarkable performance in motion and music tasks with little feature pre-processing effort. However, applying DNNs to generate dance for a piece of music remains challenging, because 1) the DNN needs to generate long sequences while mapping the music input, 2) it needs to constrain the motion beat to the music, and 3) it requires a considerable amount of hand-crafted data. In this study, we propose a weakly supervised deep recurrent method for real-time basic dance generation with the audio power spectrum as input. The proposed model employs convolutional layers and a multilayered Long Short-Term Memory (LSTM) network to process the audio input; another deep LSTM layer then decodes the target dance sequence. Notably, this end-to-end approach 1) uses an auto-conditioned decoder configuration that reduces the accumulation of feedback error over long dance sequences, 2) uses a contrastive cost function to regulate the mapping between the music and motion beats, and 3) trains with weak labels generated from the motion beat, reducing the amount of hand-crafted data required. We evaluate the proposed network on i) the similarity between the generated motion and the baseline dancer's motion, measured with cross entropy over long dance sequences, and ii) the timing accuracy between the music and motion beats, measured with an F-measure. Experimental results reveal that, after training on a small dataset, the model generates basic dance steps with low cross entropy and maintains an F-measure score similar to that of the baseline dancer.
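The abstract describes the overall architecture (a convolutional plus multilayered LSTM audio encoder, an auto-conditioned deep LSTM pose decoder, and a contrastive beat-alignment loss) but, as an abstract, gives no implementation details. The following PyTorch code is a minimal, hypothetical sketch of such a pipeline; the layer sizes, pose dimensionality, auto-conditioning ratio, and contrastive margin are illustrative assumptions, not the authors' actual settings.

# Hypothetical sketch of the pipeline outlined in the abstract: a CNN +
# multilayered LSTM encodes the audio power spectrum, a deep LSTM decodes
# joint positions with auto-conditioning, and a contrastive loss ties the
# music beat to the motion beat. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DanceStepGenerator(nn.Module):
    def __init__(self, n_freq_bins=129, pose_dim=69, hidden=512):
        super().__init__()
        # Convolutional front-end over power-spectrum frames.
        self.conv = nn.Sequential(
            nn.Conv1d(n_freq_bins, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Multilayered LSTM encoder for the audio context.
        self.encoder = nn.LSTM(256, hidden, num_layers=3, batch_first=True)
        # Deep LSTM decoder; its input is the audio context concatenated
        # with the previous pose (ground truth or self-generated).
        self.decoder = nn.LSTM(hidden + pose_dim, hidden,
                               num_layers=3, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, spectrum, target_poses, cond_ratio=0.0):
        # spectrum: (B, T, n_freq_bins); target_poses: (B, T, pose_dim)
        h = self.conv(spectrum.transpose(1, 2)).transpose(1, 2)
        ctx, _ = self.encoder(h)
        poses = []
        prev = torch.zeros_like(target_poses[:, 0])
        state = None
        for t in range(ctx.size(1)):
            step_in = torch.cat([ctx[:, t], prev], dim=-1).unsqueeze(1)
            o, state = self.decoder(step_in, state)
            pose = self.out(o.squeeze(1))
            poses.append(pose)
            # Auto-conditioning: occasionally feed back the model's own
            # output instead of the ground truth to curb the accumulation
            # of feedback error over long generated sequences.
            use_own = torch.rand(()) < cond_ratio
            prev = pose.detach() if use_own else target_poses[:, t]
        return torch.stack(poses, dim=1), ctx


def contrastive_beat_loss(audio_feat, motion_feat, same_beat, margin=1.0):
    # audio_feat, motion_feat: (B, D) projections into a shared embedding
    # space (assumed); same_beat: (B,) weak labels, 1 when the motion beat
    # coincides with the music beat, 0 otherwise. Matching pairs are pulled
    # together, non-matching pairs pushed apart by at least `margin`.
    d = F.pairwise_distance(audio_feat, motion_feat)
    return torch.mean(same_beat * d.pow(2) +
                      (1 - same_beat) * F.relu(margin - d).pow(2))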

Original language: English
Title of host publication: 2019 International Joint Conference on Neural Networks, IJCNN 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728119854
DOI: 10.1109/IJCNN.2019.8851872
Publication status: Published - 2019 Jul
Event: 2019 International Joint Conference on Neural Networks, IJCNN 2019 - Budapest, Hungary
Duration: 2019 Jul 14 - 2019 Jul 19

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2019-July

Conference

Conference: 2019 International Joint Conference on Neural Networks, IJCNN 2019
Country: Hungary
City: Budapest
Period: 19/7/14 - 19/7/19

Fingerprint

  • Recurrent neural networks
  • Entropy
  • Computer graphics
  • Power spectrum
  • Cost functions
  • Labels
  • Deep neural networks
  • Feedback
  • Processing
  • Long short-term memory

Keywords

  • Contrastive loss
  • Dance generation
  • Deep recurrent networks

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Yalta, N., Watanabe, S., Nakadai, K., & Ogata, T. (2019). Weakly-Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation. In 2019 International Joint Conference on Neural Networks, IJCNN 2019 [8851872] (Proceedings of the International Joint Conference on Neural Networks; Vol. 2019-July). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN.2019.8851872
