DanceReProducer: An automatic mashup music video generation system by reusing dance video clips on the web

Tomoyasu Nakano, Sora Murofushi, Masataka Goto, Shigeo Morishima

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    13 Citations (Scopus)

    Abstract

    We propose a dance video authoring system, DanceReProducer, that can automatically generate a dance video clip appropriate to a given piece of music by segmenting and concatenating existing dance video clips. In this paper, we focus on the reuse of ever-increasing user-generated dance video clips on a video sharing web service. In a video clip consisting of music (audio signals) and image sequences (video frames), the image sequences are often synchronized with or related to the music. Such relationships are diverse in different video clips, but were not dealt with by previous methods for automatic music video generation. Our system employs machine learning and beat tracking techniques to model these relationships. To generate new music video clips, short image sequences that have been previously extracted from other music clips are stretched and concatenated so that the emerging image sequence matches the rhythmic structure of the target song. Besides automatically generating music videos, DanceReProducer offers a user interface in which a user can interactively change image sequences just by choosing different candidates. This way people with little knowledge or experience in MAD movie generation can interactively create personalized video clips.
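    The abstract's core generation step — stretching short image sequences so that each spans one beat interval of the target song — can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the segment pool, the `stretch_to_beats` helper, and the nearest-duration selection rule are all illustrative assumptions; the actual system uses learned music-video relationship models to choose candidates.

    ```python
    def stretch_to_beats(segments, beat_times):
        """Pick a segment for each beat interval and compute its stretch factor.

        segments   -- list of (name, native_duration_seconds) candidate clips
        beat_times -- ascending beat times of the target song, in seconds
        Returns a list of (name, stretch_factor) pairs, one per interval.
        """
        plan = []
        # Each consecutive pair of beat times defines one interval to fill.
        for start, end in zip(beat_times, beat_times[1:]):
            interval = end - start
            # Illustrative selection rule: prefer the segment whose native
            # duration needs the least stretching to fit the interval.
            name, duration = min(segments, key=lambda s: abs(s[1] - interval))
            plan.append((name, interval / duration))
        return plan

    plan = stretch_to_beats(
        segments=[("clipA", 0.5), ("clipB", 1.0)],
        beat_times=[0.0, 0.5, 1.5, 2.0],
    )
    ```

    In this toy run, the 0.5 s intervals take `clipA` unstretched and the 1.0 s interval takes `clipB` unstretched; in general the returned factors would drive a time-stretching step before concatenation.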

    Original language: English
    Title of host publication: Proceedings of the 8th Sound and Music Computing Conference, SMC 2011
    Publisher: Sound and Music Computing Network
    Publication status: Published - 2011
    Event: 8th Sound and Music Computing Conference, SMC 2011, Padova
    Duration: 2011 Jul 6 to 2011 Jul 9

    ASJC Scopus subject areas

    • Computer Science (all)

    Cite this

    Nakano, T., Murofushi, S., Goto, M., & Morishima, S. (2011). DanceReProducer: An automatic mashup music video generation system by reusing dance video clips on the web. In Proceedings of the 8th Sound and Music Computing Conference, SMC 2011. Sound and Music Computing Network.

