DanceReProducer: An automatic mashup music video generation system by reusing dance video clips on the web

Tomoyasu Nakano, Sora Murofushi, Masataka Goto, Shigeo Morishima

Research output: Paper › peer-review

14 Citations (Scopus)

Abstract

We propose a dance video authoring system, DanceReProducer, that can automatically generate a dance video clip appropriate to a given piece of music by segmenting and concatenating existing dance video clips. In this paper, we focus on the reuse of ever-increasing user-generated dance video clips on a video sharing web service. In a video clip consisting of music (audio signals) and image sequences (video frames), the image sequences are often synchronized with or related to the music. Such relationships are diverse in different video clips, but were not dealt with by previous methods for automatic music video generation. Our system employs machine learning and beat tracking techniques to model these relationships. To generate new music video clips, short image sequences that have been previously extracted from other music clips are stretched and concatenated so that the resulting image sequence matches the rhythmic structure of the target song. Besides automatically generating music videos, DanceReProducer offers a user interface in which a user can interactively change image sequences just by choosing different candidates. This way, people with little knowledge or experience in MAD movie generation can interactively create personalized video clips.
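The abstract describes stretching pre-extracted image sequences so that they line up with the beats of the target song. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes a hypothetical pool of pre-cut segments with known durations and a naive round-robin candidate choice, and simply computes the time-stretch ratio each segment would need to fill one beat interval of the target song.

```python
# Illustrative sketch only (not DanceReProducer itself): align a sequence of
# hypothetical video segments to the beat grid of a target song by computing
# per-segment time-stretch ratios.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Segment:
    clip_id: str       # source video clip the segment was cut from (hypothetical)
    duration: float    # original length of the segment in seconds


def plan_beat_aligned_sequence(beat_times: List[float],
                               pool: List[Segment]) -> List[Tuple[Segment, float]]:
    """Assign one pool segment per beat interval and return
    (segment, stretch_ratio) pairs; ratio > 1 means the segment is slowed down."""
    plan = []
    for i in range(len(beat_times) - 1):
        interval = beat_times[i + 1] - beat_times[i]   # target duration of this beat
        seg = pool[i % len(pool)]                      # naive placeholder for candidate selection
        plan.append((seg, interval / seg.duration))    # stretch so the segment fills the interval
    return plan


if __name__ == "__main__":
    beats = [0.0, 0.5, 1.0, 1.5, 2.0]                  # e.g. a 120 BPM target song
    pool = [Segment("clipA", 0.45), Segment("clipB", 0.60)]
    for seg, ratio in plan_beat_aligned_sequence(beats, pool):
        print(f"{seg.clip_id}: stretch x{ratio:.2f}")
```

In the actual system, the candidate for each interval is chosen by the learned music-to-image relationship model and can be overridden interactively by the user; the round-robin choice above only stands in for that selection step.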

Original language: English
Publication status: Published - 2011 Jan 1
Event: 8th Sound and Music Computing Conference, SMC 2011 - Padova, Italy
Duration: 2011 Jul 6 – 2011 Jul 9

Conference

Conference: 8th Sound and Music Computing Conference, SMC 2011
Country/Territory: Italy
City: Padova
Period: 11/7/6 – 11/7/9

ASJC Scopus subject areas

  • Computer Science (all)

