We propose a dance video authoring system, DanceReProducer, that can automatically generate a dance video clip appropriate to a given piece of music by segmenting and concatenating existing dance video clips. In this paper, we focus on the reuse of the ever-increasing number of user-generated dance video clips on video sharing web services. In a video clip consisting of music (audio signals) and image sequences (video frames), the image sequences are often synchronized with or related to the music. Such relationships are diverse across video clips, but have not been addressed by previous methods for automatic music video generation. Our system employs machine learning and beat tracking techniques to model these relationships. To generate a new music video clip, short image sequences previously extracted from other video clips are stretched and concatenated so that the resulting image sequence matches the rhythmic structure of the target song. Besides automatically generating music videos, DanceReProducer offers a user interface in which a user can interactively change image sequences just by choosing different candidates. This way, people with little knowledge or experience in MAD movie generation can interactively create personalized video clips.
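The beat-synchronous stretching described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the linear-interpolation scheme are assumptions, showing only the general idea of remapping a source clip's frames so that its detected beats land on the target song's beats:

```python
# Hypothetical sketch (not the DanceReProducer implementation): resample a
# clip's frame indices so that each beat detected in the source clip is
# aligned to the corresponding beat of the target song.

def stretch_to_beats(num_frames, fps, src_beats, dst_beats):
    """Return, for each output frame time, the source frame index to display.

    num_frames -- number of frames in the source clip
    fps        -- frame rate of both source and output
    src_beats  -- beat times (seconds) detected in the source clip
    dst_beats  -- beat times (seconds) of the target song (same count)
    """
    assert len(src_beats) == len(dst_beats) >= 2
    duration = dst_beats[-1] - dst_beats[0]
    n_out = int(duration * fps)
    out = []
    for i in range(n_out):
        t = dst_beats[0] + i / fps
        # locate the target beat interval containing time t
        k = max(j for j in range(len(dst_beats) - 1) if dst_beats[j] <= t)
        # relative position within that interval
        a = (t - dst_beats[k]) / (dst_beats[k + 1] - dst_beats[k])
        # map to the corresponding time in the source clip
        src_t = src_beats[k] + a * (src_beats[k + 1] - src_beats[k])
        out.append(min(num_frames - 1, int(src_t * fps)))
    return out
```

For example, a clip with beats at 0, 1, and 2 seconds played against target beats at 0, 2, and 4 seconds is uniformly slowed to half speed: each source frame is held for two output frames.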
Publication status: Published - 1 January 2011
Event: 8th Sound and Music Computing Conference, SMC 2011 - Padova, Italy
Duration: 6 July 2011 → 9 July 2011
ASJC Scopus subject areas
- Computer Science (all)