Musicmean: Fusion-based music generation

Tatsunori Hirai, Shoto Sasaki, Shigeo Morishima

Research output: Conference contribution (chapter in book/report/conference proceeding)

3 Citations (Scopus)

Abstract

In this paper, we propose MusicMean, a system that fuses existing songs to create an "in-between song," such as an "average song," by calculating the average pitch of musical notes and the occurrence frequency of drum elements across multiple MIDI songs. We generate an in-between song by defining rules based on simple music theory. The system enables the interactive generation of in-between songs, representing a new form of interaction between humans and digital content. Using MusicMean, users can create personalized songs by fusing their favorite songs.

Original language: English
Title of host publication: Proceedings of the 12th International Conference in Sound and Music Computing, SMC 2015
Publisher: Music Technology Research Group, Department of Computer Science, Maynooth University
Pages: 323-327
Number of pages: 5
ISBN (Electronic): 9780992746629
Publication status: Published - 2015
Externally published: Yes
Event: 12th International Conference on Sound and Music Computing, SMC 2015 - Maynooth, Ireland
Duration: 2015 Jul 30 - 2015 Aug 1

Other

Other: 12th International Conference on Sound and Music Computing, SMC 2015
Country: Ireland
City: Maynooth
Period: 15/7/30 - 15/8/1

ASJC Scopus subject areas

  • Music
  • Computer Science Applications
  • Media Technology


Cite this

Hirai, T., Sasaki, S., & Morishima, S. (2015). Musicmean: Fusion-based music generation. In Proceedings of the 12th International Conference in Sound and Music Computing, SMC 2015 (pp. 323-327). Music Technology Research Group, Department of Computer Science, Maynooth University.