Musicmean

Fusion-based music generation

Tatsunori Hirai, Shoto Sasaki, Shigeo Morishima

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

In this paper, we propose MusicMean, a system that fuses existing songs to create an "in-between song," such as an "average song," by calculating the average acoustic pitch of musical notes and the occurrence frequency of drum elements across multiple MIDI songs. We generate an in-between song for generative music by defining rules based on simple music theory. The system realizes interactive generation of in-between songs, which represents a new form of interaction between humans and digital content. Using MusicMean, users can create personalized songs by fusing their favorite songs.
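As a rough illustration of the fusion the abstract describes (averaging note pitches and thresholding drum-element occurrence frequencies across MIDI songs), the sketch below shows one plausible reading. It is not the authors' implementation: the `(onset, midi_pitch)` note representation, the pairwise time alignment, and the 0.5 drum-occurrence threshold are all assumptions made for demonstration.

```python
# Illustrative sketch only, not the paper's actual method:
# melodies are fused by averaging the MIDI pitch of time-aligned note
# pairs; drum patterns are fused by keeping a hit at each step when it
# occurs in at least half of the input songs.

def average_melody(notes_a, notes_b):
    """Average the MIDI pitch of time-aligned note pairs from two songs."""
    return [(t, (p_a + p_b) // 2)
            for (t, p_a), (_, p_b) in zip(notes_a, notes_b)]

def average_drums(patterns):
    """Keep a drum hit at a step if its occurrence frequency is >= 0.5."""
    n = len(patterns)
    return [1 if sum(step) / n >= 0.5 else 0 for step in zip(*patterns)]

melody_a = [(0, 60), (1, 64), (2, 67)]  # C4, E4, G4
melody_b = [(0, 62), (1, 65), (2, 69)]  # D4, F4, A4
print(average_melody(melody_a, melody_b))  # [(0, 61), (1, 64), (2, 68)]

kick_a = [1, 0, 1, 0]
kick_b = [1, 1, 0, 0]
print(average_drums([kick_a, kick_b]))     # [1, 1, 1, 0]
```

The averaged pitches fall midway between the two inputs, which is the sense in which the output is an "in-between song"; the paper's rules based on music theory would further constrain such averages (e.g., to notes in a valid scale).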

Original language: English
Title of host publication: Proceedings of the 12th International Conference in Sound and Music Computing, SMC 2015
Publisher: Music Technology Research Group, Department of Computer Science, Maynooth University
Pages: 323-327
Number of pages: 5
ISBN (Electronic): 9780992746629
Publication status: Published - 2015
Externally published: Yes
Event: 12th International Conference on Sound and Music Computing, SMC 2015 - Maynooth, Ireland
Duration: 2015 Jul 30 to 2015 Aug 1



ASJC Scopus subject areas

  • Music
  • Computer Science Applications
  • Media Technology

Cite this

Hirai, T., Sasaki, S., & Morishima, S. (2015). Musicmean: Fusion-based music generation. In Proceedings of the 12th International Conference in Sound and Music Computing, SMC 2015 (pp. 323-327). Music Technology Research Group, Department of Computer Science, Maynooth University.
