Expressive humanoid robot for automatic accompaniment

Guangyu Xia, Mao Kawai, Kei Matsuki, Mutian Fu, Sarah Cosentino, Gabriele Trovato, Roger Dannenberg, Salvatore Sessa, Atsuo Takanishi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a music-robotic system capable of performing an accompaniment for a musician and reacting to the human performance with gestural and facial expression in real time. This work can be seen as a marriage between social robotics and computer accompaniment systems, aiming to create more musical, interactive, and engaging performances between humans and machines. We also conduct subjective evaluations with audiences to validate the joint effects of robot expression and automatic accompaniment. Our results show that robot embodiment and expression significantly improve subjective ratings of the automatic accompaniment. Counterintuitively, no such improvement appears when the machine performs a fixed sequence and the human musician simply follows the machine. To the best of our knowledge, this is the first interactive music performance between a human musician and a humanoid music robot with a systematic subjective evaluation.

Original language: English
Title of host publication: SMC 2016 - 13th Sound and Music Computing Conference, Proceedings
Editors: Rolf Grossmann, Georg Hajdu
Publisher: Zentrum für Mikrotonale Musik und Multimediale Komposition (ZM4), Hochschule für Musik und Theater
Pages: 506-511
Number of pages: 6
ISBN (Electronic): 9783000537004
Publication status: Published - 2019
Event: 13th Sound and Music Computing Conference, SMC 2016 - Hamburg, Germany
Duration: 2019 Aug 31 - 2019 Sep 3

Publication series

Name: SMC 2016 - 13th Sound and Music Computing Conference, Proceedings

Conference

Conference: 13th Sound and Music Computing Conference, SMC 2016
Country: Germany
City: Hamburg
Period: 19/8/31 - 19/9/3

ASJC Scopus subject areas

  • Music
  • Computer Science Applications
  • Media Technology


  • Cite this

    Xia, G., Kawai, M., Matsuki, K., Fu, M., Cosentino, S., Trovato, G., Dannenberg, R., Sessa, S., & Takanishi, A. (2019). Expressive humanoid robot for automatic accompaniment. In R. Grossmann, & G. Hajdu (Eds.), SMC 2016 - 13th Sound and Music Computing Conference, Proceedings (pp. 506-511). (SMC 2016 - 13th Sound and Music Computing Conference, Proceedings). Zentrum für Mikrotonale Musik und Multimediale Komposition (ZM4), Hochschule für Musik und Theater.