Implementation of a musical performance interaction system for the Waseda flutist robot

Combining visual and acoustic sensor input based on sequential Bayesian filtering

Klaus Petersen, Jorge Solis, Atsuo Takanishi

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    1 Citation (Scopus)

    Abstract

    The flutist robot WF-4RIV at Waseda University is able to play the flute at the level of an intermediate human player. So far, the robot has been able to play in a statically sequenced duet with another musician, communicating only by maintaining eye contact. To extend the robot's interactive capabilities, we have described in previous publications the implementation of a Music-based Interaction System (MbIS). The purpose of this system is to combine information from the robot's visual and aural sensor signal processing systems to enable musical communication with a partner musician. In this paper we focus on the part of the MbIS that maps the information from the sensor processing systems to meaningful modulation of the robot's musical output. We propose a two-skill-level approach so that musicians of different ability levels can interact with the robot. When interacting with the flutist robot, the device's physical capabilities and limitations need to be taken into account. In the beginner-level interaction stage, the user's input to the robot is filtered to adjust it to the state of the robot's breathing system. The advanced-level stage uses both the aural and visual sensor processing information: in a teaching phase, the musician teaches the robot a tone sequence (by actually performing it) and associates it with a particular instrument movement; in a performance phase, the musician can trigger these taught sequences by performing the corresponding movements. Experiments to validate the functionality of the MbIS have been performed, and the results are presented in this paper.
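
    The abstract gives no implementation detail for the sensor fusion named in the subtitle, so the following is a minimal sketch, assuming a one-dimensional gesture state and Gaussian sensor noise, of how a sequential Bayesian (particle) filter could combine a visual and an acoustic measurement into one estimate. Every name, function, and noise parameter in it is hypothetical and not taken from the paper.

    # Illustrative sketch only -- not code from the paper. It shows how a sequential
    # Bayesian (particle) filter can fuse a visual observation (e.g. an instrument-angle
    # estimate from the camera) and an acoustic observation (e.g. a pitch-tracker value
    # mapped to the same scale) into one gesture-state estimate.

    import numpy as np

    rng = np.random.default_rng(0)

    N_PARTICLES = 500
    PROCESS_STD = 0.05    # assumed random-walk drift of the gesture state
    VISUAL_STD = 0.10     # assumed noise of the visual (angle) measurement
    ACOUSTIC_STD = 0.20   # assumed noise of the acoustic (pitch-derived) measurement

    def predict(particles):
        """Propagate particles with a simple random-walk process model."""
        return particles + rng.normal(0.0, PROCESS_STD, size=particles.shape)

    def update(particles, weights, z_visual, z_acoustic):
        """Re-weight particles with independent Gaussian likelihoods per sensor."""
        lik_v = np.exp(-0.5 * ((z_visual - particles) / VISUAL_STD) ** 2)
        lik_a = np.exp(-0.5 * ((z_acoustic - particles) / ACOUSTIC_STD) ** 2)
        weights = weights * lik_v * lik_a
        weights += 1e-300                      # guard against numerical collapse
        return weights / weights.sum()

    def resample(particles, weights):
        """Multinomial resampling to concentrate particles on likely states."""
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # Demo: track a slowly rising gesture state from noisy visual/acoustic readings.
    particles = rng.uniform(-1.0, 1.0, N_PARTICLES)
    weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    true_state = 0.0
    for t in range(50):
        true_state += 0.02                                  # partner slowly raises the instrument
        z_vis = true_state + rng.normal(0.0, VISUAL_STD)    # camera-based estimate
        z_aco = true_state + rng.normal(0.0, ACOUSTIC_STD)  # audio-based estimate
        particles = predict(particles)
        weights = update(particles, weights, z_vis, z_aco)
        particles, weights = resample(particles, weights)
        estimate = np.sum(particles * weights)              # fused estimate for the mapping stage

    In the MbIS described above, a fused estimate of this kind would feed the mapping stage that modulates the robot's musical output (beginner level) or triggers the taught tone sequences (advanced level); the teaching and triggering logic itself is not shown in this sketch.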

    Original language: English
    Title of host publication: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings
    Pages: 2283-2288
    Number of pages: 6
    DOI: 10.1109/IROS.2010.5652576
    Publication status: Published - 2010
    Event: 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010, Taipei
    Duration: 2010 Oct 18 - 2010 Oct 22

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Human-Computer Interaction
    • Control and Systems Engineering

    Cite this

    Petersen, K., Solis, J., & Takanishi, A. (2010). Implementation of a musical performance interaction system for the Waseda flutist robot: Combining visual and acoustic sensor input based on sequential Bayesian filtering. In IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings (pp. 2283-2288). [5652576] https://doi.org/10.1109/IROS.2010.5652576

    @inproceedings{71f96a6644f34abe87e90ba0cda7b34a,
    title = "Implementation of a musical performance interaction system for the waseda flutist robot: Combining visual and acoustic sensor input based on sequential bayesian filtering",
    abstract = "The flutist robot WF-4RIV at Waseda University is able to play the flute at the level of an intermediate human player. So far the robot has been able to play in a statically sequenced duet with another musician, individually communicating only by keeping eye-contact. To extend the interactive capabilities of the flutist robot, we have in previous publications described the implementation of a Music-based Interaction System (MbIS). The purpose of this system is to combine information from the robot's visual and aural sensor input signal processing systems to enable musical communication with a partner musician. In this paper we focus on that part of the MbIS that is responsible for mapping the information from the sensor processing system to generate meaningful modulation of the musical output of the robot. We propose a two skill level approach to enable musicians of different ability levels to interact with the robot. When interacting with the flutist robot the device's physical capabilities / limitations need to be taken into account. In the beginner level interaction system the user's input to the robot is filtered in order to adjust it to the state of the robot's breathing system. The advanced level stage uses both the aural and visual sensor processing information. In a teaching phase the musician teaches the robot a tone sequence (by actually performing the sequence) that he relates to a certain instrument movement. In a performance phase, the musician can trigger these taught sequences by performing the according movements. Experiments to validate the functionality of the MbIS approach have been performed and the results are presented in this paper.",
    author = "Klaus Petersen and Jorge Solis and Atsuo Takanishi",
    year = "2010",
    doi = "10.1109/IROS.2010.5652576",
    language = "English",
    isbn = "9781424466757",
    pages = "2283--2288",
    booktitle = "IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings",

    }
