Development of a robot quizmaster with auditory functions for speech-based multiparty interaction

Izaya Nishimuta, Kazuyoshi Yoshii, Katsutoshi Itoyama, Hiroshi G. Okuno

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    4 Citations (Scopus)

    Abstract

    This paper presents a robot quizmaster with auditory functions (i.e., ears) for moderating a multiplayer quiz game. The most basic form of oral interaction in a quiz game is that the quizmaster reads a question aloud and each player may answer whenever the answer comes to mind. A critical problem in such interaction is that when multiple players speak almost simultaneously to answer, it is difficult for a 'human' quizmaster to recognize the overlapping answers and judge the correctness of each one. To avoid this problem, players have conventionally been required to push a button, raise a hand, or say 'Yes' just to obtain the right to answer before actually answering. This requirement, however, inhibits natural oral interaction. In this paper we propose a 'robot' quizmaster that can identify the player who answers a question correctly first, even when multiple players utter answers almost at the same time. Since the robot uses its own microphones (ears) embedded in its head, individual players are not required to wear small pin microphones close to their mouths. To localize, separate, and recognize overlapping utterances captured by the ears, we use the robot audition software HARK and the automatic speech recognizer Julius. Experimental results showed the effectiveness of our approach.
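    The judging step described in the abstract — pick the player who uttered the correct answer earliest among overlapping, separated utterances — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `Utterance` structure, the `judge` function, and the exact string-matching rule are assumptions; in the actual system, HARK performs the localization/separation and Julius produces the transcripts.

    ```python
    # Hypothetical sketch of the first-correct-answerer logic (all names are
    # illustrative; the authors' actual HARK/Julius pipeline differs).
    # Each separated utterance carries the player it was localized to, its
    # onset time, and the recognized text.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Utterance:
        player: str       # player identified from the localized direction
        onset: float      # utterance start time in seconds
        transcript: str   # text produced by the speech recognizer

    def judge(utterances: list[Utterance], correct_answer: str) -> Optional[str]:
        """Return the player who uttered the correct answer first, or None."""
        correct = [u for u in utterances
                   if u.transcript.strip().lower() == correct_answer.lower()]
        if not correct:
            return None
        # Earliest onset among the correct answers wins.
        return min(correct, key=lambda u: u.onset).player

    # Three players answer almost simultaneously; one is wrong, one is late.
    answers = [
        Utterance("player_a", 1.32, "mozart"),
        Utterance("player_b", 1.28, "beethoven"),   # earliest, but wrong
        Utterance("player_c", 1.45, "mozart"),
    ]
    print(judge(answers, "Mozart"))  # → player_a
    ```

    Note the design point this makes concrete: ranking by utterance onset rather than by recognition completion time is what allows the earliest correct speaker to win even when answers overlap.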

    Original language: English
    Title of host publication: 2014 IEEE/SICE International Symposium on System Integration, SII 2014
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 328-333
    Number of pages: 6
    ISBN (Print): 9781479969449
    DOIs: 10.1109/SII.2014.7028059
    Publication status: Published - 2014 Jan 30
    Event: 7th IEEE/SICE International Symposium on System Integration, SII 2014 - Tokyo, Japan
    Duration: 2014 Dec 13 - 2014 Dec 15

    Other

    Other: 7th IEEE/SICE International Symposium on System Integration, SII 2014
    Country: Japan
    City: Tokyo
    Period: 14/12/13 - 14/12/15

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Computer Networks and Communications
    • Information Systems

    Cite this

    Nishimuta, I., Yoshii, K., Itoyama, K., & Okuno, H. G. (2014). Development of a robot quizmaster with auditory functions for speech-based multiparty interaction. In 2014 IEEE/SICE International Symposium on System Integration, SII 2014 (pp. 328-333). [7028059] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/SII.2014.7028059

