Dictation of Multiparty Conversation Considering Speaker Individuality and Turn Taking

Noriyuki Murai, Tetsunori Kobayashi

    Research output: Contribution to journal › Article

    2 Citations (Scopus)

    Abstract

    This paper discusses an algorithm that recognizes multiparty speech with complex turn taking. To recognize a conversation among multiple speakers, it is necessary to know not only what is spoken, as in conventional systems, but also who spoke up to what point. The purpose of this paper is to find a method that solves this problem. The likelihood of turn taking is incorporated into the language model of a continuous speech recognition system, and the speech characteristics of each speaker are represented by a statistical model. On this basis, two algorithms are proposed that estimate the speaker and the speech content simultaneously and in parallel. Recognition experiments on conversations from TV sports news show that the proposed method corrects up to 29.5% of the errors in recognizing the speech content and up to 93.0% of the errors in identifying the speaker.
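
    The abstract describes joint estimation of the speaker sequence and the word sequence, with turn-taking likelihoods folded into the language model. Below is a minimal sketch of that idea, assuming toy models, vocabularies, and scores of my own invention (a joint Viterbi search over (speaker, word) states, with an assumed turn-taking bigram and placeholder acoustic/speaker-GMM scores); it is not the authors' algorithm.

    import itertools
    import math

    # Hypothetical toy setup (assumed, not from the paper).
    SPEAKERS = ["anchor", "commentator"]
    VOCAB = ["tonight", "great", "goal"]

    # Statistical turn-taking model P(next speaker | current speaker):
    # holding the floor is assumed more likely than handing it over.
    P_TURN = {
        ("anchor", "anchor"): 0.8, ("anchor", "commentator"): 0.2,
        ("commentator", "anchor"): 0.3, ("commentator", "commentator"): 0.7,
    }

    def lm_logp(prev_word, word):
        # Word bigram language model; uniform placeholder for brevity.
        return math.log(1.0 / len(VOCAB))

    def joint_viterbi(frames):
        """Decode the best joint (speaker, word) sequence.

        `frames` is a list of dicts mapping (speaker, word) -> log of the
        combined acoustic * speaker-model score for that frame. These are
        assumed precomputed; a real recognizer would evaluate HMMs and
        per-speaker GMMs here.
        """
        states = list(itertools.product(SPEAKERS, VOCAB))
        trellis = [{s: frames[0].get(s, -30.0) for s in states}]
        back = []
        for obs in frames[1:]:
            col, ptr = {}, {}
            for spk, wrd in states:
                best_prev, best_score = None, -math.inf
                for pspk, pwrd in states:
                    # Transition combines turn taking and the word bigram.
                    score = (trellis[-1][(pspk, pwrd)]
                             + math.log(P_TURN[(pspk, spk)])
                             + lm_logp(pwrd, wrd))
                    if score > best_score:
                        best_prev, best_score = (pspk, pwrd), score
                col[(spk, wrd)] = best_score + obs.get((spk, wrd), -30.0)
                ptr[(spk, wrd)] = best_prev
            trellis.append(col)
            back.append(ptr)
        # Trace back the best joint path.
        state = max(trellis[-1], key=trellis[-1].get)
        path = [state]
        for ptr in reversed(back):
            state = ptr[state]
            path.append(state)
        return list(reversed(path))

    # Toy observations: the anchor opens, then the commentator takes the turn.
    frames = [
        {("anchor", "tonight"): -1.0},
        {("commentator", "great"): -1.0, ("anchor", "great"): -2.0},
        {("commentator", "goal"): -1.0},
    ]
    print(joint_viterbi(frames))
    # -> [('anchor', 'tonight'), ('commentator', 'great'), ('commentator', 'goal')]

    Even in this toy, the last frame's strong evidence for the commentator pulls the middle frame's speaker decision along with it through the backtrace, which illustrates the kind of correction that jointly decoding "who" and "what" can provide over frame-by-frame decisions.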

    Original language: English
    Pages (from-to): 103-111
    Number of pages: 9
    Journal: Systems and Computers in Japan
    Volume: 34
    Issue number: 13
    DOIs
    Publication status: Published - 2003 Nov 30

    Keywords

    • GMM
    • MLLR
    • Multiparty conversation
    • Speaker individuality
    • Statistical turn taking model

    ASJC Scopus subject areas

    • Hardware and Architecture
    • Information Systems
    • Theoretical Computer Science
    • Computational Theory and Mathematics
