Speaker verification robust to talking style variation using multiple kernel learning based on conditional entropy minimization

    Research output: Conference contribution

    1 Citation (Scopus)

    Abstract

    We developed a new speaker verification system that is robust to intra-speaker variation. Intra-speaker variation is highly likely to arise from changes in talking style, the period in which an individual speaks, and so on, and such variation is well known to degrade the performance of speaker verification systems. To address this problem, we applied multiple kernel learning (MKL) based on conditional entropy minimization, which forces the data to be compactly aggregated within each speaker class while keeping the different speaker classes far apart from each other. Experimental results showed that the proposed speaker verification system was more robust to intra-speaker variation caused by changes in talking style than the conventional maximum-margin-based system.
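
    As context for the abstract, the sketch below illustrates the general idea of weighting a combination of base kernels so that an estimate of the conditional entropy of the speaker label given the observation is minimized, which tends to pull same-speaker data together and push different speakers apart. The toy data, the leave-one-out kernel-posterior entropy estimate, and the grid search over kernel weights are illustrative assumptions, not the formulation or the optimization procedure used in the paper.

```python
# Minimal, hypothetical sketch: choose non-negative weights for a
# combination of base kernels by minimizing an estimate of the
# conditional entropy H(speaker | feature). Everything here (toy data,
# entropy estimator, grid search) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def make_toy_data(n_per_style=20):
    """3 toy 'speakers', each with two talking styles simulated as two clusters."""
    X, y = [], []
    for spk in range(3):
        center = rng.normal(scale=3.0, size=2)
        for style_shift in (np.zeros(2), np.array([1.0, -1.0])):
            X.append(center + style_shift + 0.5 * rng.normal(size=(n_per_style, 2)))
            y.append(np.full(n_per_style, spk))
    return np.vstack(X), np.concatenate(y)

def rbf_gram(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def conditional_entropy(K, y):
    """Leave-one-out estimate of H(C|X) from a kernel 'posterior' over speakers."""
    n = len(y)
    K = K - np.diag(np.diag(K))  # exclude self-similarity
    H = 0.0
    for i in range(n):
        weights = K[i]
        total = weights.sum() + 1e-12
        p = np.array([weights[y == c].sum() for c in np.unique(y)]) / total
        H -= np.log(p[y[i]] + 1e-12)
    return H / n

X, y = make_toy_data()
base_kernels = [rbf_gram(X, g) for g in (0.01, 0.1, 1.0)]

# Coarse grid search over the simplex of kernel weights.
best_beta, best_H = None, np.inf
grid = np.linspace(0.0, 1.0, 11)
for b0 in grid:
    for b1 in grid:
        if b0 + b1 > 1.0 + 1e-9:
            continue
        beta = np.array([b0, b1, max(0.0, 1.0 - b0 - b1)])
        K = sum(b * Km for b, Km in zip(beta, base_kernels))
        H = conditional_entropy(K, y)
        if H < best_H:
            best_beta, best_H = beta, H

print("kernel weights:", np.round(best_beta, 2), " est. H(C|X):", round(best_H, 3))
```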

    Original language: English
    Host publication title: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
    Pages: 2741-2744
    Number of pages: 4
    Publication status: Published - 2011
    Event: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011 - Florence, Italy
    Duration: 27 Aug 2011 to 31 Aug 2011



    ASJC Scopus subject areas

    • Language and Linguistics
    • Human-Computer Interaction
    • Signal Processing
    • Software
    • Modelling and Simulation

    Cite this

    Ogawa, T., Hino, H., Murata, N., & Kobayashi, T. (2011). Speaker verification robust to talking style variation using multiple kernel learning based on conditional entropy minimization. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH (pp. 2741-2744).