Subspace pursuit method for kernel-log-linear models

Yotaro Kubo, Simon Wiesler, Ralf Schlueter, Hermann Ney, Shinji Watanabe, Atsushi Nakamura, Tetsunori Kobayashi

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    5 Citations (Scopus)

    Abstract

    This paper presents a novel method for reducing the dimensionality of kernel spaces. Recently, to maintain the convexity of training, log-linear models without mixtures have been used as emission probability density functions in hidden Markov models for automatic speech recognition. In that framework, nonlinearly-transformed high-dimensional features are used to achieve the nonlinear classification of the original observation vectors without using mixtures. In this paper, with the goal of using high-dimensional features in kernel spaces, the cutting plane subspace pursuit method proposed for support vector machines is generalized and applied to log-linear models. The experimental results show that the proposed method achieved an efficient approximation of the feature space by using a limited number of basis vectors.
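    The core idea of the abstract — approximating a high-dimensional kernel feature space with a limited number of basis vectors — can be illustrated with a generic Nyström-style sketch. This is not the paper's cutting-plane subspace pursuit algorithm; the RBF kernel, random data, and naive basis selection below are illustrative assumptions only:

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        """Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)

    def nystrom_features(X, basis, gamma=0.5):
        """Explicit low-dimensional features from a limited basis set,
        so that phi(x) . phi(y) approximates k(x, y)."""
        K_bb = rbf_kernel(basis, basis, gamma)   # (m, m) kernel among basis vectors
        K_xb = rbf_kernel(X, basis, gamma)       # (n, m) kernel to basis vectors
        # Symmetric inverse square root of K_bb via eigendecomposition
        w, V = np.linalg.eigh(K_bb)
        w = np.clip(w, 1e-10, None)              # guard against tiny negative eigenvalues
        inv_sqrt = (V / np.sqrt(w)) @ V.T        # K_bb^{-1/2}
        return K_xb @ inv_sqrt                   # (n, m) explicit feature matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    basis = X[:20]                               # only 20 basis vectors for 200 points
    Phi = nystrom_features(X, basis)
    err = np.abs(Phi @ Phi.T - rbf_kernel(X, X)).mean()
    print(Phi.shape, err)
    ```

    The inner products of the 20-dimensional explicit features reproduce the full 200 × 200 kernel matrix with small mean error, which is the sense in which a limited basis can stand in for the full kernel space; the paper's contribution is a principled, cutting-plane-driven way of choosing such a basis for log-linear models rather than taking it from the training set directly.
    
    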

    Original language: English
    Title of host publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
    Pages: 4500-4503
    Number of pages: 4
    DOI: 10.1109/ICASSP.2011.5947354
    ISBN (Print): 9781457705397
    Publication status: Published - 2011
    Event: 36th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011 - Prague
    Duration: 2011 May 22 - 2011 May 27



    Keywords

    • Automatic speech recognition
    • dimensionality reduction
    • kernel method
    • log-linear model
    • subspace method

    ASJC Scopus subject areas

    • Signal Processing
    • Software
    • Electrical and Electronic Engineering

    Cite this

    Kubo, Y., Wiesler, S., Schlueter, R., Ney, H., Watanabe, S., Nakamura, A., & Kobayashi, T. (2011). Subspace pursuit method for kernel-log-linear models. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp. 4500-4503). [5947354] https://doi.org/10.1109/ICASSP.2011.5947354

    @inproceedings{a54bcee6a25e4220819d81d208d0db2c,
    title = "Subspace pursuit method for kernel-log-linear models",
    abstract = "This paper presents a novel method for reducing the dimensionality of kernel spaces. Recently, to maintain the convexity of training, log-linear models without mixtures have been used as emission probability density functions in hidden Markov models for automatic speech recognition. In that framework, nonlinearly-transformed high-dimensional features are used to achieve the nonlinear classification of the original observation vectors without using mixtures. In this paper, with the goal of using high-dimensional features in kernel spaces, the cutting plane subspace pursuit method proposed for support vector machines is generalized and applied to log-linear models. The experimental results show that the proposed method achieved an efficient approximation of the feature space by using a limited number of basis vectors.",
    keywords = "Automatic speech recognition, dimensionality reduction, kernel method, log-linear model, subspace method",
    author = "Yotaro Kubo and Simon Wiesler and Ralf Schlueter and Hermann Ney and Shinji Watanabe and Atsushi Nakamura and Tetsunori Kobayashi",
    year = "2011",
    doi = "10.1109/ICASSP.2011.5947354",
    language = "English",
    isbn = "9781457705397",
    pages = "4500--4503",
    booktitle = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",

    }
