Support vector machines with different norms: Motivation, formulations and results

João Pedro Pedroso, Noboru Murata

    Research output: Contribution to journal › Article

    28 Citations (Scopus)

    Abstract

    We introduce two formulations for training support vector machines, based on considering the L1 and L∞ norms instead of the currently used L2 norm, and maximising the margin between the separating hyperplane and each data set using L1 and L∞ distances. We exploit the geometrical properties of these different norms, and propose what kind of results should be expected for them. Formulations in mathematical programming for linear problems corresponding to L1 and L∞ norms are also provided, for both the separable and non-separable cases. We report results obtained for some standard benchmark problems, which confirmed that the performance of all the formulations is similar. As expected, the CPU time required for machines solvable with linear programming is much shorter.
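    The abstract's key point, that the L1 and L∞ norm formulations reduce to linear programs, can be illustrated with a minimal sketch. The code below solves the separable 1-norm SVM (minimising ||w||_1, which by norm duality maximises the L∞ margin) via `scipy.optimize.linprog`. The toy data and variable names are illustrative assumptions, not the paper's exact formulation or experiments.

    ```python
    # Sketch: 1-norm SVM as a linear program (separable case).
    # min sum(u)  s.t.  y_i (w . x_i + b) >= 1,  -u <= w <= u
    # (u_j are auxiliary variables encoding |w_j|; illustrative only.)
    import numpy as np
    from scipy.optimize import linprog

    def l1_svm_separable(X, y):
        n, d = X.shape
        # decision vector z = [w (d entries), b (1 entry), u (d entries)]
        c = np.concatenate([np.zeros(d + 1), np.ones(d)])
        # margin constraints rewritten as: -y_i (w . x_i + b) <= -1
        A1 = np.hstack([-y[:, None] * X, -y[:, None], np.zeros((n, d))])
        # |w_j| <= u_j encoded as  w_j - u_j <= 0  and  -w_j - u_j <= 0
        A2 = np.hstack([np.eye(d), np.zeros((d, 1)), -np.eye(d)])
        A3 = np.hstack([-np.eye(d), np.zeros((d, 1)), -np.eye(d)])
        A_ub = np.vstack([A1, A2, A3])
        b_ub = np.concatenate([-np.ones(n), np.zeros(2 * d)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (2 * d + 1))
        w, b = res.x[:d], res.x[d]
        return w, b

    # Hypothetical linearly separable toy data
    X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    w, b = l1_svm_separable(X, y)
    print(np.sign(X @ w + b))  # should match y
    ```

    The non-separable case would add per-point slack variables to the objective, and the L∞-norm machine would instead minimise max_j |w_j|; both remain linear programs, which is the source of the CPU-time advantage the abstract reports.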

    Original language: English
    Pages (from-to): 1263-1272
    Number of pages: 10
    Journal: Pattern Recognition Letters
    Volume: 22
    Issue number: 12
    DOI: 10.1016/S0167-8655(01)00071-X
    Publication status: Published - 2001

    Keywords

    • Linear programming
    • Support vector machines

    ASJC Scopus subject areas

    • Computer Vision and Pattern Recognition
    • Signal Processing
    • Electrical and Electronic Engineering

    Cite this

    Support vector machines with different norms: Motivation, formulations and results. / Pedroso, João Pedro; Murata, Noboru.

    In: Pattern Recognition Letters, Vol. 22, No. 12, 2001, p. 1263-1272.

    Research output: Contribution to journal › Article

    @article{d66f73a1c70846afa9755123bced6e67,
    title = "Support vector machines with different norms: Motivation, formulations and results",
    abstract = "We introduce two formulations for training support vector machines, based on considering the L1 and L∞ norms instead of the currently used L2 norm, and maximising the margin between the separating hyperplane and each data sets using L1 and L∞ distances. We exploit the geometrical properties of these different norms, and propose what kind of results should be expected for them. Formulations in mathematical programming for linear problems corresponding to L1 and L∞ norms are also provided, for both the separable and non-separable cases. We report results obtained for some standard benchmark problems, which confirmed that the performance of all the formulations is similar. As expected, the CPU time required for machines solvable with linear programming is much shorter.",
    keywords = "Linear programming, Support vector machines",
    author = "Pedroso, {Jo{\~a}o Pedro} and Noboru Murata",
    year = "2001",
    doi = "10.1016/S0167-8655(01)00071-X",
    language = "English",
    volume = "22",
    pages = "1263--1272",
    journal = "Pattern Recognition Letters",
    issn = "0167-8655",
    publisher = "Elsevier",
    number = "12",
    }

    TY - JOUR

    T1 - Support vector machines with different norms

    T2 - Motivation, formulations and results

    AU - Pedroso, João Pedro

    AU - Murata, Noboru

    PY - 2001

    Y1 - 2001

    N2 - We introduce two formulations for training support vector machines, based on considering the L1 and L∞ norms instead of the currently used L2 norm, and maximising the margin between the separating hyperplane and each data sets using L1 and L∞ distances. We exploit the geometrical properties of these different norms, and propose what kind of results should be expected for them. Formulations in mathematical programming for linear problems corresponding to L1 and L∞ norms are also provided, for both the separable and non-separable cases. We report results obtained for some standard benchmark problems, which confirmed that the performance of all the formulations is similar. As expected, the CPU time required for machines solvable with linear programming is much shorter.

    AB - We introduce two formulations for training support vector machines, based on considering the L1 and L∞ norms instead of the currently used L2 norm, and maximising the margin between the separating hyperplane and each data sets using L1 and L∞ distances. We exploit the geometrical properties of these different norms, and propose what kind of results should be expected for them. Formulations in mathematical programming for linear problems corresponding to L1 and L∞ norms are also provided, for both the separable and non-separable cases. We report results obtained for some standard benchmark problems, which confirmed that the performance of all the formulations is similar. As expected, the CPU time required for machines solvable with linear programming is much shorter.

    KW - Linear programming

    KW - Support vector machines

    UR - http://www.scopus.com/inward/record.url?scp=0034859938&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=0034859938&partnerID=8YFLogxK

    U2 - 10.1016/S0167-8655(01)00071-X

    DO - 10.1016/S0167-8655(01)00071-X

    M3 - Article

    AN - SCOPUS:0034859938

    VL - 22

    SP - 1263

    EP - 1272

    JO - Pattern Recognition Letters

    JF - Pattern Recognition Letters

    SN - 0167-8655

    IS - 12

    ER -