Abstract
We introduce two formulations for training support vector machines, based on the L1 and L∞ norms instead of the currently used L2 norm: the margin between the separating hyperplane and each data set is maximised using L1 and L∞ distances. We exploit the geometrical properties of these norms and discuss what kind of results should be expected from them. Mathematical programming formulations of the linear problems corresponding to the L1 and L∞ norms are also provided, for both the separable and non-separable cases. We report results obtained on some standard benchmark problems, which confirm that all the formulations perform similarly. As expected, the CPU time required by the machines solvable with linear programming is much shorter.
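As a minimal sketch of why these variants admit linear programming (in generic soft-margin notation, not necessarily the paper's): maximising the margin measured in the L∞ distance corresponds to minimising the dual norm ‖w‖₁ of the weight vector, which becomes linear after splitting w into non-negative parts w⁺ and w⁻:

```latex
% Hedged sketch: soft-margin SVM with L1-norm regularisation,
% linearised as an LP. The symbols w, b, xi, C are generic
% assumptions, not taken from the paper.
\begin{align*}
\min_{w^+,\, w^-,\, b,\, \xi} \quad
  & \sum_{j=1}^{d} \left( w^+_j + w^-_j \right) + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad
  & y_i \left( (w^+ - w^-)^\top x_i + b \right) \ge 1 - \xi_i,
      && i = 1, \dots, n, \\
  & w^+_j \ge 0, \quad w^-_j \ge 0, && j = 1, \dots, d, \\
  & \xi_i \ge 0, && i = 1, \dots, n.
\end{align*}
```

The objective and all constraints are linear, so any LP solver applies, in contrast with the quadratic programme required by the standard L2-norm machine; an analogous LP arises for the L1-distance margin, where the dual norm to be minimised is ‖w‖∞.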
Original language | English |
---|---|
Pages (from-to) | 1263-1272 |
Number of pages | 10 |
Journal | Pattern Recognition Letters |
Volume | 22 |
Issue number | 12 |
DOIs | |
Publication status | Published - 2001 |
Keywords
- Linear programming
- Support vector machines
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence