A deep quasi-linear kernel composition method for support vector machines

Weite Li, Takayuki Furuzuki, Benhui Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

In this paper, we introduce a data-dependent kernel called the deep quasi-linear kernel, which can directly benefit from a pre-trained feedforward deep network. First, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feedforward neural network; the only difference between them is that the activation values of the hidden units in the multi-layer gated bilinear classifier depend on a pre-trained neural network rather than on a pre-defined activation function. Second, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, traditional optimization algorithms for a kernel SVM can be applied directly, implicitly optimizing the parameters of the multi-layer gated bilinear classifier. Experimental results on different data sets show that the proposed classifier outperforms both an SVM with an RBF kernel and the pre-trained feedforward deep network.
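
To make the kernel composition concrete, the sketch below illustrates the single-layer case under stated assumptions: 0/1 gate patterns taken from a (here, randomly initialized stand-in for a) pre-trained hidden layer modulate a linear kernel, and the composed Gram matrix is handed to an off-the-shelf SVM solver as a precomputed kernel. The function names, the specific gating scheme, and the kernel form are illustrative assumptions, not the authors' exact formulation; the paper's deep variant composes such gate similarities across multiple layers.

```python
import numpy as np
from sklearn.svm import SVC

def gates(X, W, b):
    # Activation pattern of a (hypothetical) pre-trained hidden layer:
    # 1.0 where a hidden unit fires, 0.0 otherwise.
    return (X @ W + b > 0).astype(float)

def quasi_linear_gram(X1, X2, W, b):
    # Illustrative single-layer quasi-linear kernel:
    # K(x, x') = (x . x' + 1) * (g(x) . g(x') / n_hidden)
    G1, G2 = gates(X1, W, b), gates(X2, W, b)
    return (X1 @ X2.T + 1.0) * (G1 @ G2.T) / W.shape[1]

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
X_te = rng.normal(size=(20, 5))
# Random weights standing in for a pre-trained network's hidden layer.
W, b = rng.normal(size=(5, 16)), rng.normal(size=16)

# Any standard kernel-SVM solver can be reused on the composed Gram matrix.
svm = SVC(kernel="precomputed").fit(quasi_linear_gram(X_tr, X_tr, W, b), y_tr)
predictions = svm.predict(quasi_linear_gram(X_te, X_tr, W, b))
```

Since the composed kernel is the pointwise product of two positive semi-definite kernels (the linear kernel and the gate-similarity kernel), it remains a valid kernel, which is what allows standard SVM optimization to be applied unchanged.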

Original language: English
Title of host publication: 2016 International Joint Conference on Neural Networks, IJCNN 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1639-1645
Number of pages: 7
Volume: 2016-October
ISBN (Electronic): 9781509006199
DOI: 10.1109/IJCNN.2016.7727394
Publication status: Published - 2016 Oct 31
Event: 2016 International Joint Conference on Neural Networks, IJCNN 2016 - Vancouver, Canada
Duration: 2016 Jul 24 - 2016 Jul 29

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Li, W., Furuzuki, T., & Chen, B. (2016). A deep quasi-linear kernel composition method for support vector machines. In 2016 International Joint Conference on Neural Networks, IJCNN 2016 (Vol. 2016-October, pp. 1639-1645). [7727394] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN.2016.7727394

@inproceedings{9d058b962820445d8404b82fd0cee7f2,
title = "A deep quasi-linear kernel composition method for support vector machines",
abstract = "In this paper, we introduce a data-dependent kernel called the deep quasi-linear kernel, which can directly benefit from a pre-trained feedforward deep network. First, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feedforward neural network; the only difference between them is that the activation values of the hidden units in the multi-layer gated bilinear classifier depend on a pre-trained neural network rather than on a pre-defined activation function. Second, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, traditional optimization algorithms for a kernel SVM can be applied directly, implicitly optimizing the parameters of the multi-layer gated bilinear classifier. Experimental results on different data sets show that the proposed classifier outperforms both an SVM with an RBF kernel and the pre-trained feedforward deep network.",
author = "Weite Li and Takayuki Furuzuki and Benhui Chen",
year = "2016",
month = "10",
day = "31",
doi = "10.1109/IJCNN.2016.7727394",
language = "English",
volume = "2016-October",
pages = "1639--1645",
booktitle = "2016 International Joint Conference on Neural Networks, IJCNN 2016",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",

}
