Abstract
In this paper, we introduce a data-dependent kernel, the deep quasi-linear kernel, which directly benefits from a pre-trained feedforward deep network. First, a multi-layer gated bilinear classifier is formulated to mimic the functionality of a feedforward neural network; the only difference between them is that the activation values of the hidden units in the multi-layer gated bilinear classifier depend on a pre-trained neural network rather than on a pre-defined activation function. Second, we demonstrate the equivalence between the multi-layer gated bilinear classifier and an SVM with a deep quasi-linear kernel. By deriving a kernel composition function, standard optimization algorithms for kernel SVMs can be applied directly to implicitly optimize the parameters of the multi-layer gated bilinear classifier. Experimental results on several data sets show that the proposed classifier outperforms both an SVM with an RBF kernel and the pre-trained feedforward deep network.
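The gating idea in the abstract can be sketched in code. In the single-hidden-layer case, a quasi-linear model of the form f(x) = Σ_j g_j(x)(w_j·x + b_j), with gates g_j(x) taken from a pre-trained ReLU network's activation pattern, corresponds to a kernel that is a linear kernel modulated by gate agreement. The snippet below is a minimal illustration under that assumption; the random weights stand in for a pre-trained network, and the exact multi-layer kernel composition function is derived in the paper itself, not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained one-hidden-layer ReLU network
# (in the paper the weights come from actual pre-training).
W = rng.standard_normal((5, 3))   # 5 hidden units, 3 inputs
b = rng.standard_normal(5)

def gates(x):
    # Binary activation pattern of the hidden units: 1 where ReLU is active.
    return (W @ x + b > 0).astype(float)

def quasi_linear_kernel(x, xp):
    # Illustrative single-layer quasi-linear kernel:
    #   K(x, x') = (x . x' + 1) * <g(x), g(x')>
    # a linear kernel weighted by how many gates the two inputs share,
    # with the gates supplied by the pre-trained network.
    return (x @ xp + 1.0) * (gates(x) @ gates(xp))

x1 = np.array([1.0, 0.5, -0.2])
x2 = np.array([0.9, 0.6, -0.1])
print(quasi_linear_kernel(x1, x2))
```

A kernel of this form is symmetric and positive semi-definite by construction, so it can be plugged into any off-the-shelf kernel SVM solver, which is what makes the implicit optimization described above possible.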
Original language | English |
---|---|
Title of host publication | 2016 International Joint Conference on Neural Networks, IJCNN 2016 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1639-1645 |
Number of pages | 7 |
Volume | 2016-October |
ISBN (Electronic) | 9781509006199 |
DOIs | |
Publication status | Published - 2016 Oct 31 |
Event | 2016 International Joint Conference on Neural Networks, IJCNN 2016 - Vancouver, Canada. Duration: 2016 Jul 24 → 2016 Jul 29
Other
Other | 2016 International Joint Conference on Neural Networks, IJCNN 2016 |
---|---|
Country | Canada |
City | Vancouver |
Period | 2016 Jul 24 → 2016 Jul 29
ASJC Scopus subject areas
- Software
- Artificial Intelligence