Non-local information for a mixture of multiple linear classifiers

Weite Li, Peifeng Liang, Xin Yuan, Jinglu Hu

Research output: Conference contribution

8 Citations (Scopus)

Abstract

For many problems in machine learning, the data are nonlinearly distributed. One popular way to tackle such data is to train a local kernel machine or a mixture of several locally linear models. However, both approaches rely heavily on local information, such as the neighbor relations of each data sample, to capture the underlying data distribution. In this paper, we show that non-local information is more efficient for data representation. Using a winner-take-all autoencoder, several non-local templates are trained to trace the data distribution and to represent each sample in different subspaces with suitable weights. By training a linear model for each subspace in a divide-and-conquer manner, a single support vector machine can be formulated to solve nonlinear classification problems. Experimental results demonstrate that a mixture of multiple linear classifiers built from non-local information performs better than, or is at least competitive with, state-of-the-art mixtures of locally linear models.
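The abstract describes a gating construction: non-local templates assign each sample a weight per subspace, and the weighted copies are stacked so that one linear SVM realizes the whole mixture of linear classifiers. The following is a minimal sketch of that idea, not the authors' code: the softmax-style `gating_weights`, the random stand-in templates, and all parameter choices are illustrative assumptions, since the paper learns its templates with a winner-take-all autoencoder and the exact weighting scheme is not given here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def gating_weights(X, templates, beta=1.0):
    # Softmax over negative squared distances to each template;
    # the precise weighting used in the paper is assumed, not quoted.
    sims = -beta * ((X[:, None, :] - templates[None, :, :]) ** 2).sum(-1)
    w = np.exp(sims - sims.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)  # shape (n_samples, K)

def composite_features(X, templates):
    # Replicate each sample into one block per template, scaled by its
    # gating weight; a single linear SVM on this stacked feature then
    # acts as a mixture of K linear classifiers, one per subspace.
    W = gating_weights(X, templates)                  # (n, K)
    return (W[:, :, None] * X[:, None, :]).reshape(len(X), -1)

# Toy usage with synthetic data and hypothetical random templates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)               # nonlinear labels
templates = rng.normal(size=(4, 5))                   # stand-in templates
clf = LinearSVC().fit(composite_features(X, templates), y)
print(clf.score(composite_features(X, templates), y))
```

Stacking weighted copies of the input is one standard way to fold a mixture of linear models into a single linear learner; the gated blocks keep each subspace's classifier separable while the SVM is trained once over all of them.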

Original language: English
Title of host publication: 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3741-3746
Number of pages: 6
ISBN (Electronic): 9781509061815
DOI
Publication status: Published - 2017 Jun 30
Event: 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Anchorage, United States
Duration: 2017 May 14 - 2017 May 19

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2017-May

Other

Other: 2017 International Joint Conference on Neural Networks, IJCNN 2017
Country/Territory: United States
City: Anchorage
Period: 17/5/14 - 17/5/19

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
