Tight lower bound of generalization error in ensemble learning

Research output: Conference contribution

Abstract

A machine learning method that integrates multiple component predictors into a single predictor is referred to as ensemble learning. In this paper, we derive a tight lower bound on the generalization error in ensemble learning through asymptotic analysis, using a mathematical model built on an exponential mixture of probability distributions and the Kullback-Leibler divergence. In addition, we derive an explicit expression for the weight parameter of the exponential mixture that minimizes this lower bound and discuss the properties of the derived weight parameter.
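The abstract does not reproduce the model itself. As a minimal sketch, assuming the standard definition of an exponential mixture (the symbols $p_1, \dots, p_K$, $w$, $Z(w)$, and $q$ below are illustrative, not taken from the paper): component distributions $p_1, \dots, p_K$ are combined with a weight parameter $w = (w_1, \dots, w_K)$, $w_i \ge 0$, $\sum_i w_i = 1$, into

$$
p_w(x) = \frac{1}{Z(w)} \prod_{i=1}^{K} p_i(x)^{w_i},
\qquad
Z(w) = \int \prod_{i=1}^{K} p_i(x)^{w_i} \, dx,
$$

and the generalization error of $p_w$ relative to the true distribution $q$ is measured by the Kullback-Leibler divergence

$$
D_{\mathrm{KL}}(q \,\|\, p_w) = \int q(x) \log \frac{q(x)}{p_w(x)} \, dx.
$$

The paper's tight lower bound and the minimizing choice of $w$ are stated with respect to quantities of this kind.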

Original language: English
Title of host publication: 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1130-1133
Number of pages: 4
ISBN (electronic): 9781479959556
DOI
Publication status: Published - 2014 Feb 18
Externally published: Yes
Event: 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014 - Kitakyushu, Japan
Duration: 2014 Dec 3 - 2014 Dec 6

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
