Tight lower bound of generalization error in ensemble learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A machine learning method that combines multiple component predictors into a single predictor is referred to as ensemble learning. In this paper, we derive a tight lower bound on the generalization error in ensemble learning through an asymptotic analysis of a mathematical model built on an exponential mixture of probability distributions and the Kullback-Leibler divergence. In addition, we derive an explicit expression for the weight parameter of the exponential mixture that minimizes this tight lower bound, and we discuss the properties of the derived weight parameter.
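
For context, the two objects named in the abstract can be written in their standard forms as below. This is a minimal sketch of the usual formulation, not taken from the paper itself; the symbols p_k, w_k, q, and Z(w) are illustrative and may differ from the authors' notation.

  % Exponential (geometric) mixture of K component distributions p_k
  % with nonnegative weights w_k summing to one, normalized by Z(w):
  p(x \mid w) = \frac{1}{Z(w)} \prod_{k=1}^{K} p_k(x)^{w_k},
  \qquad w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1,
  \qquad Z(w) = \int \prod_{k=1}^{K} p_k(x)^{w_k} \, dx .

  % Generalization error measured by the Kullback-Leibler divergence
  % from the true distribution q(x) to the mixture p(x | w):
  D_{\mathrm{KL}}\bigl( q \,\|\, p(\cdot \mid w) \bigr)
  = \int q(x) \log \frac{q(x)}{p(x \mid w)} \, dx .

Under this formulation, the weight parameter discussed in the paper is the vector w that minimizes a lower bound on this KL-based error.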

Original language: English
Title of host publication: 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1130-1133
Number of pages: 4
ISBN (Electronic): 9781479959556
DOIs
Publication status: Published - 2014 Feb 18
Externally published: Yes
Event: 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014 - Kitakyushu, Japan
Duration: 2014 Dec 3 - 2014 Dec 6

Other

Other: 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems, SCIS 2014 and 15th International Symposium on Advanced Intelligent Systems, ISIS 2014
Country: Japan
City: Kitakyushu
Period: 14/12/3 - 14/12/6

Keywords

  • asymptotic analysis
  • ensemble learning
  • exponential mixture model
  • generalization error
  • parameter estimation

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
