A nonparametric clustering algorithm with a quantile-based likelihood estimator

Hideitsu Hino, Noboru Murata

Research output: Contribution to journal › Letter

3 Citations (Scopus)

Abstract

Clustering is a representative task in unsupervised learning and one of the important approaches in exploratory data analysis. By its very nature, clustering without strong assumptions on the data distribution is desirable. Information-theoretic clustering is a class of clustering methods that optimize information-theoretic quantities such as entropy and mutual information. These quantities can be estimated in a nonparametric manner, so information-theoretic clustering algorithms are capable of capturing various intrinsic data structures. It is also possible to estimate information-theoretic quantities from a data set in which each datum carries a sampling weight. By assuming that each datum is sampled from a certain cluster and assigning sampling weights that depend on the cluster, cluster-conditional information-theoretic quantities can be estimated. In this letter, a simple iterative clustering algorithm is proposed, based on a nonparametric estimator of the log likelihood for weighted data sets. The clustering algorithm can also be derived from the principle of conditional entropy minimization with maximum entropy regularization, and it contains no tuning parameters. The algorithm is shown experimentally to be comparable to, or to outperform, conventional nonparametric clustering methods.
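To illustrate the general scheme the abstract describes — iteratively reassigning each datum to the cluster under which its nonparametrically estimated, weighted log likelihood is highest — the following sketch substitutes a simple weighted Gaussian kernel density estimator for the paper's quantile-based estimator. All function names, the bandwidth choice, and the hard 0/1 sampling weights are illustrative assumptions, not details taken from the letter (in particular, the actual method contains no tuning parameter, whereas this sketch has a bandwidth):

```python
import numpy as np

def weighted_kde_loglik(x, data, weights, bandwidth=0.5):
    """Log likelihood of point x under a weighted Gaussian KDE
    built from `data`, with one sampling weight per datum.
    (Stand-in for the paper's quantile-based estimator.)"""
    total = np.sum(weights)
    if total == 0:  # empty cluster: likelihood is undefined
        return -np.inf
    d = data.shape[1]
    diffs = (data - x) / bandwidth
    kernels = np.exp(-0.5 * np.sum(diffs ** 2, axis=1))
    kernels /= (2 * np.pi) ** (d / 2) * bandwidth ** d
    dens = np.dot(weights, kernels) / total
    return np.log(dens + 1e-300)

def iterative_cluster(X, n_clusters, n_iter=20, seed=None):
    """Iteratively reassign each point to the cluster maximizing its
    cluster-conditional weighted log likelihood (hard assignments)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(X))
    for _ in range(n_iter):
        # hard assignment -> 0/1 sampling weight matrix, shape (n, k)
        W = np.eye(n_clusters)[labels]
        new = np.empty_like(labels)
        for i, x in enumerate(X):
            Wi = W.copy()
            Wi[i] = 0.0  # leave-one-out: exclude x's own kernel
            lls = [weighted_kde_loglik(x, X, Wi[:, k])
                   for k in range(n_clusters)]
            new[i] = int(np.argmax(lls))
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

On two well-separated blobs this loop typically converges in a few iterations, since each point moves to the cluster whose (weighted) members lie nearest to it; the hedge above applies — it is only a minimal stand-in for the weighted nonparametric likelihood estimation the letter develops.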

Original language: English
Pages (from-to): 2074-2101
Number of pages: 28
Journal: Neural Computation
Volume: 26
Issue number: 9
Publication status: Published - 2014 Sep 13

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
