α-EM algorithm and α-ICA learning based upon extended logarithmic information measures

Yasuo Matsuyama*, Takeshi Niimoto, Naoto Katsumata, Yoshitaka Suzuki, Satoshi Furukawa

*Corresponding author of this work

    Research output: Conference contribution

    7 Citations (Scopus)

    Abstract

    The α-logarithm extends the ordinary logarithm, which is recovered as the special case α = -1. Information measures based upon this extended logarithm are expected to speed up convergence, i.e., to improve learning aptitude. In this paper, two representative cases are investigated. One is the α-EM algorithm (α-Expectation-Maximization algorithm), which is derived from the α-log-likelihood ratio. The other is α-ICA (α-Independent Component Analysis), which is formulated as the minimization of the α-mutual information. In the derivation of both algorithms, the α-divergence plays the central role. For the α-EM algorithm, the reason for the speedup is explained using the Hessian and Jacobian matrices of the learning iteration. For α-ICA learning, methods of exploiting past and future information are presented. Examples are shown for single-loop α-EM algorithms and sample-based α-ICA algorithms. In all cases, effective speedups are observed. Thus, this paper's examples, together with formerly reported ones, are evidence that the speed improvement afforded by the α-logarithm is a general property that extends beyond individual problems.
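    The abstract's claim that the α-logarithm reduces to the ordinary logarithm at α = -1 can be sketched numerically. The function below uses one common normalization of the α-logarithm (the exact convention in the paper may differ); the function name `alpha_log` is illustrative, not from the source.

    ```python
    import math

    def alpha_log(x, alpha):
        """Alpha-logarithm sketch: for alpha != -1,
        L_alpha(x) = (x**r - 1) / r with r = (1 + alpha) / 2,
        which tends to log(x) as alpha -> -1 (r -> 0)."""
        if alpha == -1:
            return math.log(x)
        r = (1 + alpha) / 2
        return (x ** r - 1) / r
    ```

    Evaluating `alpha_log(2.0, -0.999999)` gives a value within about 1e-4 of `math.log(2.0)`, illustrating the limiting behavior; other choices of α yield the power-family surrogates on which the α-EM and α-ICA updates are built.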

    Original language: English
    Host publication title: Proceedings of the International Joint Conference on Neural Networks
    Place of publication: Piscataway, NJ, United States
    Publisher: IEEE
    Pages: 351-356
    Number of pages: 6
    Volume: 3
    Publication status: Published - 2000
    Event: International Joint Conference on Neural Networks (IJCNN'2000) - Como, Italy
    Duration: 24 Jul 2000 - 27 Jul 2000


    ASJC Scopus subject areas

    • Software

