Topological measurement of deep neural networks using persistent homology

Satoru Watanabe*, Hayato Yamana


Research output: Article › peer-review

3 Citations (Scopus)


The inner representation of deep neural networks (DNNs) is indecipherable, which makes it difficult to tune DNN models, control their training process, and interpret their outputs. In this paper, we propose a novel approach to investigating the inner representation of DNNs through topological data analysis (TDA). Persistent homology (PH), one of the principal tools of TDA, was employed to investigate the complexities of trained DNNs. We constructed clique complexes on trained DNNs and calculated the one-dimensional PH of the DNNs. The PH reveals the combinational effects of multiple neurons in DNNs at different resolutions, which are difficult to capture without PH. Evaluations were conducted using fully connected networks (FCNs) and networks combining FCNs with convolutional neural networks (CNNs), trained on the MNIST and CIFAR-10 data sets. The evaluation results demonstrate that the PH of DNNs reflects both the excess of neurons and the problem difficulty, making PH a promising method for investigating the inner representation of DNNs.
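The pipeline the abstract describes — a weighted graph over neurons, a clique (flag) complex built on it, and one-dimensional PH computed over the resulting filtration — can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the edge filtration values are assumed to come from some decreasing function of trained connection-weight magnitudes, triangles are assigned the flag-complex rule (a clique appears when its last edge appears), and persistence is computed with the standard Z/2 boundary-matrix reduction restricted to dimensions ≤ 2.

```python
from itertools import combinations

def flag_persistence_h1(n_vertices, edges):
    """One-dimensional persistent homology of the flag (clique) complex
    of a weighted graph, via standard Z/2 boundary-matrix reduction.

    edges: dict {(u, v): filtration_value} with u < v; smaller values
    appear earlier in the filtration (e.g. a decreasing function of |w|
    for a trained connection weight w -- an assumption for illustration).
    """
    # Flag-complex rule: a triangle enters when its latest edge enters.
    triangles = {}
    for u, v, w in combinations(range(n_vertices), 3):
        if (u, v) in edges and (u, w) in edges and (v, w) in edges:
            triangles[(u, v, w)] = max(edges[(u, v)], edges[(u, w)], edges[(v, w)])

    # Order all simplices by (filtration value, dimension).
    simplices = ([(0.0, 0, (v,)) for v in range(n_vertices)]
                 + [(f, 1, e) for e, f in edges.items()]
                 + [(f, 2, t) for t, f in triangles.items()])
    simplices.sort(key=lambda s: (s[0], s[1]))
    index = {s[2]: i for i, s in enumerate(simplices)}

    def boundary(dim, verts):
        # Z/2 boundary: the set of (dim-1)-faces of the simplex.
        return set() if dim == 0 else {index[c] for c in combinations(verts, dim)}

    low_to_col = {}   # pivot (lowest-one) index -> reduced column
    pairs = []        # (birth simplex index, death simplex index)
    creators = set()  # indices whose column reduced to zero
    for i, (f, dim, verts) in enumerate(simplices):
        col = boundary(dim, verts)
        while col and max(col) in low_to_col:
            col ^= low_to_col[max(col)]   # column addition over Z/2
        if col:
            low_to_col[max(col)] = col
            pairs.append((max(col), i))
        else:
            creators.add(i)

    # H1 bars: an edge paired with a triangle gives a finite bar;
    # an unpaired creator edge gives an essential (infinite) bar.
    bars = []
    for b, d in pairs:
        if simplices[b][1] == 1 and simplices[b][0] < simplices[d][0]:
            bars.append((simplices[b][0], simplices[d][0]))
    paired_births = {b for b, _ in pairs}
    for i in creators:
        if simplices[i][1] == 1 and i not in paired_births:
            bars.append((simplices[i][0], float("inf")))
    return sorted(bars)
```

For example, a 4-cycle (square) yields one essential 1-dimensional class born when its last edge appears; adding a diagonal fills the cycle with two triangles and turns the bar finite. Measuring how such bars appear and die across thresholds is what lets PH summarize combined multi-neuron structure at different resolutions.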

Journal: Annals of Mathematics and Artificial Intelligence
Publication status: Published - January 2022

ASJC Scopus subject areas

  • Artificial Intelligence
  • Applied Mathematics

