Topological measurement of deep neural networks using persistent homology

Satoru Watanabe*, Hayato Yamana

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The inner representation of deep neural networks (DNNs) is indecipherable, which makes it difficult to tune DNN models, control their training process, and interpret their outputs. In this paper, we propose a novel approach to investigating the inner representation of DNNs through topological data analysis (TDA). Persistent homology (PH), a prominent method in TDA, was employed to investigate the complexities of trained DNNs. We constructed clique complexes on trained DNNs and calculated their one-dimensional PH. The PH reveals the combined effects of multiple neurons in DNNs at different resolutions, which are difficult to capture without PH. Evaluations were conducted using fully connected networks (FCNs) and networks combining FCNs and convolutional neural networks (CNNs), trained on the MNIST and CIFAR-10 data sets. The evaluation results demonstrate that the PH of DNNs reflects both the excess of neurons and the problem difficulty, making PH a promising method for investigating the inner representation of DNNs.
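The clique-complex construction sketched in the abstract can be illustrated in code. The following is a minimal sketch, not the authors' implementation: it treats the neurons of two adjacent fully connected layers as vertices and derives a filtration distance from trained weight magnitudes (the 1/|w| transform and the toy weight matrix are illustrative assumptions). The resulting distance matrix is in the form that PH libraries such as ripser.py or GUDHI accept for computing one-dimensional persistence.

```python
import numpy as np

def dnn_distance_matrix(weight, eps=1e-8):
    """Build a symmetric distance matrix over the neurons of two
    adjacent fully connected layers. Using 1/|w| makes strongly
    connected neuron pairs appear early in the Rips (clique-complex)
    filtration. (The 1/|w| transform is an illustrative assumption,
    not necessarily the paper's exact construction.)"""
    n_in, n_out = weight.shape
    n = n_in + n_out
    large = 1.0 / eps  # effectively infinite distance for pairs with no edge
    dist = np.full((n, n), large)
    np.fill_diagonal(dist, 0.0)
    # Edges exist only between the two layers (bipartite connectivity).
    d = 1.0 / (np.abs(weight) + eps)
    dist[:n_in, n_in:] = d
    dist[n_in:, :n_in] = d.T
    return dist

# Toy "trained" weight matrix: 3 input neurons, 2 output neurons.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
D = dnn_distance_matrix(W)

# A PH library would then compute the persistence diagrams, e.g.:
#   from ripser import ripser
#   dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
print(D.shape)  # (5, 5)
```

One-dimensional features (holes) in this filtration correspond to cycles of neurons linked by strong weights, which is the kind of multi-neuron combined effect the abstract refers to.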

Original language: English
Journal: Annals of Mathematics and Artificial Intelligence
DOIs
Publication status: Accepted/In press - 2021

Keywords

  • Convolutional neural network
  • Deep neural network
  • Persistent homology
  • Topological data analysis

ASJC Scopus subject areas

  • Artificial Intelligence
  • Applied Mathematics
