The global optimum of shallow neural network is attained by ridgelet transform

Sho Sonoda, Isao Ishikawa, Masahiro Ikeda, Kei Hagihara, Yoshihiro Sawano, Takuo Matsubara, Noboru Murata

Research output: Contribution to journal › Article › peer-review

Abstract

We prove that the global minimum of the backpropagation (BP) training problem for neural networks with an arbitrary nonlinear activation function is given by the ridgelet transform. A series of computational experiments shows an interesting similarity between the scatter plot of the hidden parameters of a shallow neural network after BP training and the spectrum of the ridgelet transform. By introducing a continuous model of neural networks, we reduce the training problem to a convex optimization problem in an infinite-dimensional Hilbert space and obtain an explicit expression for the global optimizer via the ridgelet transform.
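The abstract states no formulas, so the following is a minimal sketch of the definitions standard in this line of work; the activation η, the ridgelet function ψ, and the hidden parameters (a, b) are illustrative notation, not taken from the paper itself. The continuous model replaces the finite hidden layer by a signed density T(a, b) over hidden parameters:

\[
g(x) \;=\; \int_{\mathbb{R}^{m}\times\mathbb{R}} T(a,b)\,\eta(a\cdot x - b)\,\mathrm{d}a\,\mathrm{d}b ,
\]

and the ridgelet transform of a target function f with respect to a suitable ψ is

\[
(\mathcal{R}_{\psi}f)(a,b) \;=\; \int_{\mathbb{R}^{m}} f(x)\,\overline{\psi(a\cdot x - b)}\,\mathrm{d}x .
\]

When ψ is admissible with respect to η, the reconstruction formula \(\mathcal{R}_{\eta}^{*}\mathcal{R}_{\psi} f = f\) holds (up to a normalizing constant), which is what makes the ridgelet spectrum \(\mathcal{R}_{\psi}f\) a natural closed-form candidate for the optimal density T.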

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 May 19

ASJC Scopus subject areas

  • General
