Deep neural networks with flexible complexity while training based on neural ordinary differential equations

Zhengbo Luo, Sei Ichiro Kamata, Zitang Sun, Weilian Zhou

Research output: Contribution to journal › Conference article › peer-review

Abstract

Most deep neural network (DNN) architectures have a fixed complexity in both computational cost (parameters and FLOPs) and expressiveness. In this work, we experimentally investigate the effectiveness of using neural ordinary differential equations (NODEs) as a component that provides additional depth to relatively shallow networks instead of stacking more layers, achieving improvement with fewer parameters. Moreover, we construct deep neural networks with flexible complexity based on NODEs, which enables the system to adjust its complexity during training. The proposed method achieves more parameter-efficient performance than stacking standard DNN layers, and it alleviates the heavy computational cost required by NODEs.
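The abstract does not give implementation details. As a rough illustration of the core idea of drawing extra depth from a NODE component rather than from stacked layers, the sketch below is a minimal PyTorch example with a fixed-step RK4 integrator; the ODEBlock class and the n_steps knob are hypothetical names introduced here, not the authors' code.

```python
# Minimal sketch (not the paper's implementation): a neural-ODE block that
# stands in for a stack of residual layers. Increasing n_steps deepens the
# computation without adding parameters, which is one plausible way the
# complexity of the network could be adjusted while training.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Dynamics f(h, t), parameterised by a small convolutional network."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, t, h):
        return self.net(h)


class ODEBlock(nn.Module):
    """Integrates dh/dt = f(h, t) from t = 0 to t = 1 with fixed-step RK4."""

    def __init__(self, channels, n_steps=4):
        super().__init__()
        self.func = ODEFunc(channels)
        self.n_steps = n_steps  # hypothetical complexity knob

    def forward(self, h):
        dt = 1.0 / self.n_steps
        t = torch.tensor(0.0, device=h.device)
        for _ in range(self.n_steps):
            # Classic RK4 update of the hidden state h along the ODE flow.
            k1 = self.func(t, h)
            k2 = self.func(t + dt / 2, h + dt / 2 * k1)
            k3 = self.func(t + dt / 2, h + dt / 2 * k2)
            k4 = self.func(t + dt, h + dt * k3)
            h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + dt
        return h


if __name__ == "__main__":
    block = ODEBlock(channels=16, n_steps=2)  # shallow setting
    x = torch.randn(8, 16, 32, 32)
    y = block(x)
    block.n_steps = 8  # deepen mid-training; parameter count is unchanged
    print(y.shape, block(x).shape)
```

Because the solver steps reuse the same weights, changing n_steps trades compute for effective depth without changing the parameter count, which is the kind of flexibility the abstract describes.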

Original language: English
Pages (from-to): 1690-1694
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOIs
Publication status: Published - 2021
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 2021 Jun 6 - 2021 Jun 11

Keywords

  • Image classification
  • Neural networks
  • Neural ordinary differential equations
  • Supervised learning

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
