Most deep neural network (DNN) architectures have a fixed complexity in both computational cost (parameters and FLOPs) and expressiveness. In this work, we experimentally investigate the effectiveness of using neural ordinary differential equations (NODEs) as a component that adds depth to relatively shallow networks in place of stacked layers, achieving improvements with fewer parameters. Moreover, we construct deep neural networks with flexible complexity based on NODEs, which enables the system to adjust its complexity during training. The proposed method achieves more parameter-efficient performance than stacking standard DNN layers, and it alleviates the heavy computational cost incurred by NODEs.
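The core idea of using a NODE as a depth-providing component can be sketched as follows. This is a minimal illustration, not the paper's implementation: a fixed-step Euler integrator stands in for a full ODE solver, the dynamics function `f` and its weight matrix `W` are hypothetical, and the `steps` argument models the "flexible complexity" knob — more integration steps behave like more layers while the parameter count stays constant.

```python
import numpy as np

def node_block(x, f, t0=0.0, t1=1.0, steps=8):
    """Integrate dh/dt = f(h, t) from t0 to t1 with fixed-step Euler.

    Each integration step plays the role of one stacked layer, but the
    parameters of f are shared across all steps, so effective depth is
    added without adding parameters. Increasing `steps` adjusts the
    block's complexity without retraining f from scratch.
    """
    h = x
    dt = (t1 - t0) / steps
    for i in range(steps):
        t = t0 + i * dt
        h = h + dt * f(h, t)
    return h

# Hypothetical dynamics: one shared weight matrix with a tanh nonlinearity.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1

def f(h, t):
    return np.tanh(h @ W)

x = rng.standard_normal((2, 4))
out_shallow = node_block(x, f, steps=4)   # cheaper, effectively shallower
out_deep = node_block(x, f, steps=32)     # effectively deeper, same parameter count
```

Because both calls reuse the same `W`, the deeper variant costs more FLOPs but no extra parameters, which is the trade-off the abstract refers to.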
|Journal||ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings|
|Publication status||Published - 2021|
|Event||2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada|
Duration: 6 Jun 2021 → 11 Jun 2021