Abstract
Neural networks are widely known to provide a method for approximating nonlinear functions. To clarify their approximation ability, a new theorem on an integral transform of ridge functions is presented. Using this theorem, an approximation bound is obtained that quantifies the relationship between the approximation accuracy and the number of elements in the hidden layer. This result shows that the approximation accuracy depends on the smoothness of the target function. It also shows that approximation methods based on ridge functions are free from the 'curse of dimensionality'.
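For orientation, bounds of this kind (in the style of Barron-type results for three-layered networks) typically take the following shape; this is a sketch of the general form under assumed notation, not necessarily the exact statement proved in the paper. Here $f_n$ denotes a network with $n$ hidden units built from ridge functions, and $C_f$ is an assumed smoothness constant of the target $f$.

```latex
% Sketch of a Barron-type approximation bound (assumed general form,
% not the paper's exact statement). f_n is a three-layered network with
% n hidden units, each computing a ridge function g(a_i^T x + b_i), and
% C_f is a smoothness constant of the target f (e.g. a moment of its
% Fourier transform). The rate in n does not involve the input
% dimension, which is the sense in which such methods are said to be
% free from the curse of dimensionality.
\[
  \inf_{f_n} \, \lVert f - f_n \rVert_{L^2(\mu)}
  \;\le\; \frac{C_f}{\sqrt{n}},
  \qquad
  f_n(x) \;=\; \sum_{i=1}^{n} c_i \, g\!\left(a_i^{\top} x + b_i\right).
\]
```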
Original language | English |
---|---|
Pages (from-to) | 947-956 |
Number of pages | 10 |
Journal | Neural Networks |
Volume | 9 |
Issue number | 6 |
DOIs | |
Publication status | Published - 1996 Aug |
Externally published | Yes |
Keywords
- approximation bound
- curse of dimensionality
- integral transform
- random coding
- ridge function
- three-layered network
ASJC Scopus subject areas
- Cognitive Neuroscience
- Artificial Intelligence