In this paper, we propose a neural network ensemble model that can be trained with a supervisor having two kinds of input-output functions whose occurrence probabilities are uneven. This condition can be likened to learning in which the training data are corrupted by noise: from the network's viewpoint, the supervisor behaves probabilistically, generating correct training data most of the time but occasionally generating erroneous data. The objective is to train the neural network to approximate the input-output relation that occurs with the greater probability, which can be regarded as the principal nature of the supervisor, so that the resulting network can, to some extent, suppress the ill effects of the erroneous data encountered during learning.
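The learning condition described above can be illustrated with a small sketch. This is not the paper's ensemble method; it is a toy stand-in in which a "supervisor" usually applies its principal function but occasionally applies an erroneous one, and a robust estimator (here, a median of per-sample slope estimates, chosen purely for illustration) recovers the dominant relation while a plain average is pulled toward the erroneous behavior. The functions `supervisor`, the slopes 2 and -1, and the 0.8 occurrence probability are all hypothetical.

```python
import random

random.seed(0)

def supervisor(x, p_correct=0.8):
    """Probabilistic supervisor: principal behaviour y = 2x most of the
    time, erroneous behaviour y = -x otherwise (illustrative values)."""
    return 2.0 * x if random.random() < p_correct else -1.0 * x

# Draw training inputs and let the noisy supervisor label them.
xs = [random.uniform(1.0, 10.0) for _ in range(1000)]
ys = [supervisor(x) for x in xs]

# Each sample yields a slope estimate y/x (either 2 or -1 here).
slopes = [y / x for x, y in zip(xs, ys)]

mean_slope = sum(slopes) / len(slopes)            # biased by erroneous data
median_slope = sorted(slopes)[len(slopes) // 2]   # robust to the minority errors

print(mean_slope)    # pulled away from 2.0 toward the erroneous -1.0
print(median_slope)  # recovers the principal relation, 2.0
```

The mean lands near the probability-weighted mixture of the two behaviors, whereas the median recovers the principal slope, mirroring the goal of approximating the supervisor's dominant input-output relation.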
Number of pages: 11
Journal: International Journal of Neural Systems
Publication status: Published - 2002 Jun

ASJC Scopus subject areas:
- Computer Networks and Communications