Extracting the principal behavior of a probabilistic supervisor through neural networks ensemble.

Pitoyo Hartono, Shuji Hashimoto

    Research output: Contribution to journal › Article

    3 Citations (Scopus)

    Abstract

    In this paper, we propose a model of a neural network ensemble that can be trained with a supervisor having two kinds of input-output functions where the occurrence probability of each function is not even. This condition can be likened to a learning condition, in which the learning data are hampered by noise. In this case, the neural network has the impression that the learning supervisor (object) has a probabilistic behavior in which the supervisor generates correct learning data most of the time but occasionally generates erroneous ones. The objective is to train the neural network to approximate the greatest distributed input-output relation, which can be considered to be the principal nature of the supervisor, so that we can obtain a neural network that is able, to some extent, to suppress the ill effect of erroneous data encountered during the learning process.
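    The learning condition described in the abstract can be sketched in code. The following is not the paper's algorithm (which is not reproduced in this record); it is a minimal, assumption-laden illustration: a supervisor that emits its principal function (here, y = 2x) 80% of the time and an erroneous function (y = -x) otherwise, and a small winner-take-all ensemble of linear models in which only the best-fitting member updates on each sample, so that the most frequently winning member ends up approximating the principal behavior. All numbers (80/20 split, three members, learning rate) are illustrative choices, not values from the paper.

    ```python
    import random

    random.seed(0)

    # Probabilistic supervisor: principal behavior y = 2x with
    # probability 0.8, erroneous behavior y = -x with probability 0.2.
    # (Assumed example functions; not taken from the paper.)
    def supervisor(x):
        return 2.0 * x if random.random() < 0.8 else -1.0 * x

    # Ensemble of linear models y = w * x, trained competitively:
    # for each sample, only the member with the smallest error updates,
    # so members specialize on different input-output relations.
    weights = [random.uniform(-1, 1) for _ in range(3)]
    wins = [0, 0, 0]
    lr = 0.05

    for _ in range(5000):
        x = random.uniform(-1, 1)
        y = supervisor(x)
        errs = [(w * x - y) ** 2 for w in weights]
        k = min(range(3), key=lambda i: errs[i])
        weights[k] += lr * (y - weights[k] * x) * x  # gradient step, winner only
        wins[k] += 1

    # The member that wins most often captures the dominant relation,
    # suppressing the minority (erroneous) data.
    principal = weights[max(range(3), key=lambda i: wins[i])]
    print(round(principal, 2))
    ```

    With the split above, roughly 80% of samples are claimed by one member, whose weight converges near the principal slope of 2, while a second member absorbs the erroneous samples; this is the sense in which an ensemble can extract the "greatest distributed" input-output relation.
    
    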

    Original language: English
    Pages (from-to): 291-301
    Number of pages: 11
    Journal: International Journal of Neural Systems
    Volume: 12
    Issue number: 3-4
    Publication status: Published - Jun 2002

    ASJC Scopus subject areas

    • Computer Networks and Communications

    Cite this

    @article{db0b37e681bf4d5d8d324bc3482bd187,
      title     = "Extracting the principal behavior of a probabilistic supervisor through neural networks ensemble.",
      author    = "Pitoyo Hartono and Shuji Hashimoto",
      year      = "2002",
      month     = jun,
      journal   = "International Journal of Neural Systems",
      volume    = "12",
      number    = "3-4",
      pages     = "291--301",
      issn      = "0129-0657",
      publisher = "World Scientific Publishing Co. Pte Ltd",
      language  = "English",
    }

    TY - JOUR

    T1 - Extracting the principal behavior of a probabilistic supervisor through neural networks ensemble.

    AU - Hartono, Pitoyo

    AU - Hashimoto, Shuji

    PY - 2002/6

    Y1 - 2002/6


    UR - http://www.scopus.com/inward/record.url?scp=0345721821&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=0345721821&partnerID=8YFLogxK

    M3 - Article

    C2 - 12370956

    AN - SCOPUS:0345721821

    VL - 12

    SP - 291

    EP - 301

    JO - International Journal of Neural Systems

    JF - International Journal of Neural Systems

    SN - 0129-0657

    IS - 3-4

    ER -