An interpretable neural network ensemble

Pitoyo Hartono, Shuji Hashimoto

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    2 Citations (Scopus)

    Abstract

    The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. At present, most studies of rule extraction focus on extracting rules from existing neural network models that were designed without rule extraction in mind; after training, such models are meant to be used as a kind of black box, which makes rule extraction a hard task. In this study we construct a neural network ensemble designed with rule extraction in mind. The function of the ensemble can be easily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks contributes to their reliability and usability when applied to critical real-world problems.
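    The record above does not describe the ensemble's actual architecture, so the following is a purely illustrative sketch of the general idea the abstract names: interpreting an ensemble's members as human-readable if-then rules. Every name and design choice here is hypothetical; each member is modeled as a one-feature threshold classifier ("decision stump"), chosen only because its decision reads off directly as a logical rule, with a majority vote combining the members.

    ```python
    # Hypothetical sketch, NOT the paper's method: each ensemble member is a
    # one-feature threshold classifier whose decision is itself an if-then rule.

    def train_stump(samples, labels, feature):
        """Pick the threshold on one feature that best separates two classes."""
        best = (0.0, 1)          # (threshold, class predicted when value > threshold)
        best_err = len(samples) + 1
        for t in sorted({s[feature] for s in samples}):
            for above in (0, 1):
                err = sum(
                    1 for s, y in zip(samples, labels)
                    if (above if s[feature] > t else 1 - above) != y
                )
                if err < best_err:
                    best_err = err
                    best = (t, above)
        return best

    def stump_rule(feature, threshold, above):
        """Read a trained stump out as a human-readable logical rule."""
        return f"IF x[{feature}] > {threshold} THEN class {above} ELSE class {1 - above}"

    def ensemble_predict(stumps, x):
        """Majority vote over all members (ties resolved toward class 1)."""
        votes = [above if x[f] > t else 1 - above for f, (t, above) in stumps.items()]
        return int(sum(votes) * 2 >= len(votes))

    # Tiny synthetic data: class 1 when both features are large.
    samples = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.7)]
    labels = [0, 0, 1, 1]
    stumps = {f: train_stump(samples, labels, f) for f in (0, 1)}
    for f, (t, above) in stumps.items():
        print(stump_rule(f, t, above))
    print("prediction for (0.85, 0.8):", ensemble_predict(stumps, (0.85, 0.8)))
    ```

    The point of the sketch is the design constraint, not the particular learner: when each member's decision boundary is simple enough to verbalize, the ensemble's overall behavior can be explained as a small set of voted rules rather than treated as a black box.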

    Original language: English
    Title of host publication: IECON Proceedings (Industrial Electronics Conference)
    Pages: 228-232
    Number of pages: 5
    DOI: 10.1109/IECON.2007.4460332
    ISBN: 1424407834, 9781424407835
    Publication status: Published - 2007
    Event: 33rd Annual Conference of the IEEE Industrial Electronics Society, IECON - Taipei
    Duration: 2007 Nov 5 - 2007 Nov 8



    ASJC Scopus subject areas

    • Electrical and Electronic Engineering

    Cite this

    Hartono, P., & Hashimoto, S. (2007). An interpretable neural network ensemble. In IECON Proceedings (Industrial Electronics Conference) (pp. 228-232). [4460332] https://doi.org/10.1109/IECON.2007.4460332

    @inproceedings{697ab273457e4d768055adf52cfcdb79,
    title = "An interpretable neural network ensemble",
    author = "Pitoyo Hartono and Shuji Hashimoto",
    year = "2007",
    doi = "10.1109/IECON.2007.4460332",
    language = "English",
    isbn = "1424407834",
    pages = "228--232",
    booktitle = "IECON Proceedings (Industrial Electronics Conference)",

    }
