An interpretable neural network ensemble

Pitoyo Hartono, Shuji Hashimoto

    Research output: Conference contribution

    2 Citations (Scopus)

    Abstract

    The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. At present, most studies of rule extraction from trained neural networks focus on extracting rules from existing models that were designed without rule extraction in mind; after training, such networks are effectively black boxes, which makes rule extraction a hard task. In this study we construct a neural network ensemble designed from the outset with rule extraction in mind. The function of the ensemble can be easily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks improves their reliability and usability when they are applied to critical real-world problems.
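    The abstract's central idea, an ensemble whose structure is chosen up front so that its behaviour can be read off as logical rules, can be illustrated with a deliberately simplified analogue. This is not the paper's model; all names and data below are hypothetical. Each ensemble member here is a one-feature threshold classifier, so every member is literally an IF-THEN rule, and the ensemble combines them by majority vote.

```python
# Illustrative sketch only: an "interpretable ensemble" analogue in which
# every member is a one-feature threshold classifier, so each member's
# decision function is directly readable as an IF-THEN rule.

def train_stump(samples, labels, feature):
    """Choose the threshold on one feature that minimises training errors."""
    values = sorted({s[feature] for s in samples})
    best_thr, best_err = values[0], len(samples) + 1
    for lo, hi in zip(values, values[1:]):
        thr = (lo + hi) / 2.0
        err = sum((s[feature] > thr) != bool(y) for s, y in zip(samples, labels))
        if err < best_err:
            best_thr, best_err = thr, err
    return best_thr

def stump_rule(feature, thr):
    """Read a member's decision function off as a human-readable rule."""
    return f"IF x{feature} > {thr:.2f} THEN class 1 ELSE class 0"

def ensemble_predict(stumps, sample):
    """Majority vote over the interpretable members."""
    votes = sum(sample[f] > thr for f, thr in stumps)
    return int(2 * votes > len(stumps))

# Toy data: class 1 when both (hypothetical) features are high.
samples = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.4), (0.7, 0.8), (0.8, 0.9), (0.9, 0.7)]
labels = [0, 0, 0, 1, 1, 1]

stumps = [(f, train_stump(samples, labels, f)) for f in (0, 1)]
for f, thr in stumps:
    print(stump_rule(f, thr))
print("prediction for (0.85, 0.75):", ensemble_predict(stumps, (0.85, 0.75)))
```

    The paper's actual members are neural networks rather than threshold stumps, but the design principle sketched here is the same: interpretability comes from constraining each member's form before training, not from post-hoc extraction.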

    Original language: English
    Host publication title: IECON Proceedings (Industrial Electronics Conference)
    Pages: 228-232
    Number of pages: 5
    DOI: 10.1109/IECON.2007.4460332
    Publication status: Published - 2007
    Event: 33rd Annual Conference of the IEEE Industrial Electronics Society, IECON - Taipei
    Duration: 5 Nov 2007 - 8 Nov 2007



    ASJC Scopus subject areas

    • Electrical and Electronic Engineering

    Cite this

    Hartono, P., & Hashimoto, S. (2007). An interpretable neural network ensemble. In IECON Proceedings (Industrial Electronics Conference) (pp. 228-232). [4460332] https://doi.org/10.1109/IECON.2007.4460332

    @inproceedings{697ab273457e4d768055adf52cfcdb79,
    title = "An interpretable neural network ensemble",
    abstract = "The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. At present, most studies of rule extraction from trained neural networks focus on extracting rules from existing models that were designed without rule extraction in mind; after training, such networks are effectively black boxes, which makes rule extraction a hard task. In this study we construct a neural network ensemble designed from the outset with rule extraction in mind. The function of the ensemble can be easily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks improves their reliability and usability when they are applied to critical real-world problems.",
    author = "Pitoyo Hartono and Shuji Hashimoto",
    year = "2007",
    doi = "10.1109/IECON.2007.4460332",
    language = "English",
    isbn = "1424407834",
    pages = "228--232",
    booktitle = "IECON Proceedings (Industrial Electronics Conference)",

    }

    TY - GEN

    T1 - An interpretable neural network ensemble

    AU - Hartono, Pitoyo

    AU - Hashimoto, Shuji

    PY - 2007

    Y1 - 2007

    AB - The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. At present, most studies of rule extraction from trained neural networks focus on extracting rules from existing models that were designed without rule extraction in mind; after training, such networks are effectively black boxes, which makes rule extraction a hard task. In this study we construct a neural network ensemble designed from the outset with rule extraction in mind. The function of the ensemble can be easily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks improves their reliability and usability when they are applied to critical real-world problems.

    UR - http://www.scopus.com/inward/record.url?scp=49949111981&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=49949111981&partnerID=8YFLogxK

    U2 - 10.1109/IECON.2007.4460332

    DO - 10.1109/IECON.2007.4460332

    M3 - Conference contribution

    AN - SCOPUS:49949111981

    SN - 1424407834

    SN - 9781424407835

    SP - 228

    EP - 232

    BT - IECON Proceedings (Industrial Electronics Conference)

    ER -