Objection! Identifying Misclassified Malicious Activities with XAI

Koji Fujita*, Toshiki Shibahara, Daiki Chiba, Mitsuaki Akiyama, Masato Uchida

*Corresponding author of this work

Research output: peer-reviewed

Abstract

Many studies have been conducted to detect various malicious activities in cyberspace using classifiers built by machine learning. However, any classifier inevitably makes mistakes, and hence human verification is necessary. One approach to this issue is eXplainable AI (XAI), which provides a reason for each classification result. However, when the number of classification results to be verified is large, checking the XAI output for every case is impractical. In addition, XAI output is sometimes difficult to interpret. In this study, we propose a machine learning model, called a classification verifier, that verifies classification results by using XAI output as features and raises an objection when the reliability of a classification result is in doubt. Experiments on malicious website detection and malware detection show that the proposed classification verifier can efficiently identify misclassified malicious activities.
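The idea of a classification verifier can be illustrated with a minimal sketch. This is not the authors' implementation: it uses per-feature contributions (coefficient × feature value) of a logistic regression as a crude stand-in for XAI output, and trains a second model on those contributions to predict when the primary classifier is wrong, "objecting" to high-risk results so that human analysts can prioritize them.

```python
# Sketch of a "classification verifier": a second model trained on
# explanation features to flag potentially misclassified samples.
# Assumptions: the primary detector is a logistic regression, and its
# per-feature contributions (coef * x) stand in for real XAI output.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a malicious-activity dataset (label noise added
# so the primary classifier makes some mistakes to learn from).
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.05,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Primary classifier (e.g., a malicious-website or malware detector).
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def explanation_features(model, X):
    # Per-feature contribution to the decision function: a simple
    # explanation of each prediction, used here as verifier features.
    return X * model.coef_[0]

# Verifier: learns to predict whether the primary classifier erred,
# using the explanation features rather than the raw inputs.
wrong_tr = (clf.predict(X_tr) != y_tr).astype(int)
verifier = LogisticRegression(max_iter=1000).fit(
    explanation_features(clf, X_tr), wrong_tr)

# "Objection" score: estimated probability that each test result is wrong.
objection = verifier.predict_proba(explanation_features(clf, X_te))[:, 1]

# Analysts would review results in descending objection order, so
# misclassifications should concentrate near the top of the ranking.
order = np.argsort(-objection)
wrong_te = clf.predict(X_te) != y_te
print("misclassified among top-10% objections:",
      int(wrong_te[order[:len(order) // 10]].sum()), "of",
      int(wrong_te.sum()))
```

Ranking by objection score is what makes verification efficient: instead of inspecting XAI output for every result, an analyst reviews only the cases the verifier flags.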

Original language: English
Pages (from-to): 2065-2070
Number of pages: 6
Journal: IEEE International Conference on Communications
Volume: 2022-January
DOI
Publication status: Published - 2022
Event: 2022 IEEE International Conference on Communications, ICC 2022 - Seoul, Korea, Republic of
Duration: 16 May 2022 - 20 May 2022

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Electrical and Electronic Engineering
