TY - GEN
T1 - Faithful Post-hoc Explanation of Recommendation Using Optimally Selected Features
AU - Morisawa, Shun
AU - Yamana, Hayato
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Recommendation systems have improved the accuracy of recommendations through the use of complex algorithms; however, users struggle to understand why items are recommended and hence become anxious. Therefore, it is crucial to explain the reasons behind recommended items to provide transparency and improve user satisfaction. Recent studies have adopted local interpretable model-agnostic explanations (LIME) as an interpretation model by treating the recommendation model as a black box; this approach is called a post-hoc approach. In this chapter, we propose a new method based on LIME to improve the model fidelity, i.e., the recall of the interpretation model with respect to the recommendation model. Our idea is to select an optimal number of explainable features in the interpretation model instead of using the complete feature set, because the interpretation model becomes difficult to learn as the number of features increases. In addition, we propose a method to generate user-friendly explanations based on the features extracted by LIME. To the best of our knowledge, this study is the first to provide a post-hoc explanation with subjective experiments involving users to confirm the effectiveness of the method. The experimental evaluation shows that our method outperforms the state-of-the-art method, named LIME-RS, achieving 2.5%–2.7% higher model fidelity for the top-50 recommended items. Furthermore, subjective evaluations of the generated explanations conducted with 50 users demonstrate that the proposed method is statistically superior to the baselines in terms of transparency, trust, and satisfaction.
AB - Recommendation systems have improved the accuracy of recommendations through the use of complex algorithms; however, users struggle to understand why items are recommended and hence become anxious. Therefore, it is crucial to explain the reasons behind recommended items to provide transparency and improve user satisfaction. Recent studies have adopted local interpretable model-agnostic explanations (LIME) as an interpretation model by treating the recommendation model as a black box; this approach is called a post-hoc approach. In this chapter, we propose a new method based on LIME to improve the model fidelity, i.e., the recall of the interpretation model with respect to the recommendation model. Our idea is to select an optimal number of explainable features in the interpretation model instead of using the complete feature set, because the interpretation model becomes difficult to learn as the number of features increases. In addition, we propose a method to generate user-friendly explanations based on the features extracted by LIME. To the best of our knowledge, this study is the first to provide a post-hoc explanation with subjective experiments involving users to confirm the effectiveness of the method. The experimental evaluation shows that our method outperforms the state-of-the-art method, named LIME-RS, achieving 2.5%–2.7% higher model fidelity for the top-50 recommended items. Furthermore, subjective evaluations of the generated explanations conducted with 50 users demonstrate that the proposed method is statistically superior to the baselines in terms of transparency, trust, and satisfaction.
KW - Explainable recommendation
KW - Interpretability
KW - Recommender system
UR - http://www.scopus.com/inward/record.url?scp=85120676449&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85120676449&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-89385-9_10
DO - 10.1007/978-3-030-89385-9_10
M3 - Conference contribution
AN - SCOPUS:85120676449
SN - 9783030893842
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 159
EP - 173
BT - Engineering Artificially Intelligent Systems - A Systems Engineering Approach to Realizing Synergistic Capabilities
A2 - Lawless, William F.
A2 - Llinas, James
A2 - Sofge, Donald A.
A2 - Mittu, Ranjeev
PB - Springer Science and Business Media Deutschland GmbH
T2 - Association for the Advancement of Artificial Intelligence Spring Symposium, AAAI 2021
Y2 - 22 March 2021 through 24 March 2021
ER -