Faithful Post-hoc Explanation of Recommendation Using Optimally Selected Features

Shun Morisawa*, Hayato Yamana

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recommendation systems have improved the accuracy of recommendations through the use of complex algorithms; however, users struggle to understand why items are recommended, which can cause anxiety. It is therefore crucial to explain the reasons behind recommended items in order to provide transparency and improve user satisfaction. Recent studies have adopted local interpretable model-agnostic explanations (LIME) as an interpretation model, treating the recommendation model as a black box; this is known as a post-hoc approach. In this chapter, we propose a new method based on LIME to improve model fidelity, i.e., the recall of the interpretation model with respect to the recommendation model. Our idea is to select an optimal number of explainable features for the interpretation model, rather than using all features, because the interpretation model becomes harder to learn as the number of features increases. In addition, we propose a method to generate user-friendly explanations based on the features extracted by LIME. To the best of our knowledge, this study is the first to provide post-hoc explanations together with subjective user experiments that confirm the effectiveness of the method. The experimental evaluation shows that our method outperforms the state-of-the-art method LIME-RS, achieving 2.5%–2.7% higher model fidelity on the top-50 recommended items. Furthermore, subjective evaluations conducted with 50 users on the generated explanations demonstrate that the proposed method is statistically superior to the baselines in terms of transparency, trust, and satisfaction.
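The sketch below is not the authors' implementation; it only illustrates the general idea described in the abstract of limiting the number of features a LIME surrogate may use and checking how well the surrogate fits the black box at each size. It assumes the open-source lime package and scikit-learn, uses a RandomForestRegressor as a stand-in for the recommendation model, and uses the local surrogate's R^2 as a rough fidelity proxy; the chapter itself measures fidelity as recall of the top-N recommended items.

# Minimal sketch, assuming the `lime` package and scikit-learn are installed.
# A RandomForestRegressor stands in for the black-box recommender, and the
# surrogate's local R^2 (exp.score) is used as a rough fidelity proxy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.random((500, 20))                                   # 20 explainable features
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 500)

black_box = RandomForestRegressor(random_state=0).fit(X, y)  # stand-in "recommender"

explainer = LimeTabularExplainer(
    X,
    mode="regression",
    feature_names=[f"f{i}" for i in range(X.shape[1])],
)

instance = X[0]
for k in (2, 5, 10, 20):                                    # candidate feature counts
    exp = explainer.explain_instance(instance, black_box.predict, num_features=k)
    # exp.score is the R^2 of the local linear surrogate on LIME's perturbed samples
    print(f"num_features={k:2d}  local fit R^2={exp.score:.3f}")
    print("  top features:", exp.as_list()[:3])

In the chapter, the chosen feature count is evaluated against the recommender's top-N list; the sweep above merely shows that the surrogate's fit changes with the number of features it is allowed to use.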

Original language: English
Title of host publication: Engineering Artificially Intelligent Systems - A Systems Engineering Approach to Realizing Synergistic Capabilities
Editors: William F. Lawless, James Llinas, Donald A. Sofge, Ranjeev Mittu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 159-173
Number of pages: 15
ISBN (Print): 9783030893842
DOIs
Publication status: Published - 2021
Event: Association for the Advancement of Artificial Intelligence Spring Symposium, AAAI 2021 - Virtual, online
Duration: 2021 Mar 22 - 2021 Mar 24

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13000 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Association for the Advancement of Artificial Intelligence Spring Symposium, AAAI 2021
City: Virtual, online
Period: 21/3/22 - 21/3/24

Keywords

  • Explainable recommendation
  • Interpretability
  • Recommender system

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
