Impact of Explanation Technique and Representation on Users' Comprehension and Confidence in Explainable AI
Abstract
Local explainability, an important sub-field of eXplainable AI, focuses on describing the decisions of AI models for individual use cases by providing the underlying relationships between a model's inputs and outputs. While the machine learning community has made substantial progress in improving explanation accuracy and completeness, these explanations are rarely evaluated by the final users. In this paper, we evaluate the impact of various explanation and representation techniques on users' comprehension and confidence. Through a user study on two different domains, we assessed three commonly used local explanation techniques (feature-attribution, rule-based, and counterfactual) and explored how their visual representation (graphical or text-based) influences users' comprehension and trust. Our results show that the choice of explanation technique primarily affects user comprehension, whereas the graphical representation impacts user confidence.
CCS Concepts: • Human-centered computing → Empirical studies in HCI; • Computing methodologies → Artificial intelligence.