Conference paper · Year: 2023

SMACE: A New Method for the Interpretability of Composite Decision Systems

Abstract

Interpretability is a pressing issue for decision systems. Many post hoc methods have been proposed to explain the predictions of a single machine learning model. However, business processes and decision systems are rarely centered around a unique model. These systems combine multiple models that produce key predictions, and then apply decision rules to generate the final decision. To explain such decisions, we propose the Semi-Model-Agnostic Contextual Explainer (SMACE), a new interpretability method that combines a geometric approach for decision rules with existing solutions for machine learning models to generate an intuitive feature ranking tailored to the end user. We show that established model-agnostic approaches produce poor results on tabular data in this setting, in particular giving the same importance to several features, whereas SMACE can rank them in a meaningful way.
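To make the setting concrete, here is a minimal sketch (not the authors' code) of the kind of composite decision system the abstract describes: machine learning models produce intermediate predictions on tabular data, and hand-written decision rules combine them into a final decision. All names (churn_model, value_model, decide, the 0.5 and 5.0 thresholds) are hypothetical and chosen only for illustration.

```python
# Hypothetical composite decision system: ML predictions + business rules.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # tabular input features
churn = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic churn labels
value = 10 * X[:, 2] + 5 * X[:, 3]               # synthetic customer value

# Step 1: machine learning models produce the key predictions.
churn_model = LogisticRegression().fit(X, churn)
value_model = LinearRegression().fit(X, value)

def decide(x):
    """Step 2: decision rules turn model outputs into the final decision."""
    p_churn = churn_model.predict_proba(x.reshape(1, -1))[0, 1]
    est_value = value_model.predict(x.reshape(1, -1))[0]
    # Business rule: offer a retention discount to valuable customers
    # who are likely to churn.
    return (p_churn > 0.5) and (est_value > 5.0)

print(decide(X[0]))
```

Explaining the output of `decide` directly with a standard model-agnostic explainer is exactly the situation the abstract flags: the rule thresholds tend to flatten the attributions, assigning several features the same importance, whereas SMACE combines rule-level and model-level explanations to rank them.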

Dates and versions

hal-03527129, version 1 (15-01-2022)

Identifiers

Cite

Gianluigi Lopardo, Damien Garreau, Frédéric Precioso, Greger Ottosson. SMACE: A New Method for the Interpretability of Composite Decision Systems. ECML PKDD 2022 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Sep 2022, Grenoble, France. pp.325-339, ⟨10.1007/978-3-031-26387-3_20⟩. ⟨hal-03527129⟩