Paper 2023/1859
XorSHAP: Privacy-Preserving Explainable AI for Decision Tree Models
Abstract
Explainable AI (XAI) refers to developing AI systems and machine learning models in such a way that humans can understand, interpret, and trust their predictions, decisions, and outputs. A common approach to explainability is feature importance, that is, determining which input features have the most significant impact on the model's prediction. Two major techniques for computing feature importance are LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). While very general, these methods are computationally expensive even in plaintext. Applying them in the privacy-preserving setting, where part or all of the input data is private, is therefore a major computational challenge.
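For concreteness, recall the standard definition underlying SHAP (a known fact, not quoted from this paper): the SHAP value of feature $i$ is the Shapley value of a cooperative game over the feature set $N$, where the value $v(S)$ of a coalition $S \subseteq N$ is the model's expected prediction given only the features in $S$:

$$\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$

The sum ranges over all $2^{|N|-1}$ coalitions, which is what makes exact computation expensive even in plaintext.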
In this paper, we present XorSHAP, a privacy-preserving algorithm for computing SHAP values for decision tree ensemble models in the Secure Multiparty Computation setting.
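As a point of reference, here is a minimal plaintext sketch of SHAP feature importance for a gradient boosting model, using the public Python `shap` library's `TreeExplainer` (the polynomial-time TreeSHAP algorithm for tree ensembles). It illustrates the computation the paper targets, not the authors' privacy-preserving XorSHAP protocol; the model and synthetic data are placeholders.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Train a small gradient boosting model on synthetic (public) data.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor(
    n_estimators=50, max_depth=3, random_state=0
).fit(X, y)

# TreeExplainer implements TreeSHAP, the exact polynomial-time
# SHAP algorithm for decision tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global feature importance: mean |SHAP value| per feature.
print(np.abs(shap_values).mean(axis=0))
```

In the privacy-preserving setting addressed by the paper, the model parameters and/or the input rows `X` would be secret-shared among multiple parties, and every step of this computation would have to run under an SMC protocol.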
Metadata
- Available format(s)
- PDF
- Category
- Applications
- Publication info
- Preprint.
- Keywords
- Explainable AI, Model Explainability, Gradient Boosting Decision Trees, SHAP values, Secure Multiparty Computation
- Contact author(s)
- dimitar @ inpher io, marius @ inpher io
- History
- 2023-12-06: approved
- 2023-12-04: received
- Short URL
- https://ia.cr/2023/1859
- License
- CC BY
BibTeX
@misc{cryptoeprint:2023/1859,
      author = {Dimitar Jetchev and Marius Vuille},
      title = {{XorSHAP}: Privacy-Preserving Explainable {AI} for Decision Tree Models},
      howpublished = {Cryptology {ePrint} Archive, Paper 2023/1859},
      year = {2023},
      url = {https://eprint.iacr.org/2023/1859}
}