A Novel Metric for Evaluating the Stability of XAI Explanations

Volume 9, Issue 1, Page No 133-142, 2024

Authors: Falko Gawantka a),1, Franz Just 1, Marina Savelyeva 1, Markus Wappler 2, Jörg Lässig 1,2

1University of Applied Sciences Zittau/Görlitz, Faculty of Electrical Engineering and Computer Science, Görlitz, 02826, Germany
2Fraunhofer IOSB, Advanced System Technology (AST), Görlitz, 02826, Germany

a) Author to whom correspondence should be addressed. E-mail: falko.gawantka@hszg.de

Adv. Sci. Technol. Eng. Syst. J. 9(1), 133-142 (2024); DOI: 10.25046/aj090113

Keywords: eXplainable AI, Evaluation, Stability of explanations

Automated systems increasingly influence our lives, for example in AI-driven candidate screening for job or loan applications. Such systems often rely on eXplainable Artificial Intelligence (XAI) algorithms to meet legal requirements and to provide understandable insights into critical decision processes. A significant challenge arises, however, when XAI methods are non-deterministic and thus produce different explanations for identical inputs (i.e., the same data instance and the same prediction model). In such cases, the stability of the explanations becomes paramount. In this study, we introduce two intuitive methods for assessing the stability of XAI algorithms. We develop a taxonomy to categorize the evaluation criteria and extend these ideas into an objective metric that classifies XAI algorithms by the stability of their explanations.

Received: 15 November 2023, Revised: 16 January 2024, Accepted: 17 January 2024, Published Online: 21 February 2024
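
To make the described instability concrete: perturbation-based explainers such as LIME fit a local surrogate model on randomly drawn samples around the instance, so two explanation runs on the identical instance and model can return different feature weights. The following minimal Python sketch (assuming the scikit-learn and lime packages; the mean pairwise cosine similarity used here is a generic illustrative stability score, not the metric proposed in this paper) explains one instance repeatedly and compares the resulting weight vectors.

import numpy as np
from itertools import combinations
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Fixed data and model: the prediction itself is deterministic.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME's perturbation sampling is deliberately left unseeded here,
# so repeated calls may yield different local surrogate models.
explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), mode="classification"
)

def importance_vector(instance, n_features):
    # One explanation run, returned as a dense feature-weight vector.
    exp = explainer.explain_instance(
        instance, model.predict_proba, num_features=n_features
    )
    vec = np.zeros(n_features)
    for idx, weight in exp.as_map()[1]:  # weights for label 1
        vec[idx] = weight
    return vec

# Explain the identical input five times.
runs = [importance_vector(X[0], X.shape[1]) for _ in range(5)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative stability score: mean pairwise cosine similarity,
# where 1.0 would indicate perfectly reproducible explanations.
score = np.mean([cosine(a, b) for a, b in combinations(runs, 2)])
print(f"mean pairwise cosine similarity over 5 runs: {score:.3f}")

A deterministic explainer would score 1.0 on such a measure; scores noticeably below 1.0 exhibit the reproducibility problem that motivates a dedicated stability metric.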
