Interactive Generation of Counterfactual Explanations in AI-Supported Decision Making
Requirements
- Required: Successful participation in the course "Human-Computer Interaction"
- Desirable: Successful participation in the seminar "Interactive Intelligent Systems" and the lecture "Wissenschaftliches Arbeiten in der Informatik" (Scientific Work in Computer Science)
- Programming skills (Python; front-end technologies such as React, HTML/CSS/JavaScript, or Streamlit)
- Understanding of machine learning algorithms and explainable AI (especially counterfactual explanations); familiarity with Gaussian processes and Bayesian optimization is helpful
- Interest in conducting user studies
Contents
This thesis explores how counterfactual explanations can be generated interactively in AI-supported decision systems. Instead of passively receiving explanations, users will be able to interactively explore alternative scenarios and refine explanations through feedback. A prototype will be developed in a simulated healthcare decision-making context and evaluated in a user study. The goal is to assess whether an interactive approach to counterfactuals improves the usefulness of AI explanations compared to static approaches.
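To make the idea concrete, the sketch below shows one simple way to generate a counterfactual for a tabular classifier: greedily nudging individual features until the model's prediction flips. This is a minimal illustration, not the thesis method; the classifier, the hypothetical features (age, blood pressure, BMI), and the greedy search strategy are all assumptions, and an actual prototype might instead rely on a dedicated counterfactual library or on the Bayesian optimization approaches listed under Resources.

```python
# Minimal, illustrative counterfactual search for a binary classifier.
# All feature names and ranges are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "healthcare" features: age, blood pressure, BMI.
X = rng.normal(loc=[50, 120, 25], scale=[10, 15, 4], size=(500, 3))
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(0, 0.5, 500) > 6.5).astype(int)

clf = LogisticRegression().fit(X, y)

def counterfactual(x, target, step=0.5, max_iter=200):
    """Greedy search: nudge one feature at a time toward the target
    class, keeping the nudge that most increases its probability."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        candidates = []
        for i in range(len(x_cf)):
            for delta in (-step, step):
                cand = x_cf.copy()
                cand[i] += delta
                p = clf.predict_proba(cand.reshape(1, -1))[0, target]
                candidates.append((p, cand))
        x_cf = max(candidates, key=lambda c: c[0])[1]
    return None  # no counterfactual found within the budget

x0 = X[0]
pred = clf.predict(x0.reshape(1, -1))[0]
cf = counterfactual(x0, target=1 - pred)
if cf is not None:
    print("original prediction:", pred)
    print("counterfactual:", np.round(cf, 2))
    print("required change:", np.round(cf - x0, 2))
```

In an interactive prototype, the user could, for instance, lock features they cannot change or adjust the step size via Streamlit controls, with the search re-running after each refinement.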
Resources
- Greta Warren, Ruth M. J. Byrne, and Mark T. Keane. 2024. Categorical and Continuous Features in Counterfactual Explanations of AI Systems. ACM Trans. Interact. Intell. Syst. 14, 4, Article 28 (December 2024), 37 pages. https://doi.org/10.1145/3673907
- John H. Williamson, Antti Oulasvirta, Per Ola Kristensson, and Nikola Banovic (Eds.). 2022. Bayesian Methods for Interaction and Design. Cambridge University Press. https://www.cambridge.org/core/books/bayesian-methods-for-interaction-and-design/721123C200F67FD94DA8DDFD561162A8
- Javier González, Zhenwen Dai, Andreas Damianou, and Neil D. Lawrence. 2017. Preferential Bayesian optimization. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (ICML'17). JMLR.org, 1282–1291.
- Kenny, E. M., & Keane, M. T. (2021). Explaining Deep Learning using examples: Optimal feature weighting methods for twin systems using post-hoc, explanation-by-example in XAI. Knowledge-Based Systems, 233, 107530. [https://doi.org/10.1016/j.knosys.2021.107530](https://doi.org/10.1016/j.knosys.2021.107530)
- Kenny, E. M., Ford, C., Quinn, M., & Keane, M. T. (2021). Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Artificial Intelligence, 294, 103459.
- Ford, C., & Keane, M. T. (2022, August). Explaining classifications to non-experts: An xai user study of post-hoc explanations for a classifier when people lack expertise. In International Conference on Pattern Recognition (pp. 246-260). Cham: Springer Nature Switzerland.
