Bachelor's Thesis by Viktoria Andres

"Counterfactual Explanations for Time Series Classification"

Summary: Explanations are essential components in the promising fields of AI and Machine Learning. Deep learning approaches are on the rise due to their superior accuracy when trained on huge amounts of data, but because of their black-box nature their predictions are also extremely hard to comprehend, retrace or trust. Good explanation techniques can help to understand why a system produces a certain prediction and thereby increase trust in the model. [1][2][5]
Studies have shown that counterfactual explanations in particular tend to be more informative and psychologically effective than other methods. [3][4]
This work focuses on a novel instance-based, model-agnostic technique called "Native Guide", which generates counterfactual explanations for time series classification by using nearest-neighbour samples from the real data distribution that belong to a different class as its foundation. [3][6]
By implementing the given algorithms, strengthening aspects such as plausibility and diversity through the selection of different neighbouring samples, and comparing different models with each other, the Native Guide method is investigated and extended. Finally, a user study is planned to evaluate the counterfactual explanations generated by Native Guide, as sketched below.
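
The retrieve-and-adapt idea behind this family of methods can be illustrated with a minimal Python sketch (this is not the authors' implementation; names such as clf, X_train, counterfactual_sketch and the step parameter are placeholders chosen here for illustration). It assumes a fitted classifier clf with a scikit-learn-style predict method and univariate training series X_train of shape (n_samples, n_timesteps) with labels y_train: the query's nearest unlike neighbour (NUN) is retrieved from the training data and the query is interpolated towards it until the predicted class changes. The actual Native Guide method additionally uses feature-weight information such as class activation maps to keep the modification localised and the counterfactual sparse [3].

```python
import numpy as np

def nearest_unlike_neighbour(x, X_train, y_train, query_label):
    """Return the training series closest to x (Euclidean distance)
    whose label differs from the query's predicted label."""
    unlike = y_train != query_label
    X_unlike = X_train[unlike]
    dists = np.linalg.norm(X_unlike - x, axis=1)
    return X_unlike[np.argmin(dists)]

def counterfactual_sketch(x, clf, X_train, y_train, step=0.05):
    """Interpolate from the query x towards its nearest unlike neighbour
    and return the first mixture for which the classifier's prediction flips."""
    query_label = clf.predict(x.reshape(1, -1))[0]
    nun = nearest_unlike_neighbour(x, X_train, y_train, query_label)
    for alpha in np.arange(step, 1.0 + step, step):
        candidate = (1.0 - alpha) * x + alpha * nun
        if clf.predict(candidate.reshape(1, -1))[0] != query_label:
            return candidate
    # No mixture flipped the prediction; return the NUN itself,
    # a real instance labelled with the other class.
    return nun
```

A call such as counterfactual_sketch(x_query, clf, X_train, y_train) would then return a class-changing variant of x_query that stays close to the real data distribution, since it is anchored to an actual training instance rather than to an arbitrary synthetic perturbation.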


1. P. Linardatos et al., "Explainable AI: A Review of Machine Learning Interpretability Methods", in Entropy, 2021. Available: https://dx.doi.org/10.3390/e23010018

2. A. Adadi and M. Berrada, "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)", in IEEE Access, 2018. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8466590

3. E. Delaney et al., "Instance-based Counterfactual Explanations for Time Series Classification", 2021. Available: https://arxiv.org/abs/2009.13211v2

4. R. Byrne, "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning", in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2019. Available: https://www.ijcai.org/proceedings/2019/876

5. T. Rojat et al., "Explainable Artificial Intelligence (XAI) on Time Series Data: A Survey", 2021. Available: https://arxiv.org/pdf/2104.00950.pdf

6. E. Kenny, E. Delaney et al., "Post-hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective", in ICPR 2021. Available: https://doi.org/10.1007/978-3-030-68796-0_2


Advisor: Prof. Dr. Eirini Ntoutsi