Short paper on Human-Centered Explainable AI accepted at Mensch und Computer 2023

News from Jul 25, 2023

We are pleased to announce that our short paper "Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank" [1] (Lars Sipos, Ulrike Schäfer, Katrin Glinka, and Claudia Müller-Birn) has been accepted at the Mensch und Computer 2023 conference.

Abstract

Explainable Artificial Intelligence (XAI) is concerned with making the decisions of AI systems interpretable to humans. Explanations are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts, i.e., art historians. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use. We hope that this extension will enable HCI researchers and practitioners to use the XAIQB in practice and may provide a basis for future studies on the identification of explanation needs in different contexts.


[1] L. Sipos, U. Schäfer, K. Glinka, and C. Müller-Birn, "Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank," Mensch und Computer 2023. doi: 10.1145/3603555.3608551. arXiv pre-print: https://doi.org/10.48550/arXiv.2307.09369