
Exploring Levels of Automation in Mixed-Initiative User Interfaces for Interpretation Support of Machine Learning Results: A Pair Analytics Approach

Degree
Master of Science (M.Sc.)


Context/Problem

Machine Learning (ML) systems are part of our everyday life, but due to their opaque computations, interpreting their results remains a major challenge. Specifically, there is a need to develop strategies and techniques that support people without formal ML education in interpreting such results. However, interpretation is a subjective and context-specific process and therefore hard to formalize. Since interpretability is a domain-specific notion, there cannot be an all-purpose definition and, depending on the context, different types of explanations might be useful (Carvalho et al., 2019). One approach to account for context-specific aspects of interpretability is the use of mixed-initiative interfaces that couple automated services with direct manual manipulation (Allen et al., 1999; Horvitz, 1999). Current research already provides valuable insights into how to design explanations for non-technical users (Carvalho et al., 2019; Ribera & Lapedriza, 2019; Liao et al., 2020).

Designing explanations provided in a mixed-initiative manner requires knowledge about what information actually helps non-technical people in interpreting ML results. Capturing how explanations are being used remains a challenge, since most methodologies currently used in user studies are quantitative approaches such as online experiments, which by their nature are limited in assessing how people actually arrive at interpretations. In the domain of visual analytics, an alternative method called pair analytics has been suggested to counter this challenge (Arias-Hernandez et al., 2011). It pairs two experts with different backgrounds (technical and domain expertise) who solve tasks collaboratively. Compared to qualitative methods involving single participants, such as the think-aloud method, pair analytics has the potential to extend insights from observational studies towards the explicit reasoning processes discussed among the experts. Collaborative task solving leads to discussions between the participants that may reveal information about their reasoning processes, and about how these differ between or depend on technical and domain expertise.

Objectives/Procedure

This MSc thesis project extends existing insights from research on explanations (Carvalho et al., 2019; Ribera & Lapedriza, 2019; Liao et al., 2020) to explore designs for mixed-initiative interfaces that support the interpretation of ML results. A special focus is placed on different levels of automation. The candidate reviews the literature to develop an overview of suitable designs for explanations provided by mixed-initiative interfaces.

In addition, the candidate reviews the material of a pair analytics study conducted by the HCC research group (transcripts, recordings, etc.). The candidate's observation focuses on when and how the ML expert supports the domain expert, i.e., which verbal or bodily cues the ML expert employs. The analysis is based on verbal protocol analysis (Miller & Brewer, 2003). The results of this observation help the candidate narrow down the design space identified in the literature on explanations provided by mixed-initiative interfaces.

Based on the developed design space, the candidate proposes designs for specific interfaces that potentially support the different levels of automation, tailored to the use case of the aforementioned pair analytics study. The candidate designs, implements, and evaluates interactive intelligent user interfaces that potentially take on the role of the ML expert. To evaluate the interfaces, the candidate develops a modified version of the pair analytics method in which the semi-automated interface substitutes for the ML expert (i.e., the interface becomes the ML expert). In a user study (preferably in person) applying this method, the candidate evaluates the different versions of the interface with regard to interpretability. From the observations made in the study, the candidate derives guidelines for the design of mixed-initiative interfaces that support the interpretation of ML results.

References

Allen, J. E., Guinn, C. I., & Horvitz, E. (1999). Mixed-initiative interaction. IEEE Intelligent Systems and their Applications, 14(5), 14–23.

Arias-Hernandez, R., Kaastra, L. T., Green, T. M., & Fisher, B. (2011). Pair Analytics: Capturing Reasoning Processes in Collaborative Visual Analytics. In Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS 2011) (pp. 1–10). IEEE. https://doi.org/10.1109/HICSS.2011.339

Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8(8), 832. https://doi.org/10.3390/electronics8080832

Horvitz, E. (1999). Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99) (pp. 159–166). ACM Press. https://doi.org/10.1145/302979.303030

Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM. https://doi.org/10.1145/3313831.3376590

Miller, R., & Brewer, J. (2003). The A-Z of Social Research. London: SAGE Publications. https://doi.org/10.4135/9780857020024

Ribera, M., & Lapedriza, A. (2019). Can we do better explanations? A proposal of user-centered explainable AI. In Joint Proceedings of the ACM IUI 2019 Workshops.