Improving And Implementing A Playful Learning Experience To Educate About AI Capabilities Based On User Feedback
Requirements
- Required: Successful participation in the course "Human-Computer Interaction"
- Desirable: Successful participation in the seminar on "Interactive Intelligent Systems" and the lecture on "Wissenschaftliches Arbeiten in der Informatik" (scientific working in computer science)
- Programming knowledge (front-end, such as React, HTML/CSS/JavaScript)
- AI / Computer Vision (e.g., TensorFlow, handling pre-trained models)
Contents
This thesis builds upon a playful, educational interaction that we presented at the 2025 LNDW. In this interaction, a user plays a game in which they have to decide whether an image shows a husky or a wolf. At first, the AI appears to recommend the correct choice every time; for some images, however, its recommendation is clearly off. In the current prototype, the player first experiences this conflict and afterwards learns what could have led to these errors. The goal is to learn about an AI's capabilities in a playful way.
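To make the setup concrete, below is a minimal sketch of how such a recommendation could be produced in the browser. It assumes a TensorFlow.js front end with the pre-trained MobileNet model, whose ImageNet labels include several husky and wolf classes; the actual prototype may use a different model or stack.

```ts
import '@tensorflow/tfjs';                            // registers a TF.js backend
import * as mobilenet from '@tensorflow-models/mobilenet';

// Sketch only: map MobileNet's top ImageNet label onto the game's binary choice.
// The prototype's actual model, labels, and thresholds are assumptions here.
export async function recommendLabel(
  img: HTMLImageElement,
): Promise<{ label: 'husky' | 'wolf'; confidence: number }> {
  const model = await mobilenet.load();               // pre-trained weights, no training needed
  const predictions = await model.classify(img, 3);   // [{ className, probability }, ...]
  const top = predictions[0];
  // ImageNet contains classes such as "Siberian husky" and "timber wolf".
  const label = /husky|malamute|eskimo dog/i.test(top.className) ? 'husky' : 'wolf';
  return { label, confidence: top.probability };
}
```

In the game, this recommendation would then be shown next to the image so the player can agree or disagree with it.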
However, it is currently unclear whether the interaction actually leads to a better understanding of the AI's capabilities in such cases, or of AI capabilities in general.
If you choose to work on this thesis, your objective would be to improve the prototype based on users' feedback and on whether they understand its message. To do so, you would first gather feedback on the current prototype, derive improvement ideas from it, implement them, and conclude with a second round of feedback.
Procedure
To reach this goal, the following steps can be taken:
- Get to know the current prototype,
- Read literature to understand human-AI collaboration with a focus on communicating how the AI works and what capabilities it has,
- Conduct interviews with at least 5 individuals and analyse them using a content analysis with MAXQDA,
- Implement improvements in the prototype (see the sketch below),
- Conduct interviews with at least 5 individuals to evaluate the updated prototype.
As this is an explorative thesis, your own ideas are welcome!
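For the implementation step, the following sketch shows one way the round flow (guess, result, explanation) could be structured as a React component. Component and prop names are illustrative rather than the prototype's actual code, and the explanation text is only a placeholder for the kind of message your interviews would help refine.

```tsx
import React, { useState } from 'react';

type Animal = 'husky' | 'wolf';
type Phase = 'guess' | 'result' | 'explanation';

// Illustrative round component: the player guesses while the AI's recommendation
// is visible, sees whether the AI was right, and only then reads an explanation
// of what could have led the model astray. Props and copy are placeholders.
export function HuskyWolfRound(props: {
  imageUrl: string;
  aiRecommendation: Animal;
  correctLabel: Animal;
  explanation: string;                 // e.g., wording derived from interview feedback
}) {
  const [phase, setPhase] = useState<Phase>('guess');
  const [guess, setGuess] = useState<Animal | null>(null);

  if (phase === 'guess') {
    return (
      <div>
        <img src={props.imageUrl} alt="Husky or wolf?" />
        <p>The AI recommends: {props.aiRecommendation}</p>
        {(['husky', 'wolf'] as Animal[]).map((choice) => (
          <button key={choice} onClick={() => { setGuess(choice); setPhase('result'); }}>
            {choice}
          </button>
        ))}
      </div>
    );
  }

  if (phase === 'result') {
    const aiWasWrong = props.aiRecommendation !== props.correctLabel;
    return (
      <div>
        <p>Your answer: {guess}. Correct answer: {props.correctLabel}.</p>
        {aiWasWrong && <p>The AI's recommendation was wrong for this image.</p>}
        <button onClick={() => setPhase('explanation')}>Why did that happen?</button>
      </div>
    );
  }

  return <p>{props.explanation}</p>;
}
```

Keeping the explanation phase as its own state makes it easy to swap in improved explanation content after each feedback round without touching the game logic.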
References
- Kim, S. S., Watkins, E. A., Russakovsky, O., Fong, R., & Monroy-Hernández, A. (2023). "Help me help the AI": Understanding how explainability can support human-AI interaction. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems_ (pp. 1–17).
- Liu, H., Lai, V., & Tan, C. (2021). Understanding the effect of out-of-distribution examples and interactive explanations on human-AI decision making. _Proceedings of the ACM on Human-Computer Interaction_, _5_(CSCW2), 1–45.
- Gennari, R., Melonio, A., Pellegrino, M. A., & D'Angelo, M. (2023). How to playfully teach AI to young learners: A systematic literature review. In _Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter (CHItaly '23)_, Article 1, 1–9. https://doi.org/10.1145/3605390.3605393
- Samuel, A., Cranefield, J., & Chiu, Y. T. (2023). AI to human: "Help me to help you collaborate more effectively" - A literature review from a human capability perspective.
