
Improving Human-AI Collaboration

Principal Investigator:

Due to the increasing use of AI in high-stakes situations, it is becoming increasingly important to tackle emerging problems in human-AI collaboration, such as overreliance and misuse. Since current Explainable AI (XAI) methods do not address these challenges, I design and evaluate explanations that go beyond technical XAI methods by considering human, task, and situational factors.

Ultimately, my contribution will be more robust explanations adapted to specific scenarios (e.g., healthcare) and grounded in mixed-methods user testing.

Currently, I am focusing on human-centered explanations, such as uncertainty representation and guidance, that augment user capabilities when working with AI.
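
To give a concrete sense of what uncertainty representation can look like in practice, the following is a minimal, illustrative Python sketch (not the project's actual implementation): a classifier's probabilities are turned into a user-facing message that surfaces the model's uncertainty instead of a bare label, so that low-confidence predictions invite scrutiny rather than overreliance. The function names, the label set, and the caution threshold are hypothetical choices for illustration only.

```python
# Illustrative sketch: surfacing a classifier's uncertainty to the user.
# Names and threshold are hypothetical, not a validated design.
import numpy as np


def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy of a probability vector; higher means more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(probs * np.log(probs)))


def explain_with_uncertainty(probs: np.ndarray, labels: list[str],
                             caution_threshold: float = 0.8) -> str:
    """Turn model output into a message that includes uncertainty.
    `caution_threshold` is an illustrative cutoff on the normalised
    entropy, not an empirically validated value."""
    entropy = predictive_entropy(probs) / np.log(len(labels))  # scale to [0, 1]
    top = int(np.argmax(probs))
    if entropy > caution_threshold:
        return (f"The AI is unsure (uncertainty {entropy:.2f}). "
                f"Its best guess is '{labels[top]}' ({probs[top]:.0%}), "
                "but please verify it against the source data.")
    return (f"AI suggestion: '{labels[top]}' "
            f"({probs[top]:.0%}, uncertainty {entropy:.2f}).")


# Example: a fairly uncertain three-class prediction triggers a caution message.
print(explain_with_uncertainty(np.array([0.45, 0.40, 0.15]),
                               ["benign", "malignant", "inconclusive"]))
```

The design point of the sketch is that the explanation changes with the situation: confident predictions are presented as suggestions, while uncertain ones explicitly prompt the user to verify, which is one way to guide users and counter overreliance.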