Keeping Conversations Private: Exploring and Designing Privacy-conforming Interactions with LLMs
Requirements
- Required: Completion of the lectures on "Human-Computer Interaction" or "Data Visualization"
- Desirable: Successful participation in the seminar on "Interactive Intelligent Systems" and the lecture on "Wissenschaftliches Arbeiten in der Informatik" (Scientific Work in Computer Science)
Contents
Healthcare systems are increasingly confronted with rising patient demand and limited clinical resources. Staff in many settings face high workloads, leaving little capacity to provide continuous support to patients outside of direct treatment.
Recently, AI-driven conversational companions have been introduced in various healthcare contexts, designed to provide information, emotional support, or guidance to patients during times when clinical staff are unavailable. They are applied in diverse domains, ranging from mental health and chronic disease management to pre-clinical information gathering.
The use of chatbots based on large language models (LLMs) in these domains poses potential risks to patient privacy. There is an urgent need for privacy-conforming interaction concepts and explanation mechanisms for language models to address the privacy concerns of patients.
The thesis begins with a scoping review of privacy-preserving interaction concepts for conversational AI, focusing on the privacy metrics used, the privacy-utility tradeoff, and reported user trust. The primary objective is to map the existing landscape of AI systems and tools that prioritize user privacy, identify recurring design patterns, and derive implications for future research and practice.
Building on the review, the second goal is to design and implement a first prototype of a privacy-preserving tool for conversational AI that enables secure and private interactions between users and AI systems. The prototype should be evaluated in a user study that either compares the identified interaction concepts or validates which privacy metrics are most meaningful to users (e.g., data minimization, transparency, control over personal data).
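To make the data-minimization metric concrete, the minimal sketch below illustrates one possible client-side redaction step, loosely in the spirit of user-led data minimization as in Rescriber (Zhou et al., 2025): identifiers are masked locally before a prompt leaves the user's device, and the mapping needed to re-personalize the model's answer stays on the client. The regex patterns, placeholder format, and `minimize` helper are illustrative assumptions, not part of any cited system.

```python
import re

# Illustrative patterns only (an assumption for this sketch): a real
# prototype would detect identifiers with a local NER model or smaller
# LLM, since regexes miss free-text names such as "Dr. Meyer" below.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
    "DATE": re.compile(r"\b\d{1,2}[./]\d{1,2}[./]\d{2,4}\b"),
}

def minimize(text: str) -> tuple[str, dict[str, str]]:
    """Mask detected identifiers with placeholders before the prompt is
    sent to a remote LLM; return the redacted text plus a local mapping
    so the reply can be re-personalized on the user's device."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        # dict.fromkeys dedupes repeated matches while preserving order
        for i, match in enumerate(dict.fromkeys(pattern.findall(text))):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

if __name__ == "__main__":
    prompt = ("My appointment with Dr. Meyer on 03.11.2025 was cancelled. "
              "Please reschedule and confirm to jane.doe@example.org")
    redacted, mapping = minimize(prompt)
    print(redacted)  # only this redacted version would leave the device
    print(mapping)   # stays local, e.g. {'[DATE_0]': '03.11.2025', ...}
```

A user study could then vary how visible and controllable such masking is to participants, connecting the prototype directly to the transparency and control metrics named above.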
The thesis combines literature analysis with human-centered evaluation, offering room for creativity and exploration. Students are encouraged to bring in their own ideas about which aspects to focus on in the evaluation study.
References
Zhang, S., Yi, X., Xing, H., Ye, L., Hu, Y., & Li, H. (2025). Adanonymizer: Interactively Navigating and Balancing the Duality of Privacy and Output Performance in Human-LLM Interaction (No. arXiv:2410.15044). arXiv. https://doi.org/10.48550/arXiv.2410.15044
Zhou, J., Xu, E., Wu, Y., & Li, T. (2025). Rescriber: Smaller-LLM-Powered User-Led Data Minimization for LLM-Based Chatbots. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–28. https://doi.org/10.1145/3706598.3713701
Ngong, I. C., Kadhe, S., Wang, H., Murugesan, K., Weisz, J. D., Dhurandhar, A., & Ramamurthy, K. N. (2024, December 4). Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents. Workshop on Socially Responsible Language Modelling Research.
Saglam, R. B., Nurse, J. R. C., & Hodges, D. (2022). An Investigation Into the Sensitivity of Personal Information and Implications for Disclosure: A UK Perspective. Frontiers in Computer Science, 4, 908245. https://doi.org/10.3389/fcomp.2022.908245
