
2022 - A Review of the Year by Claudia Müller-Birn

Happy Holidays!

News from Dec 13, 2022

The year is gradually drawing to a close, so I would like to take the opportunity to reflect on what we have achieved this year.

2022 was, again, a very eventful year. We had to face, and still face, new societal challenges that also affected our research. We have therefore focused even more strongly on bringing value sensitive design into practical tool development and education. We believe that we, as computer scientists, bear a special responsibility, since we shape our future environments. In this post, I want to reflect on this year from a research, educational, and team perspective.

HCC Research

In 2022, we were able to build on what we had established the year before in many areas. A recurring theme in our research is the question of how we can support reflection on values in the development of technologies. What values are considered in the development of technologies? Who defines those values? Which values are chosen when values conflict? I want to highlight three selected research articles from the areas of eXplainable AI (XAI) and privacy in which we use this perspective to find new approaches to technology development.

Researchers have proposed many explanation methods in the context of XAI (e.g., LIME or SHAP). These methods can produce different explanations, which may differ in their content and presentation (e.g., textual or visual). Such explanations are integrated with the actual machine learning result in a user interface, and the interpretative burden is on the stakeholders to make sense of the explanations in their context of use. In research that I carried out together with Jesse Josua Benjamin, Christoph Kinkeldey, and two of my students [1], we focused on explanation strategies, which represent a concrete interpretation in the interaction between explanation and stakeholder. To better understand how people make sense of such explanations, we conducted a co-creation workshop using four different explanation methods. Our research yielded methodological and design implications: in XAI, we need additional co-creation workshops to study human interpretation in context, since context changes the interpretation, and we need more research on combinations of explanation methods to support often diverging explanation strategies.
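
To make the starting point more concrete, here is a minimal sketch of how such a feature-attribution explanation can be generated with LIME for a scikit-learn classifier. The dataset, model, and parameters are illustrative assumptions for this example only and are not the setup used in the workshop study [1].

```python
# Minimal, illustrative sketch: one LIME explanation for one prediction.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# LIME perturbs the instance, fits a local surrogate model, and returns
# feature/weight pairs for this single prediction.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```

Such feature/weight pairs are exactly the kind of raw explanation that still leaves the interpretative work, relating attributions to the context of use, to the stakeholders who receive them.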

When we talk about XAI, we should emphasize that data is now being used in many areas of society, for example, to provide better healthcare. However, data can contain sensitive information about individuals. Privacy-preserving technologies, such as differential privacy (DP), can be employed to protect the privacy of individuals. In the research I conducted with Daniel Franzen and Peter Sörries from the HCC research group, as well as with colleagues from the Technische Universität Berlin, we explored to what extent we can communicate the privacy guarantee provided by DP to lay users through specifically designed notifications, i.e., explanations [2]. For this, we adopted risk communication formats from the medical domain in conjunction with a model for the privacy guarantees of DP to create quantitative privacy risk notifications. Our results suggest, among other things, that our proposed notifications can communicate objective information about DP as well as the qualitative notifications that are currently in common use. I am convinced that DP is a very important privacy-preserving technology for ensuring the value of privacy. However, properly communicating how it works is a prerequisite for its application at a wider scale.
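
As a rough illustration of the underlying idea (this is not the notification design evaluated in [2], and the numbers are invented), one common Bayesian reading of the DP guarantee bounds how much an adversary's belief about an individual's participation can grow: the prior odds can increase by at most a factor of e^ε. The following sketch turns that bound into a quantitative risk statement.

```python
# Illustrative sketch only: translating a DP parameter epsilon into a
# worst-case bound on an adversary's belief, under the assumption that
# the prior odds can grow by at most e^epsilon.
import math

def bounded_posterior(prior: float, epsilon: float) -> float:
    """Worst-case posterior belief that an individual's data was used."""
    prior_odds = prior / (1.0 - prior)
    worst_odds = prior_odds * math.exp(epsilon)  # DP bounds the odds ratio
    return worst_odds / (1.0 + worst_odds)

baseline = 0.01  # assumed baseline risk: 1 in 100
for eps in (0.1, 0.5, 1.0):
    risk = bounded_posterior(baseline, eps)
    print(f"epsilon = {eps}: risk rises from {baseline:.0%} to at most {risk:.1%}")
```

Turning such numbers into notifications that lay users actually understand is precisely where risk communication formats from the medical domain come in.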

As already mentioned, individual data is increasingly being collected, especially in the healthcare domain. One prominent example is the electronic patient record (ePA) in Germany, whose current technical realization raises major privacy concerns. We are convinced that the digitization of the healthcare system should benefit people, such as the insured and patients. Thus, David Leimstädtner, Peter Sörries, and I have developed a participatory workshop method [3] that integrates approaches from value sensitive design and reflective design to explore patients' values and translate them into hypothetical, ideal design solutions for data donation contexts. The data gathered in the workshop is used to derive practicable design requirements for patient-oriented data donation technologies. We have already held several promising workshops with patients, and we are keen to apply this method in application areas beyond healthcare.

Value Map from the participatory workshop [3]. Image Credit: Peter Sörries

HCC Teaching

We have also integrated our value sensitive perspective into our teaching. For example, in the area of Data Science, the experiences of the last few years have shown that social nuances are rarely captured in data and that ethical considerations are rarely taken into account during technology-driven AI development. Thus, together with Lars Sipos, I taught the course »Human-Centered Data Science« for computer science students for the second year. In addition, based on an interview study we conducted with students from the previous year, we developed a conceptual framework for fostering critical reflective practice in Data Science education [4] and evaluated this framework with this year's students. The analysis of the results is still ongoing, and we hope to integrate the insights into the syllabus of the next summer term.

HCC Data Lab & Event Contributions

After conceiving and establishing the digital formats of the HCC Data Lab, led by Katrin Glinka, over the last year, we strengthened our science communication and outreach efforts in 2022 by documenting and publishing the HCC's activities in the Schaufenster, with short reports about events and public talks as well as other insights into our work.

After being limited to virtual events for two years, we were excited to start 2022 with the Coding IxD Design Fair in February at the Weizenbaum Institute. The students devised »neo-analog artifacts« that materialize novel interaction concepts to enable sovereign action and informed decision-making; the artifacts were showcased in an interactive exhibition.

In April, we participated in this year's Girls' Day with a virtual workshop that offered a look behind the scenes at virtual assistants. Diane Linke, Peter Sörries and Katrin Glinka interactively explored how chatbots are built. Since we needed to ensure that the workshop participants - girls* between the ages of 11 and 13 - could participate without opening their cameras or microphones, we facilitated a non-verbal workshop experience that would still allow for collective decision-making. 

With our contribution to the Long Night of the Sciences (LNDW) in July, we focused on communicating privacy risks associated with digital services. We conveyed the importance of privacy from the perspective of human-computer interaction and how to better protect it - without having to lead a life offline.

HCC Research Group at the LNDW 2022 | Image Credit: HCC

As part of the HCC Data Lab's goal to foster interdisciplinary knowledge exchange, Katrin Glinka conceived and led a hands-on session for the two-day workshop on »Automation, Control, and Incompetence« at the cluster of excellence »Matters of Activity« (MoA) on 2 December 2022. Within the MoA cluster, the HCC's research focuses on Robotic Assisted Surgery (RAS). As part of this research, Mario Cypko investigates how to enable assistance, transparency, and feedback in human-robot interaction in RAS. We have summarized our insights from the workshop on the design of responsible filters in a report.

Impression from the hands-on workshop | Image Credit: MoA

Responsible design, this time in the context of privacy, was also at the center of my contribution to this year's »Forum Privatheit«. In my talk, I discussed how we can enable responsible technology design through participation, primarily sharing insights from the HCC research project »WerteRadar«. In this project, researchers from computer science, media pedagogy, and medicine work together to reconceptualize the process of health data donation in a value-oriented way.

New Project, New Team Members & Team Events 

2022 also marked the start of our research project, »enkis«. The project aims to establish sustainable study programs for responsible artificial intelligence at the Freie Universität Berlin. »enkis« focuses on fostering 'critical reflection' among future professionals, scientists, and non-technical experts regarding existing approaches, limitations, and potentials in AI. With the start of »enkis«, we welcomed Lars Sipos as a new team member in January 2022. In March 2022, David Leimstädtner completed the project team of »WerteRadar«, while Florian Berger joined the team as a teaching assistant. Finally, Sylvia Deter, who now runs the HCC's office, completed our team in August. 

After spending two years mainly in our home offices, we enjoyed a two-day retreat at Schloss Blankensee in March 2022. We discussed our research strategy and upcoming research topics, exchanged feedback on current publications, and explored methods. We continued this exchange at our annual research day in October, where we discussed and refined research questions for upcoming publications, dissertations, and research projects.

The HCC Team at Schloss Blankensee | Photo Credit: HCC

We celebrated our productive year with a fun Xmas team event last week that started with black-light minigolf and concluded with a cosy visit to the Christmas market.

End of Year Team Event | Photo Credit: HCC

I look forward to continuing our work next year, hopefully with even more in-person events.

Finally, I wish you a peaceful, reflective, and relaxing holiday season in the company of your friends and families. I wish you a good start into the new year and look forward to our continued cooperation.

Best,
Claudia Müller-Birn

References

[1] Jesse Josua Benjamin, Christoph Kinkeldey, Claudia Müller-Birn, Tim Korjakow, and Eva-Maria Herbst. 2022. Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability. Proceedings of the ACM on Human-Computer Interaction 6, GROUP. New York: ACM.

[2] Daniel Franzen, Saskia Nuñez von Voigt, Peter Sörries, Florian Tschorsch, and Claudia Müller-Birn. 2022. Am I Private and If So, how Many? Communicating Privacy Guarantees of Differential Privacy with Risk Communication Formats. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS '22). New York: ACM.

[3] David Leimstädtner, Peter Sörries, and Claudia Müller-Birn. 2022. Unfolding Values through Systematic Guidance: Conducting a Value-Centered Participatory Workshop for a Patient-Oriented Data Donation. In: Mensch und Computer 2022 - Tagungsband. New York: ACM.

[4] Claudia Müller-Birn and Lars Sipos. 2022. Human-Centered Data Science – Etablierung einer kritisch-reflexiven Praxis bei der Entwicklung von datengetriebener Software. In: Demmler, D., Krupka, D., & Federrath, H. (Eds.), INFORMATIK 2022. Bonn: Gesellschaft für Informatik.

You can find a complete overview of our publications here.
