
Guest-Talk: "Discrimination Mitigation in Ranked Search Results" by Meike Zehlike, April 11th

News from Mar 21, 2019

On April 11th at 2 pm, Meike Zehlike will give the talk "Discrimination Mitigation in Ranked Search Results" at the HCC Lab.

Meike Zehlike is a PhD researcher at Humboldt Universität zu Berlin, Germany. In 2014, she received her diploma degree for her work on the recognition of perfusion disorders and vascular pathologies in the cerebral cortex. From 2014 until 2016 she worked as a software developer and scrum master in Berlin. She started her PhD in April 2016 and spent February to June 2018 at UPF Barcelona as a visiting researcher. Meike’s research interests center on artificial intelligence and its social impact, automatic discrimination discovery and algorithmic fairness, as well as the use of artificial intelligence in medical applications. Currently, Meike is part of the DFG-funded graduate school SOAMED and a grantee of the DTL research grant 2017.

Abstract

Personalization and algorithmic decision-making are essential tools in our daily lives. They decide about the media we consume, the stories we read, the people we meet, the places we visit, whether we get a job, or whether our loan request is approved. It is therefore of societal and ethical importance to ask whether these algorithms may produce results that demote, marginalize, or exclude individuals belonging to an unprivileged group or a minority. Early work in the field of “Fairness, Accountability and Transparency in socio-technical Systems” shows that various forms of bias and discrimination can arise in these systems, leading to systematic disadvantages for certain individuals and societal groups, as well as to a distortion of competition. However, as Cathy O'Neil illustrates with various examples, biases in computer systems can be difficult to identify due to the system’s complexity, yet “[…] a biased system has the potential of a widespread impact. If the system becomes a standard in the field the bias becomes pervasive […]”, and unlike in the case of a biased individual, a biased system offers the victim no possibility of negotiation.

In this talk I will present two methods to mitigate bias in ranked outputs: the first re-ranks a given search result subject to fairness criteria based on a statistical significance test, while keeping the ranking utility as high as possible; the second modifies the loss function of a learning-to-rank method so that it considers not only the loss with respect to the training data, but also the amount of bias in a predicted result, measured in terms of disparate exposure across different social groups.
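For illustration only, here is a minimal Python sketch of the kind of significance test the first method's fairness criteria build on: for every top-k prefix of a ranking, check whether the number of candidates from the protected group could plausibly have been produced by a fair process with protected-group proportion p. The function names, the exact one-sided binomial test, and the omission of any multiple-testing adjustment are assumptions made for this sketch; it is not the speaker's implementation.

```python
from scipy.stats import binom

def min_protected(k, p, alpha):
    """Smallest number of protected items a top-k prefix must contain so that
    a one-sided binomial test with success probability p is not rejected at
    significance level alpha (illustrative assumption, not the talk's exact test)."""
    for tau in range(k + 1):
        # P[X <= tau] for X ~ Binomial(k, p); once this exceeds alpha,
        # tau protected items in the prefix are no longer implausibly few.
        if binom.cdf(tau, k, p) > alpha:
            return tau
    return k

def ranking_is_fair(is_protected, p, alpha):
    """Check every prefix of a ranking, given as a list of booleans
    (True = candidate belongs to the protected group)."""
    seen = 0
    for k, flag in enumerate(is_protected, start=1):
        seen += flag
        if seen < min_protected(k, p, alpha):
            return False
    return True

# Example: a top-6 ranking with a single protected candidate at position 5
print(ranking_is_fair([False, False, False, False, True, False], p=0.5, alpha=0.1))
```

With these example numbers the check already fails at prefix length 4: a top-4 containing no protected candidates has probability 0.0625 under p = 0.5, which falls below α = 0.1, so the re-ranking step would have to move a protected candidate upward before that position.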

Date: April 11th, 2019
Location: Königin-Luise-Straße 24-26, 14195 Berlin, room 120
