BIAS - Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions

Institution:

Artificial Intelligence and Machine Learning Group
Institute of Computer Science
Department of Mathematics & Computer Science

Funding:

The project is funded by the Volkswagen Foundation under the call "Artificial Intelligence and the Society of the Future".

Project duration:
01.12.2018 – 30.11.2022

AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring decisions, university admissions, loan granting, and crime prediction. However, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby negatively affect social cohesion and democratic institutions.

Scholarly reflection on these issues has begun but is still in its early stages, and we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI, and of which technical options to combat bias and discrimination are both realistically possible and normatively justified. The research group “BIAS” examines these issues in an integrated, interdisciplinary project bringing together experts from philosophy, law, and computer science. Our shared research question is: “How can standards of unbiased attitudes and non-discriminatory practices be met in big data analysis and algorithm-based decision-making?”

In approaching this question, the main goal of our computer science research team is to develop concrete technical (algorithmic and statistical) solutions, such as debiasing strategies and discrimination detection procedures. Traditional fairness-aware machine learning mainly focuses on a single protected attribute. In reality, however, bias cannot always be attributed to a single attribute; discrimination may be rooted in multiple protected attributes at once (referred to as multi-fairness hereafter). This makes the problem considerably harder, since protected attributes often conflict: tuning for one attribute may intensify discrimination with respect to another, so the convergence of such methods is itself in question. Our current research goal is to develop ML strategies for tackling multi-fairness with theoretical understanding and provable convergence guarantees. The sketch below illustrates the multi-attribute setting.
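To make the multi-fairness setting concrete, the following minimal sketch measures statistical parity separately for two protected attributes on a toy dataset. The attribute names ("gender", "age_group"), the decision column ("hired"), and the data are hypothetical illustrations, not from the project; the metric itself (statistical parity difference) is a standard discrimination-detection measure.

```python
# A minimal sketch of multi-attribute discrimination detection, assuming a
# tabular dataset with binary protected attributes and a binary decision.
# All column names and data below are hypothetical.
import pandas as pd

def statistical_parity_difference(df: pd.DataFrame,
                                  protected: str,
                                  decision: str = "hired") -> float:
    """P(positive decision | group 0) - P(positive decision | group 1).

    A value of 0 indicates parity; positive values indicate that the
    protected group (encoded as 1) receives favorable decisions less often.
    """
    rate_unprotected = df.loc[df[protected] == 0, decision].mean()
    rate_protected = df.loc[df[protected] == 1, decision].mean()
    return rate_unprotected - rate_protected

# Toy data: decisions may look only mildly unfair w.r.t. one attribute
# while being strongly biased w.r.t. another.
data = pd.DataFrame({
    "gender":    [0, 0, 0, 0, 1, 1, 1, 1],
    "age_group": [0, 1, 0, 1, 0, 1, 0, 1],
    "hired":     [1, 0, 1, 1, 1, 0, 1, 0],
})

for attr in ["gender", "age_group"]:
    spd = statistical_parity_difference(data, attr)
    print(f"statistical parity difference w.r.t. {attr}: {spd:+.2f}")
```

In this toy example the disparity with respect to age_group is far larger than the one with respect to gender, so a debiasing step tuned only to gender could leave the age disparity intact or even worsen it; this tension between attributes is exactly what the group's multi-fairness research targets.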