FairNN - Conjoint Learning of Fair Representations for Fair Decisions

Eirini Ntoutsi, Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, Michael Ying Yang, Bodo Rosenhahn – 2020

In this paper, we propose "FairNN", a neural network that performs joint feature representation learning and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function which (a) learns a fair representation by suppressing protected attributes, (b) maintains the information content by minimizing a reconstruction loss, and (c) solves a classification task in a fair manner by minimizing the classification error while respecting an equalized-odds-based fairness regularizer. Our experiments on a variety of datasets demonstrate that such a joint approach is superior to the separate treatment of unfairness in representation learning or supervised learning. Additionally, our regularizers can be adaptively weighted to balance the different components of the loss function, thus allowing for a very general framework for conjoint fair representation learning and decision making.
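The multi-objective loss described above can be sketched as a weighted sum of its three components. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the choice of mean-squared error for reconstruction, binary cross-entropy for classification, and the weights `alpha`, `beta`, `gamma` are all assumptions for exposition. The equalized-odds term is approximated here by the absolute gaps in true-positive and false-positive rates between the protected group and the rest.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Sum of the absolute TPR and FPR gaps between the protected
    group (group == 1) and the remaining samples (group == 0).
    A value of 0 means the predictions satisfy equalized odds."""
    gaps = []
    for y in (1, 0):  # y = 1 gives the TPR gap, y = 0 the FPR gap
        mask = y_true == y
        rate_prot = y_pred[mask & (group == 1)].mean()
        rate_rest = y_pred[mask & (group == 0)].mean()
        gaps.append(abs(rate_prot - rate_rest))
    return float(sum(gaps))

def fairnn_loss(x, x_recon, y_true, y_prob, group,
                alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical combined loss: (b) reconstruction + (c) classification
    + equalized-odds fairness regularizer, with adaptive weights."""
    # (b) reconstruction loss preserves the information content
    recon = np.mean((x - x_recon) ** 2)
    # (c) classification error (binary cross-entropy)
    eps = 1e-12
    clf = -np.mean(y_true * np.log(y_prob + eps)
                   + (1 - y_true) * np.log(1 - y_prob + eps))
    # equalized-odds regularizer on thresholded predictions
    fair = equalized_odds_gap(y_true, (y_prob >= 0.5).astype(float), group)
    return alpha * recon + beta * clf + gamma * fair
```

In the actual model the three weights would be tuned (or adapted during training) to balance reconstruction quality, accuracy, and fairness; component (a), suppressing protected attributes in the learned representation, operates inside the auto-encoder itself and is not shown in this sketch.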

Title
FairNN - Conjoint Learning of Fair Representations for Fair Decisions
Authors
Eirini Ntoutsi, Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, Michael Ying Yang, Bodo Rosenhahn
Publisher
Springer Link
Keywords
Fairness, Bias, Neural Networks, Auto-encoders
Date
2020-10-15
Published in
International Conference on Discovery Science (DS 2020). Part of the Lecture Notes in Computer Science book series (LNCS, volume 12323).
Rights
Open access