
Learning domain invariant representations by joint Wasserstein distance minimization

Léo Andéol, Yusei Kawakami, Yuichiro Wada, Takafumi Kanamori, Klaus-Robert Müller, Grégoire Montavon – 2023

Domain shifts in the training data are common in practical applications of machine learning; they occur, for instance, when the data comes from different sources. Ideally, an ML model should work well independently of these shifts, for example, by learning a domain-invariant representation. However, common ML losses do not give strong guarantees on how consistently the ML model performs across domains, in particular, whether the model performs well on one domain at the expense of another. In this paper, we build new theoretical foundations for this problem by contributing a set of mathematical relations between classical losses for supervised ML and the Wasserstein distance in joint space (i.e. representation and output space). We show that classification or regression losses, when combined with a GAN-type discriminator between domains, form an upper bound on the true Wasserstein distance between domains. This implies a more invariant representation and also more stable prediction performance across domains. The theoretical results are corroborated empirically on several image datasets. Our proposed approach systematically produces the highest minimum classification accuracy across domains, and the most invariant representation.
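The abstract's core idea, a supervised task loss combined with a domain-alignment term that bounds the Wasserstein distance between domains, can be illustrated in a minimal way. The sketch below is not the paper's algorithm: it uses the closed-form empirical 1-Wasserstein distance between 1-D representation samples (sorted matching) instead of a learned GAN-type discriminator, and the function names and the `lam` trade-off parameter are hypothetical.

```python
import numpy as np

def wasserstein1_1d(a, b):
    # Empirical 1-Wasserstein distance between two equal-sized 1-D samples.
    # For scalar samples the optimal coupling matches sorted values, so the
    # distance is the mean absolute difference after sorting.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def joint_objective(repr_a, repr_b, task_loss, lam=1.0):
    # Combined objective in the spirit of the paper: a supervised task loss
    # plus a weighted domain-alignment term between the two domains'
    # representations (here a 1-D Wasserstein distance, lam is hypothetical).
    return task_loss + lam * wasserstein1_1d(repr_a, repr_b)

rng = np.random.default_rng(0)
z_a = rng.normal(0.0, 1.0, size=1000)  # representations from domain A
z_b = rng.normal(0.5, 1.0, size=1000)  # shifted representations from domain B
obj = joint_objective(z_a, z_b, task_loss=0.3, lam=1.0)
```

Minimizing such an objective drives the representations of the two domains toward each other while preserving task performance; the paper's contribution is that, with a discriminator in the joint representation-output space, this sum upper-bounds the true joint Wasserstein distance.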

Title
Learning domain invariant representations by joint Wasserstein distance minimization
Authors
Léo Andéol, Yusei Kawakami, Yuichiro Wada, Takafumi Kanamori, Klaus-Robert Müller, Grégoire Montavon
Publisher
Elsevier; to appear in "Neural Networks"
Keywords
Domain invariance; Subpopulation shift; Joint distribution matching; Wasserstein distance; Neural networks; Supervised learning
Date
2023-07
Identifier
ISSN: 0893-6080, 1879-2782; DOI: 10.1016/j.neunet.2023.07.028
Language
English
Type
Text
BibTeX Code
@article{andeol2023learning,
  author  = {And{\'e}ol, L{\'e}o and Kawakami, Yusei and Wada, Yuichiro and Kanamori, Takafumi and M{\"u}ller, Klaus-Robert and Montavon, Gr{\'e}goire},
  title   = {Learning domain invariant representations by joint {Wasserstein} distance minimization},
  journal = {Neural Networks},
  year    = {2023},
  issn    = {0893-6080},
  doi     = {10.1016/j.neunet.2023.07.028}
}