Guest Lecture: "Privacy Preserving Machine Learning: Threats and Solutions" by Franziska Boenisch (Feb 01, 2021)
News from Jan 25, 2021
In recent years, privacy threats against user data have become more diverse. Attacks are no longer directed solely at databases where sensitive data is stored but can also target data analysis methods or their results directly, enabling an adversary to learn potentially sensitive attributes of the data used in the analyses. This lecture presents common privacy threat spaces in data analysis methods, with a special focus on machine learning. Alongside a general view on privacy preservation and threat models, specific attacks against machine learning privacy are introduced (e.g. model inversion, membership inference). Additionally, a range of privacy-preservation methods for machine learning, such as differential privacy and homomorphic encryption, are presented. Finally, their adequate application is discussed with respect to the common threat spaces.
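To give a flavor of one of the defenses mentioned above, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy for numeric queries. The function name and parameters are illustrative, not from the lecture; noise is drawn as the difference of two exponentials, which follows a Laplace distribution.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query result with epsilon-differential privacy.

    Adds Laplace noise with scale = sensitivity / epsilon. The difference
    of two i.i.d. exponential samples with rate 1/scale is Laplace(0, scale).
    """
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# Example: a counting query ("how many patients have condition X?")
# has sensitivity 1, since one individual changes the count by at most 1.
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller values of `epsilon` mean stronger privacy but noisier answers; choosing this trade-off against a concrete threat model is exactly the kind of question the lecture addresses.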
Franziska completed a Master's degree in Computer Science at Freie Universität Berlin and Eindhoven University of Technology. For the past 1.5 years, she has been working at Fraunhofer AISEC as a Research Associate on topics related to Privacy Preserving Machine Learning, Data Protection, and Intellectual Property Protection for Neural Networks. Additionally, she is currently pursuing her PhD in Berlin.