Secure Machine Learning

Image source: MacKenzie, Mike: "Machine Learning & Artificial Intelligence". August 2018. Creative Commons 2.0. Image via www.vpnsrus.com.

Are you a developer dealing with Machine Learning (ML) in your work-related or private projects? If so, we kindly ask you to participate in our short survey, which aims to assess the awareness of implementing secure ML systems. With 15 minutes of your time, you can support a large research community in advancing the field of security for ML.

The survey is conducted by the Fraunhofer Institute for Applied and Integrated Security (AISEC) in cooperation with the Freie Universität Berlin (FU).

Go to survey...

For further questions, please contact: securemachinelearning@aisec.fraunhofer.de


Additional information:

In recent years, machine learning and artificial intelligence have become major parts of most companies' business models. They are employed for many different purposes, ranging from recommender systems that facilitate tasks for users to internal process-monitoring systems that support companies' workflows. With new machine learning technologies and tools appearing at a rapid pace, their integration usually needs to happen quickly so that a company stays competitive and meets the state of the art. Under such pressure, the security of these systems is usually not a main priority in the development process.

However, as the number of machine learning applications in use grows, so does the number of attacks on such systems. These attacks range from simple model exploration, e.g. when a company provides any kind of API to its model, to sophisticated tampering with the model itself, including training-data poisoning and the introduction of hidden behaviors that can lead to undesired prediction outputs. The consequences for companies can be severe: reputational damage and economic loss due to wrong prediction outputs are just two examples from the broad range of potential negative consequences.
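To make the poisoning threat concrete, the following minimal sketch (in Python with scikit-learn; the dataset, model, and function names are purely illustrative and not taken from the survey) shows how flipping even a modest fraction of training labels degrades a model's accuracy on clean test data:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy binary classification task standing in for a production model.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_after_poisoning(flip_fraction):
        # Train on a copy of the data in which a fraction of labels is
        # flipped, simulating an attacker who corrupts part of the
        # training set.
        rng = np.random.default_rng(0)
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip labels 0 <-> 1
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return model.score(X_test, y_test)

    for frac in (0.0, 0.1, 0.3):
        print(f"flipped {frac:.0%} of labels -> "
              f"test accuracy {accuracy_after_poisoning(frac):.3f}")

Real poisoning attacks are far subtler than random label flipping, but the sketch illustrates the underlying point: a model is only as trustworthy as the data it was trained on.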

Another emerging issue is that, with the introduction of the GDPR, companies are required to guarantee the protection of end-user data. Since machine learning models are often built on user data, those models contain information about the users. In many scenarios, it is not fully understood how individual data records influence the predictions of a model, or how much of this data the model gives away when queried cleverly. Hence, to protect end-user data, protecting the machine learning models themselves is indispensable.
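To illustrate why a queryable model can leak information about its training data, the following sketch (again Python with scikit-learn, with purely illustrative names) shows the confidence gap that so-called membership inference attacks exploit: an overfitted model tends to answer more confidently for records it saw during training than for unseen ones:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=20, random_state=1)
    X_train, X_unseen, y_train, y_unseen = train_test_split(
        X, y, test_size=0.5, random_state=1)

    # An unconstrained random forest memorizes its training set.
    model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

    def confidence(records):
        # Confidence the model assigns to its own top prediction,
        # per record -- exactly what a prediction API often exposes.
        return model.predict_proba(records).max(axis=1)

    print("mean confidence on training records:",
          confidence(X_train).mean().round(3))
    print("mean confidence on unseen records:  ",
          confidence(X_unseen).mean().round(3))
    # A clear gap between the two is what a simple confidence-threshold
    # membership inference attack uses to guess whether a given record
    # was part of the training data.

An attacker who can query the model thus learns something about individual training records without ever seeing the data itself, which is precisely the kind of leakage the GDPR obliges companies to prevent.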

This survey aims to analyze the state of the art in implementing security measures to protect AI systems in companies. We want to investigate how aware companies are of the different types of attacks (from inside and outside) that their models are exposed to. We also want to learn which kinds of protection are in use.

We also want to assess how many problems the introduction of the GDPR has posed for companies, how far its implementation has already progressed, and at what points companies would need assistance or more clarity.
