Offene Master-Arbeiten

Evaluation of different trust models

Sandra Kostic, available immediately

Every user decides, consciously or subconsciously and depending on their own requirements, whether an application can be trusted. Trust in applications can develop in different ways, but it can also be lost again quickly. This applies especially to security-relevant applications, where users think harder about whether they are willing to share their own data with a service. The problem is that although everyone has an intuitive feeling for trust, it is difficult to capture in a concrete model. The question is therefore whether there are guidelines or models that designers and developers can follow to make user trust in an application more likely.

The purpose of this thesis is to survey and evaluate existing trust models. Based on this evaluation, recommendations for suitable trust models will be made, the problems of existing models will be identified, and a proposal for improvement will be developed.
The thesis can be written in German or English.

Requirements

Basic knowledge of usability, usable security, and UI/UX design. Ideally, a completed module in Human Centered Computing or Usable Security can be demonstrated.

DANE Identity Manager

Nils Wisiol & Wolfgang Studier, available immediately

In contrast to traditional, unauthenticated DNS, DNSSEC enables the dissemination of information associated with DNS names to clients and end users, leveraging the client's trust in the DNSSEC root of trust. This authenticated (unidirectional) communication channel can be used to provide information about identities that are associated with the DNS name in question, in particular to provide cryptographic key material which is associated with a given DNS name. There exist various standards to provide different kinds of key material to the public (DNS-Based Authentication of Named Entities - DANE), including information about deployed TLS certificates, SSH key fingerprints, OpenPGP and S/MIME keys, and IPSec keys. In all usage scenarios, the publication of such identities via the DNS requires careful management, as errors can lead to degraded security or unavailability of the respective Internet service.

In this thesis, an "identity manager" for DANE shall be developed. The DANE Identity Manager shall facilitate the correct and seamless publication of relevant identities via the DNS. This involves both technical and user experience considerations: the technical correctness of the published information includes details of the identity lifecycle, such as key rollovers and key revocations, while the user experience must maximize accessibility so as to minimize publication effort and, at the same time, minimize room for (human) error. Common deployment errors that cannot be fixed or mitigated automatically shall be communicated to the user so that manual action can be taken. The objective of the project is a case study based on a proof-of-concept implementation of the DANE Identity Manager. The case study shall demonstrate how the identity manager can be easily integrated into an existing IT environment, both from a technical and a user-centered perspective, and how it can guide the deployment of identities in the DNS during all steps of the identity lifecycle.
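To illustrate the kind of record the identity manager would publish, the following sketch derives the RDATA of a DANE TLSA record (certificate usage 3 "DANE-EE", selector 0 "full certificate", matching type 1 "SHA-256") from raw certificate bytes. The domain name and the sample certificate bytes are placeholders, not part of the thesis description.

```python
# Minimal sketch: building TLSA RDATA from DER-encoded certificate bytes.
# Usage 3 / selector 0 / matching type 1 follow RFC 6698; the sample
# input bytes below are purely illustrative.
import hashlib

def tlsa_record(cert_der: bytes, usage: int = 3, selector: int = 0,
                matching: int = 1) -> str:
    """Return TLSA RDATA: usage, selector, matching type, and the
    SHA-256 digest of the certificate (matching type 1)."""
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching} {digest}"

# Example with placeholder certificate bytes and a placeholder name:
record = tlsa_record(b"example-certificate-der-bytes")
print("_443._tcp.example.com. IN TLSA", record)
```

A real identity manager would recompute and republish this record on every key rollover, which is exactly the lifecycle step where manual management tends to go wrong.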

Requirements
  • knowledge in user-centered design
  • knowledge in frontend development (JavaScript etc.)
  • some knowledge in public key cryptography and digital signatures
  • some knowledge in DNS
  • some knowledge in Internet APIs

Choosing a Privacy Parameter for Differentially Private Machine Learning

Franziska Boenisch, Maija Poikela, available immediately

Differential Privacy (DP) is a mathematical framework that provides privacy guarantees for the individual data points of a dataset when analyses are performed on that data. In DP, the amount of privacy is usually expressed by a privacy parameter called epsilon. A small value of epsilon corresponds to high privacy, whereas a large value of epsilon offers only weak privacy guarantees. However, the exact meaning of a given value of epsilon always depends on the dataset and on the ML model trained with that data. This thesis therefore investigates in depth the dependence between epsilon, the ML model, and the privacy level of the data. Based on the findings, it develops models for determining an adequate value of epsilon given a set of requirements. The resulting models should also be capable of combining data points that require different levels of privacy, and hence different values of epsilon.
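The privacy/utility trade-off behind epsilon can be seen in the simplest DP building block, the Laplace mechanism, where noise is drawn with scale sensitivity/epsilon. This is a generic textbook sketch, not a method prescribed by the thesis:

```python
# Sketch of how epsilon controls noise in the Laplace mechanism:
# smaller epsilon -> larger noise scale -> stronger privacy, worse utility.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: np.random.Generator) -> float:
    """Release true_value perturbed with Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(0)
for eps in (0.1, 1.0, 10.0):
    noisy = [laplace_mechanism(100.0, 1.0, eps, rng) for _ in range(1000)]
    mae = np.mean([abs(v - 100.0) for v in noisy])
    print(f"epsilon={eps:5.1f}  mean abs error={mae:.2f}")
```

The mean absolute error shrinks roughly in proportion to 1/epsilon, which is why "an adequate value of epsilon" cannot be chosen without knowing the accuracy requirements of the model.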

Attacking Differentially Private Machine Learning Models

Franziska Boenisch, Maija Poikela, available immediately

Differential Privacy (DP) is a mathematical framework that provides privacy guarantees for the individual data points of a dataset when analyses are performed on that data. In DP, the amount of privacy is usually expressed by a privacy parameter called epsilon. If an inadequate value of epsilon is chosen, the resulting ML model may leak private information about the potentially sensitive training data. This thesis is intended to develop attacks against DP ML models. Such attacks may target the disclosure of attributes of specific data points or of the entire dataset, reverse-engineering of the model (model inversion), or the disclosure of (non-)membership of specific data points (membership inference). The developed attacks are to be implemented and used to evaluate existing DP ML models.
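As a toy illustration of the membership-inference idea mentioned above: models tend to assign lower loss to examples they were trained on, so an attacker can guess "member" whenever the loss falls below a threshold. The loss values and the threshold here are synthetic placeholders, not results from any real model:

```python
# Toy sketch of a loss-threshold membership inference attack.
# Training-set members typically incur lower loss than held-out points.
import numpy as np

def membership_inference(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership (True) for examples whose loss is below the threshold."""
    return losses < threshold

member_losses = np.array([0.05, 0.10, 0.08])     # synthetic: seen during training
nonmember_losses = np.array([0.90, 1.20, 0.75])  # synthetic: held out
preds = membership_inference(
    np.concatenate([member_losses, nonmember_losses]), threshold=0.5
)
print(preds)
```

An actual attack in the thesis would evaluate how well such predictions work against models trained with different epsilon values, since stronger DP guarantees should flatten the loss gap this attack exploits.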

Studying Security Measures in Machine Learning Applications on GitHub

Franziska Boenisch, Maija Poikela, available immediately

In recent years, the number of machine learning applications has grown exponentially. However, due to limited resources (time, personnel, know-how), not a lot of focus has been given to securing those applications against the growing number of threats (data poisoning, adversarial learning, model exploration, dependencies of many libraries). 

The focus of this thesis is to depict the current state of the art in securing machine learning. The first part will be theoretical, in the form of a literature survey. It should summarize the current literature on the threats to machine learning, the attacks that are currently known, and the possible defenses. The second part will be practical, in the form of a qualitative study of relevant machine learning code examples on GitHub. The first task is to identify a number of relevant repositories (repositories that provide machine learning applications and have a user base, rather than being, for example, a student's personal homework repository). The second task is to qualitatively analyze the security of the machine learning models by studying the measures taken for security, the dependencies of the libraries used, and so on. The third task is to quantitatively study the security measures for machine learning on GitHub, based on the large number of repositories examined. From that point, the scope of the thesis can be extended in several directions: perform user studies with developers to investigate why they do or do not apply certain measures, develop a new attack or implement an existing theoretical one (and apply it to a specific application on GitHub), or study the security and vulnerability aspects of a library found to appear commonly in machine learning projects.
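The repository-selection step could start from a simple popularity heuristic before any manual review. The field names and thresholds below are assumptions for illustration; they are not selection criteria given by the thesis:

```python
# Illustrative sketch of the first task: filtering candidate repositories
# down to those that plausibly have a real user base. Star/fork thresholds
# are arbitrary example values.
candidates = [
    {"name": "widely-used-ml-app", "stars": 540, "forks": 120},
    {"name": "student-homework-repo", "stars": 2, "forks": 0},
]

def has_user_base(repo: dict, min_stars: int = 50, min_forks: int = 5) -> bool:
    """Crude popularity heuristic: enough stars and forks to suggest real users."""
    return repo["stars"] >= min_stars and repo["forks"] >= min_forks

relevant = [r["name"] for r in candidates if has_user_base(r)]
print(relevant)
```

Any such heuristic only pre-filters; the qualitative analysis of security measures described above still has to be done by hand.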

If the outcome is of good quality, a scientific publication of the results is possible.

Please provide your CV and a short motivation letter.

Requirements

Ability to work independently and scientifically, a deep understanding of machine learning, experience working with GitHub, profound knowledge of at least one high-level programming language, and an interest in security and threat models. Fluency in English and good communication skills are a must.

Investigation and implementation of methods for the secure display of content on smartphone screens

Marian Margraf, available immediately

Users increasingly rely on apps that store customer and membership cards on mobile devices and display them on the screen, e.g., as a barcode. However, visual content can be manipulated, for example through overlays. If a smartphone is also to be usable offline as an identity document, the content shown on the screen must be protected against manipulation and forgery attempts. This thesis shall investigate the current state of science and technology, the patent situation, and current as well as future technical means of securing display content (e.g., TEE).

Requirements
  • familiarization with hardware-based security anchors on mobile devices (TEE, SE)
  • good knowledge of mobile application development
  • knowledge of cryptographic methods

Implementation and evaluation of the hardware-based protection measures for personal data on the iPhone presented in the study "Online-Ausweisfunktion auf dem Smartphone"

Marian Margraf, available immediately

The study "Online-Ausweisfunktion auf dem Smartphone" presented a number of hardware-based protection measures for personal data on the iPhone, including a secure implementation of two-factor authentication. In this thesis, this method shall be implemented on the iPhone and the results evaluated. The entire lifecycle of the key material and of the associated identity shall be considered: registration, use, revocation, reinstatement, and deletion.
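The key-material lifecycle named in the study (registration, use, revocation, reinstatement, deletion) can be modeled as a small state machine. The state names and allowed transitions below are an illustrative assumption, not taken from the study itself:

```python
# Hedged sketch of a key-material lifecycle state machine.
# States and transitions are assumptions for illustration only.
from enum import Enum, auto

class KeyState(Enum):
    REGISTERED = auto()  # key material enrolled, not yet in use
    ACTIVE = auto()      # key material in active use
    REVOKED = auto()     # temporarily blocked
    DELETED = auto()     # terminal state

# assumed legal transitions between lifecycle states
TRANSITIONS = {
    KeyState.REGISTERED: {KeyState.ACTIVE, KeyState.DELETED},
    KeyState.ACTIVE: {KeyState.REVOKED, KeyState.DELETED},
    KeyState.REVOKED: {KeyState.ACTIVE, KeyState.DELETED},  # reinstatement
    KeyState.DELETED: set(),
}

def transition(state: KeyState, new_state: KeyState) -> KeyState:
    """Move to new_state if the lifecycle allows it, else raise ValueError."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
    return new_state
```

Making the legal transitions explicit is one way the thesis implementation could ensure, for instance, that deleted key material can never be reinstated.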

Requirements
  • familiarization with the e-ID topic area
  • familiarization with the security guidelines of the BSI
  • familiarization with the technical guidelines of the eIDAS Regulation
  • good knowledge of mobile application development
  • knowledge of cryptographic methods

Implementation and evaluation of the hardware-based protection measures for personal data on an Android smartphone presented in the study "Online-Ausweisfunktion auf dem Smartphone"

Marian Margraf, available immediately

The study "Online-Ausweisfunktion auf dem Smartphone" presented a number of hardware-based protection measures for personal data on Android smartphones, including a secure implementation of two-factor authentication. In this thesis, this method shall be implemented on an Android smartphone and the results evaluated. The entire lifecycle of the key material and of the associated identity shall be considered: registration, use, revocation, reinstatement, and deletion.

Requirements
  • familiarization with the e-ID topic area
  • familiarization with the security guidelines of the BSI
  • familiarization with the technical guidelines of the eIDAS Regulation
  • good knowledge of mobile application development
  • knowledge of cryptographic methods