
Past Theses

Theses that have already been completed in collaboration with the research group. They offer insight into the group's research and serve as orientation for future theses.

Unlocking Digital Trust: A Study of User Trust and Usability in a Digital Identity Wallet Concept

Doruntina Murtezaj, 2023

The digital era has marked a transition in the way individuals manage their identities, with digital identity wallet apps offering a promising solution. These apps have the potential to enhance and secure personal identification and offer a convenient alternative to traditional physical documents. However, the adoption of such apps depends on user trust, a complex construct influenced by individual differences and social norms. This thesis therefore investigates the trust dynamics of users of a digital identity wallet, using a multi-phase approach of user studies. The wallet app under examination is a conceptual prototype, not a fully developed real-world application. The first phase of the user studies gathered feedback from users about their behaviour and perceptions of the wallet app and formed the basis for the following phases. In the second phase, users were presented with a version of the wallet app that had been improved in terms of security measures, support, and information aspects. The third phase focused solely on the wallet operator. The user studies comprise interviews and validated surveys: the Human-Computer Trust Measure and the System Usability Scale are used to quantify user trust and system usability. The study found that users attached great importance to factors such as security, simple design, and the reputation of the wallet app operator. Participants expressed higher levels of trust when they knew the wallet app operator was a government entity. Improvements in usability had a positive effect on user trust, while adding more features to the app led to a slight decrease in the usability score. Lastly, practical recommendations to increase user trust include clear instructions, improved security measures, and transparent data-handling policies.
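The System Usability Scale mentioned above is scored with a fixed formula: ten Likert items (1–5) of alternating polarity, mapped to a 0–100 score. A minimal sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the
    ten item responses, each on a 1-5 Likert scale."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute r - 1,
        # even-numbered (negatively worded) items contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fully neutral response (all 3s) yields the midpoint score of 50.
print(sus_score([3] * 10))  # -> 50.0
```

Scores from multiple participants are then averaged; the thesis compares such averages across the study phases.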

Masterthesis as PDF (English)

Exploration of Checkpoints in the Context of Membership Inference Attacks

Marisa Nest, 2023

The field of machine learning (ML) has been growing over the last years. An increasing number of systems based on ML models, trained on a wide variety of data sets, are publicly accessible. Since more and more models are also based on data that contain private information, these models and the associated data must be protected in terms of privacy. A first step in protecting a model's privacy is to evaluate its level of protection against attacks. One such heuristic privacy evaluation method that has become widespread in recent years is the membership inference attack (MIA). In the past, the privacy assessment under MIA did not consider a temporal component of the model, but only the final model. This work tests whether adding a temporal dimension, in the form of so-called checkpoints, to MIA can provide a better and more accurate picture of a model's state of privacy. To this end, two exploratory experiments are conducted in which multiple time-series analyses are performed. In addition, a new MIA using checkpoints is presented. In the end, it can be shown that in certain circumstances, especially when considering incorrectly classified data, checkpoints can help to provide a better evaluation of privacy, and that the performance of the newly introduced MIA can compete with that of other recent MIAs.
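As a toy illustration of why checkpoints can carry a membership signal (synthetic numbers, not the thesis's attack): losses on training members typically fall over training while losses on non-members stay roughly flat, so the loss drop across checkpoints separates the two groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses recorded at five checkpoints:
# members' losses shrink as training fits them, non-members' do not.
members = rng.uniform(0.5, 1.0, (100, 1)) * np.linspace(1.0, 0.1, 5)
nonmembers = rng.uniform(0.5, 1.0, (100, 1)) * np.linspace(1.0, 0.9, 5)

def trajectory_score(losses):
    """Score membership by the loss drop across checkpoints: a final
    loss far below the initial loss suggests a training-set member."""
    return losses[:, 0] - losses[:, -1]

scores = np.concatenate([trajectory_score(members),
                         trajectory_score(nonmembers)])
labels = np.concatenate([np.ones(100), np.zeros(100)])
# Members receive systematically higher scores than non-members.
print(scores[labels == 1].mean() > scores[labels == 0].mean())  # -> True
```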

Masterthesis as PDF (English)

Taxonomy of Privacy Attacks in Machine Learning

Yarmina Anna Meszaros

Machine learning has experienced tremendous growth both in academic research and in real-world applications. At the same time, large amounts of data have fuelled concerns about security and privacy in machine learning. The result is a corresponding growth of research on privacy-preserving machine learning. To ensure unambiguous communication in academic research, we need a clear understanding of what separates one threat from another and where they have similarities. Most existing taxonomies in this field are either too specialised or not specialised enough for our purpose. Therefore, this work proposes a taxonomy that takes existing ones into account and aims to offer a possible categorisation for future attacks. To do so, this work focuses on five privacy attacks, which are formally introduced and explained further. Several existing papers on this topic were studied and compared to understand which are the most used and established terms and taxonomies in the current literature. In addition, the roles that play a part in private machine learning are presented and some possible criteria for classification into reasonable categories of privacy attacks are suggested. At the time of writing, this is one of the first works to consider membership inference, model inversion, attribute inference, property inference, and reconstruction attacks at the same time.

Bachelorthesis as PDF (English)

Group-based Membership Inference Attack against Machine Learning Models

Ina Fendel, 2022

Over the last few years, Machine Learning (ML) usage has spread to a wide range of applications, including areas that deal with highly sensitive data. Privacy preservation of data has thus become increasingly important. As ML models have been shown to leak their private training data, an essential part of protecting private data is to avoid privacy leakage of an ML model's private training set. One important attack risking the privacy of a training set is the membership inference attack (MIA). The MIA identifies an element's membership: it analyzes whether a given data point is part of a given model's training set. This thesis aims at developing new MIAs, in particular what this thesis refers to as group-based MIAs. Group-based MIAs determine the membership of an individual data point by exploiting the benefits of grouping elements during the process. For the novel MIAs, methods from the so-called dataset inference attack (DIA) are used. The DIA is a method for ownership resolution which determines whether a model was trained with another model's training set. More importantly for this thesis, the DIA uses a novel method to differentiate between training-set elements and other data points, which is applied to this thesis's new MIAs. This thesis develops four novel MIAs based on the DIA, of which three are group-based and one does not utilize groups. All approaches were tested on two models trained with CIFAR10 and two further models with CIFAR100 as their training set. The attacks were evaluated with regard to their true positive rate (TPR) at a 0.1 % false positive rate (FPR) and the ROC curve on a log scale, metrics found to be suitable for MIAs in previous studies. The results showed that one group-based approach and the non-group-based approach work in all settings, while the other two group-based MIAs only work with one of the two tested execution strategies. It was found that the other setup used groups that were too ambiguous for the attack to work.
The experiments further showed that the working group-based MIAs outperform the non-group-based approach. The most successful approach overall achieved at least 17.9 % TPR at 0.1 % FPR and at best 44.8 % TPR at 0.1 % FPR in the conducted experiments.
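The headline metric, TPR at a fixed low FPR, can be computed from attack scores as follows (a minimal sketch, not the thesis's evaluation code):

```python
import numpy as np

def tpr_at_fpr(scores, labels, target_fpr=0.001):
    """True positive rate at (at most) the given false positive rate.
    `scores`: higher means "more likely a member"; `labels`: 1 for
    members of the training set, 0 for non-members."""
    member = scores[labels == 1]
    nonmember = scores[labels == 0]
    # Choose the threshold so that at most target_fpr of the
    # non-members score above it.
    thresh = np.quantile(nonmember, 1.0 - target_fpr)
    return float(np.mean(member > thresh))

labels = np.concatenate([np.ones(1000), np.zeros(1000)])
scores = np.concatenate([np.full(1000, 2.0), np.full(1000, 1.0)])
print(tpr_at_fpr(scores, labels))  # -> 1.0 (perfectly separated scores)
```

Reporting the TPR in this low-FPR regime emphasizes attacks that are confidently correct on a few records rather than weakly correct on average.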

Bachelorthesis as PDF (English)

Privacy preserving synthetic data generation

William Gu, 2021

Privacy-related questions regarding the increasing use of machine learning algorithms in real-world applications such as medical analysis have repeatedly come up in recent years, yet the answer to the measurement and applicability of privacy guarantees in these applications remains imprecise. With the amassing of databases of sensitive data for training purposes and the potential risk to user privacy posed by such repeated large-scale collections, generative algorithms such as GANs provide a potential way to minimize data harvesting by providing models that produce unlimited synthetic data. Differential privacy, as a way to illustrate and quantify the ability to share public information about data while simultaneously withholding private information of individuals within such a collection, has been gaining traction as a measurement of privacy when training machine learning models.
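Formally, a randomized mechanism M satisfies ε-differential privacy if for all datasets D and D′ differing in a single record and all sets S of possible outputs

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S],

where a smaller privacy budget ε means a stronger guarantee: any single individual's record can only change the probability of any outcome by a bounded factor.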

This work investigates the notion of differential privacy in its potential usability in different GANs trained on tabular medical data of patients.

As such, non-differentially private and differentially private GAN models were trained on two datasets (Pima Indian Diabetes and the UCI heart disease dataset collection), and the resulting synthetic data was then used as the basis for an ensemble of classifiers whose classification performance was evaluated.

Bachelorthesis as PDF (English)

Attacking Differentially Private CNNs Trained with PATE

Jannis Ihrig, 2021

The increasing prevalence of machine learning (ML) models processing privacy-sensitive data makes it necessary to develop and employ privacy-preserving techniques for these algorithms. While differential privacy (DP) has been established as a robust and widely accepted framework for giving privacy guarantees for algorithms processing sensitive data, the relation between the given guarantees and the resulting concrete privacy is not fully understood.

This thesis evaluates the privacy of models trained with Private Aggregation of Teacher Ensembles (PATE), a recently proposed approach to implement DP for ML algorithms. To this end, various ML models for different datasets and settings of PATE hyperparameters are created and subsequently targeted by attacks that aim at inferring sensitive information about the datasets used to create them.

The privacy granted by PATE to individuals whose sensitive data is contained in datasets used to train models with the framework is tested by attacking these models with selected known methods for membership inference. The conducted experiments find that the framework overall reduces their accuracy and therefore increases privacy on an individual level. With regard to distribution-level privacy, the applicability of model inversion and property inference attacks is discussed. While a sketch for the modification of existing property inference attacks is given that allows their application to models created with the help of PATE, it is argued that the framework specifies a setting for training and deployment that makes a meaningful application of model inversion attacks questionable. Finally, a new distance-based property inference attack is presented and successfully applied to differentially private models trained with PATE, showing that the framework does not prevent inference of sensitive information on the distribution level.

Masterthesis as PDF (English)

Evaluating and Adapting Existing Neural Network Watermarking Approaches to Online Learning Scenarios

Di Wang, 2021

To protect machine learning (ML) algorithms that are trained using expensive computational power and time, watermarks are applied to neural networks (NNs) to prevent theft of intellectual property. To further counteract attempts at removing or forging watermarks, the need to make watermarks robust arises. With the growing amount of training data and the continuous nature of data production over time, online learning algorithms also become more relevant. This work investigates how to keep watermarks in NNs verifiable under online learning conditions and proposes three strategies to keep watermark accuracies high during online learning scenarios: re-feeding by filtering wrongly predicted watermarks, re-feeding by filtering watermarks predicted with low confidence, and re-feeding at constant intervals of steps. Two watermark embedding approaches from previous work, namely the ingrainer approach from Yang and the exponential weighting approach from Namba, are used to watermark models for the experiments, leading to the conclusion that watermark re-feeding strategies need to be adapted to the particularities of specific embedding approaches to keep watermark accuracies high. It is shown that under online learning conditions, watermark accuracies of protected NNs can be maintained, however to differing degrees and through different strategies. For the ingrainer watermark embedding approach proposed by Yang [30], re-feeding by filtering watermarks predicted with low confidence works best. For the exponential weighting approach proposed by Namba, the best re-feeding strategy turns out to be using 60% watermark and 40% main training data at every 10th step of online learning.
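The low-confidence re-feeding strategy can be sketched as follows; all names and the threshold are hypothetical, not the thesis's implementation:

```python
import numpy as np

def refeed_batch(batch_x, batch_y, wm_x, wm_y, confidences, thresh=0.9):
    """Append watermark samples whose current prediction confidence
    fell below `thresh` to the next online-learning batch, so the
    model is reminded of watermarks it is starting to forget."""
    low = confidences < thresh
    return (np.concatenate([batch_x, wm_x[low]]),
            np.concatenate([batch_y, wm_y[low]]))

# Two of three watermark samples are predicted with low confidence
# and are therefore re-fed together with the regular batch of four.
bx, by = refeed_batch(np.zeros((4, 2)), np.zeros(4),
                      np.ones((3, 2)), np.ones(3),
                      np.array([0.95, 0.5, 0.7]))
print(bx.shape)  # -> (6, 2)
```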

Bachelorthesis as PDF (English)

Evaluating Privacy of Synthetic Data through Metrics

Daniel Sosnovchyk, 2021

Publishing anonymized datasets has been shown to carry inherent privacy risks. One option to avoid the privacy risks of publishing data is to disclose not real but generated data. Generation algorithms learn, as a simplification, the joint probability distribution of the real data and create a new dataset, called a synthetic dataset. Though it is believed that disclosing synthetic data carries little privacy risk, it is possible to overfit the real data, meaning that the generated synthetic data is too close to the real data. If the synthetic data is too similar, it is possible to map synthetic data records to real ones, causing a privacy risk for individuals in the dataset. For this reason, the privacy of the generated data needs to be assessed. In this work, synthetic data is generated and analyzed using various metrics. One approach is to measure the similarity between the real and the synthetic dataset: if the datasets are too close, privacy is assumed to be low. Several similarity approaches used in the analysis of such data are implemented in this work. In addition, a new metric is created that estimates the probability of an adversary linking synthetic data to real data. Adding this metric gives a more rounded approach to estimating the degree of privacy of the generated data.
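One similarity metric of the kind described above is the distance from each synthetic record to its nearest real record; a minimal sketch (illustrative, not one of the thesis's implemented metrics):

```python
import numpy as np

def nearest_real_distance(real, synthetic):
    """For each synthetic record, the Euclidean distance to its
    nearest real record; very small distances hint at records that
    were effectively copied from the real data (an overfitted
    generator) and could be linked back to individuals."""
    # Pairwise distance matrix of shape (n_synthetic, n_real).
    d = np.linalg.norm(synthetic[:, None, :] - real[None, :, :], axis=2)
    return d.min(axis=1)

real = np.array([[0.0, 0.0], [1.0, 1.0]])
near_copy = np.array([[0.01, 0.0]])   # nearly identical to a real record
fresh = np.array([[0.5, 0.5]])
print(nearest_real_distance(real, near_copy)[0]
      < nearest_real_distance(real, fresh)[0])  # -> True
```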

Bachelorthesis as PDF (English)

Practical Evaluation of Neural Network Watermarking Approaches

Tim von Känel, 2021

While the influence of deep learning models on real-world applications increases, one concern is how to verify a model's ownership after its distribution. An approach widely used for imagery and video is the watermark. In recent years, multiple approaches have been proposed to embed watermarks into neural networks, yet there has been little work on their evaluation. This work implemented five of the most promising algorithms, outlined metrics for their comparison, and evaluated them against each other. Contrary to the claims of their authors, major flaws are found in all but one of the algorithms, and it is argued that the DeepSigns white-box approach overall showed the best performance.

Bachelorthesis as PDF (English)

Personalizing Private Aggregation of Teacher Ensembles

Christopher Mühl, 2021

Due to the increasing number of applications for machine learning (ML) and the accompanying usage of sensitive data in recent years, privacy preservation is of high importance. Among many proposed definitions, differential privacy (DP) has established itself as the most popular privacy definition in recent research. It enables worst-case privacy guarantees for all possible data at the same time. In practice, not all data require the same amount of privacy. Therefore, personalized DP, which provides individual guarantees, was proposed.
One state-of-the-art approach to preserve and quantify DP for ML applications is the private aggregation of teacher ensembles (PATE). In a voting process, an ensemble of arbitrary ML models trained on partitions of sensitive data produces labels for a public unlabeled dataset. The DP of the sensitive data is measured during the votings. Afterwards, a target ML model is trained on the produced labels and the public data. Since the target model never sees any sensitive data, the privacy preservation is intuitively understandable.
In this thesis, three different extensions of the PATE approach that enable personalized DP are proposed. Depending on the privacy personalization, these approaches can reduce the privacy costs of data with high privacy preferences significantly by increasing the costs of data with lower preferences. Hence, more utility can be acquired and donors of sensitive data have the option to determine individual privacy preferences. Furthermore, it can be shown that data augmentation may improve the utility of teacher ensembles without increasing their privacy expenditure. Both improvements of PATE, namely personalization and data augmentation, enable more practical applications of privacy-preserving ML.
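The PATE voting step described above can be sketched as a noisy argmax over the teachers' votes (a simplified sketch without the privacy accounting, not the thesis's personalized variants):

```python
import numpy as np

def pate_label(teacher_preds, n_classes, epsilon, rng):
    """Noisy-max aggregation: count the teachers' votes per class, add
    Laplace noise of scale 1/epsilon, and return the winning class."""
    votes = np.bincount(teacher_preds, minlength=n_classes)
    noisy = votes + rng.laplace(scale=1.0 / epsilon, size=n_classes)
    return int(np.argmax(noisy))

rng = np.random.default_rng(42)
# 100 teachers, trained on disjoint partitions of the sensitive data,
# agree almost unanimously on class 2 for some public sample.
preds = np.array([2] * 97 + [0, 1, 1])
print(pate_label(preds, n_classes=3, epsilon=1.0, rng=rng))  # -> 2
```

When the teachers largely agree, the noise rarely changes the outcome, which is why high consensus buys utility at low privacy cost.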

Masterthesis as PDF (English)

The Influence of Training Parameters and Architectural Choices on the Vulnerability of Neural Networks to Membership Inference Attacks

Oussama Bouanani, 2021

Machine Learning (ML) classifiers have been gaining in popularity and use in recent years, as they offer the ability to solve complex decision-making problems in a variety of fields when trained with the appropriate data. These ML models, however, pose a privacy threat when they learn from private data. Membership Inference Attacks (MIAs) present a privacy breach where, given a data record and query access to a target classifier, an adversary attempts to determine whether the record was a member of the target's training dataset. Previous MIA-relevant work handled various aspects of implementing and evaluating the attacks, but lacked a study of the impact that individual architectural choices have on the vulnerability of a model to these attacks. It is therefore the interest of this thesis to explore the role that training parameters and architectural options play in the resiliency of a model to MIAs. In this work, a series of experiments is executed on one baseline model where, for each experiment, the baseline is trained with different variations of one single training parameter and different MIA types are launched on each variation to measure the impact of each architectural choice on the baseline's exposure to the attacks. Results show that architectural choices that tackle overfitting by decreasing the gap between a model's performance on training and validation data provide more protection against MIAs. An analysis of the most consistent tendencies across all experiment variations highlights the loss values generated from a target model's predictions as the main factor that allows an MIA to determine the membership status of a data record, as results also show that MIAs are more successful when they are able to learn to differentiate between the distributions of loss values of member and non-member samples.
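The role of loss values described above underlies the simplest form of MIA, a loss-threshold attack; a sketch with synthetic loss values (not the thesis's experimental setup):

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' (1) when the target model's loss on a record
    falls below a threshold, exploiting that overfitted models fit
    their training data more closely than unseen data."""
    return (losses < threshold).astype(int)

rng = np.random.default_rng(1)
member_losses = rng.uniform(0.0, 0.3, 500)     # low loss: seen in training
nonmember_losses = rng.uniform(0.4, 1.5, 500)  # higher loss: unseen
preds = loss_threshold_mia(
    np.concatenate([member_losses, nonmember_losses]), threshold=0.35)
accuracy = (preds == np.concatenate([np.ones(500),
                                     np.zeros(500)])).mean()
print(accuracy)  # -> 1.0 (the synthetic distributions do not overlap)
```

A model whose member and non-member loss distributions overlap, e.g. through regularization, leaves such an attack close to random guessing.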

Bachelorthesis as PDF (English)

Application and Evaluation of Differential Privacy in Health Data Classification Tasks

Maika Krueger, 2020

Data-driven medical research enables promising novel methods of clinical decision support and personalised medicine. For example, machine learning (ML) models are developed to detect unknown relationships between biomedical parameters in order to predict patients' diagnoses. However, these benefits come with concerns about patients' privacy. Privacy protection is considered an important problem in ML, especially in the health care sector, where research is based on sensitive information such as medical histories, genomic data, or clinical records. A model trained on sensitive data may store this sensitive information during the training process, and analysing the model's parameters or output can reveal it. Differential privacy (DP) is a strong mathematical framework that provides privacy guarantees in learning-based applications. Based on DP applications in several other areas, its use has been proposed to protect an individual's privacy in ML contexts. The Private Aggregation of Teacher Ensembles (PATE) is a state-of-the-art framework for private ML that is based on DP. This thesis presents an evaluation of PATE applied to a medical classification task using a small medical data set. To this end, the PATE model was compared to a non-private baseline model. To evaluate the ML algorithms used in the PATE framework, two logistic regression (LR) classifiers, a support vector machine (SVM), and two neural networks (NN) were compared using cross-validation, confusion matrices, F1 score, the receiver operating characteristic curve (ROC), the area under the ROC curve (AUC), and accuracy. Subsequently, the number of teachers, the noise injection, and the accuracy of the PATE model trained on the medical data were evaluated. Even with a small number of teachers, the accuracy of the PATE model was 0.76. Compared to the baseline model (0.83), this is a low loss of accuracy, while still guaranteeing a strong privacy of ε = 2.16. Moreover, with a small number of teachers, the agreement of the teachers on a predicted outcome was higher. Nevertheless, there were also limitations due to the small data set: small training data sets resulted in overfitting, and the model was less robust against noise injection due to the small number of teachers. However, this thesis gives an introduction to the application of PATE to a binary health classification task using a small medical data set, thus providing an initial understanding of the application of PATE in health care.

Bachelorthesis as PDF (English)

Entwurf eines hybriden Verschlüsselungsverfahrens aus einem klassischen und einem Post-Quanten-Verfahren

Marian Sigler, 2019

The anticipated development of large quantum computers renders conventional asymmetric encryption schemes such as RSA and Diffie-Hellman insecure. A timely transition to quantum-computer-resistant schemes, so-called post-quantum schemes, is therefore advisable. This thesis presents and compares several such schemes.

In order to switch to post-quantum cryptography as quickly as possible despite remaining concerns about its security, these new schemes can be combined with an established scheme into a hybrid scheme. Different ways of implementing such a scheme are compared and two preferred variants are concretely defined.
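A common way to realize such a hybrid scheme is to derive the session key from both shared secrets, so that it remains secure as long as at least one component scheme holds. A minimal sketch (illustrative, not one of the thesis's concrete variants, and not a vetted KDF construction):

```python
import hashlib

def combine_secrets(classical_secret: bytes, pq_secret: bytes,
                    context: bytes = b"hybrid-kem-v1") -> bytes:
    """Derive a session key from a classical and a post-quantum shared
    secret. Breaking the derived key requires breaking BOTH key
    exchanges, since either secret alone leaves the hash input unknown."""
    return hashlib.sha256(context + classical_secret + pq_secret).digest()

key = combine_secrets(b"\x01" * 32, b"\x02" * 32)
print(len(key))  # -> 32
```

Real designs additionally bind transcript data into the derivation and use a proper KDF; the sketch only shows the combining idea.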

Masterthesis as PDF (German)

Differential Privacy: General Survey and Analysis of Practicability in the Context of Machine Learning

Franziska Boenisch, 2019

In recent years, with data storage becoming more affordable, increasingly large amounts of data are collected about everyone by different parties every day. This data collection makes it possible to perform data analyses that help to improve software, track user behavior, recommend products, or even make large advances in the medical field. At the same time, the concern about privacy preservation is growing. Differential Privacy (DP) offers a solution to the potential conflict of interest between the wish for privacy preservation and the need to perform data analyses. Its goal is to allow expressive data analyses on a whole population while achieving a mathematically accurate definition of privacy for the individual.
This thesis aims to provide an understandable introduction to the field of DP and to the mechanisms that can be used to achieve it. It furthermore depicts DP in the context of different privacy-preserving data analysis and publication methods. The main focus of the work lies on examining the practicability of DP in the context of data analyses. We therefore present implementations of two differentially private linear regression methods and analyze the trade-offs between privacy and accuracy. We found that adapting such a machine learning method to implement DP is non-trivial and that, even when applied carefully, privacy always comes at the price of accuracy. On our dataset, even the better-performing differentially private linear regression with a reasonable level of privacy produces a mean squared error twice as high as normal linear regression on non-privatized data. We moreover present two real-world applications of DP, namely Google's RAPPOR algorithm and Apple's implementation of DP, in order to analyze practicability at a large scale and to find possible limitations and drawbacks.
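The Laplace mechanism, a basic DP building block behind such analyses, can be sketched as follows (illustrative, not the thesis's regression implementation):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release `value` with epsilon-DP by adding Laplace noise whose
    scale is the query's sensitivity divided by the privacy budget."""
    return value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# Privatizing a counting query (sensitivity 1): a smaller epsilon,
# i.e. stronger privacy, yields noisier answers on average -- the
# privacy/accuracy trade-off the thesis measures for regression.
strong = np.std([laplace_mechanism(100, 1, 0.1, rng) for _ in range(2000)])
weak = np.std([laplace_mechanism(100, 1, 10.0, rng) for _ in range(2000)])
print(strong > weak)  # -> True
```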

Masterthesis as PDF (English)

Benutzerfreundliches Werkzeug zur Codeanalyse zur Erkennung von Sicherheitslücken

Sandra Kostic, 2018

There are dozens of analysis tools for detecting security vulnerabilities in already-written code, whether in the form of plug-ins or standalone tools. Despite this, unnecessary security vulnerabilities still make it into published code and reach end users, for instance in the form of an app or a desktop application, giving attackers access to the end users' private data.

The concept of this thesis is to tackle the problem at its origin, namely with the programmer who writes the code. To this end, a code analysis tool was developed that sensitizes programmers to security vulnerabilities, offers a variety of assistance, and remains user-friendly.

Based on a survey of programmers created as part of this thesis, capturing their experiences with and requirements for code analysis tools, a prototype was developed. This prototype was subsequently tested by study participants and evaluated by me.

The result is the prototype of a tool which, based on the evaluation results, raises awareness of potential security vulnerabilities and helps programmers write more secure code.

Masterthesis as PDF (German)

Property Testing of Physically Unclonable Functions

Christoph Graebnitz, 2018

This thesis deals with the properties that define a Physically Unclonable Function (PUF). Since a PUF should not be copyable, it makes sense to consider the predictability of a PUF through algorithms from the field of machine learning. PUFs can be interpreted as Boolean functions via their Fourier expansion. This allows the determination of the degree-one weight of a PUF. Furthermore, the degree-one weight of a Boolean function can serve as a metric that helps to identify the predictability of a Boolean function. For this reason, a central point of this work is the theoretical consideration of various probabilistic methods which can be used to approximate the degree-one weight of a Boolean function. The number of randomly selected inputs is essential to achieve a given absolute error with a certain probability. The empirical studies carried out in this thesis partially show that, under certain circumstances, a smaller number of inputs than indicated by theory is sufficient to achieve a particular absolute error.
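The approximation task can be illustrated with a simple Monte-Carlo estimator of the degree-one weight (an illustrative sketch, not one of the thesis's specific methods):

```python
import numpy as np

def degree_one_weight(f, n, n_samples, rng):
    """Monte-Carlo estimate of the degree-one weight of a Boolean
    function f: {-1,1}^n -> {-1,1}, i.e. the sum of the squared
    degree-one Fourier coefficients f^(i) = E[f(x) * x_i]."""
    x = rng.choice([-1.0, 1.0], size=(n_samples, n))
    fx = np.apply_along_axis(f, 1, x)          # f evaluated per sample
    coeffs = (fx[:, None] * x).mean(axis=0)    # estimate of each f^(i)
    return float(np.sum(coeffs ** 2))

rng = np.random.default_rng(0)
# A dictator function f(x) = x_0 carries its full Fourier weight on
# degree one, so the estimate should be close to 1.
w = degree_one_weight(lambda x: x[0], n=4, n_samples=20000, rng=rng)
print(round(w, 2))  # -> 1.0
```

A high degree-one weight indicates that the function is close to linear and hence easy to predict, which is why the metric relates to PUF security.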

Masterthesis as PDF (English)

Fourier Analysis of Arbiter Physical Unclonable Functions

Benjamin Zengin, 2017

Physical Unclonable Functions (PUFs) have emerged as a promising alternative to conventional mobile secure authentication devices. Instead of storing a cryptographic key in non-volatile memory, they provide a device-specific challenge-response behavior uniquely determined by manufacturing variations. The Arbiter PUF has become a popular PUF representative due to its lightweight design and potentially large challenge space. Unfortunately, contrary to the definition of a PUF, an Arbiter PUF can be cloned using machine learning attacks. Since PUFs can be described as Boolean functions, this thesis applies the concept of the Fourier expansion to Arbiter PUFs, focusing on the notion of influence. Based on this, statements about their Probably Approximately Correct learnability are deduced and discussed.
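The learnability mentioned above stems from the additive delay model, under which an Arbiter PUF is a linear threshold function of a transformed challenge; a minimal simulation sketch (hypothetical parameters):

```python
import numpy as np

def arbiter_puf_response(weights, challenge):
    """Additive delay model of an n-stage Arbiter PUF: the response is
    the sign of a linear function of the transformed challenge, which
    is exactly what makes the design learnable by linear classifiers."""
    # Standard feature transform: phi_i = product of (1 - 2*c_j), j >= i.
    phi = np.cumprod((1 - 2 * challenge)[::-1])[::-1]
    phi = np.append(phi, 1.0)                  # bias feature
    return int(np.sign(weights @ phi))

rng = np.random.default_rng(0)
n = 16
weights = rng.normal(size=n + 1)               # stage delay differences
challenge = rng.integers(0, 2, size=n)
response = arbiter_puf_response(weights, challenge)
print(response in (-1, 1))  # -> True
```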

Masterthesis as PDF (English)

Analyse von Audio-Bypass-Bedrohungen auf Android-Smartphones und Konzeption von Abwehrmaßnahmen

Wolfgang Studier, 2017

Smartphones are nowadays an everyday companion for most people, both in private and in professional settings. As a result, smartphones are often in the immediate vicinity of conversations and can thus be used to record conversations in which a wide variety of information with different protection needs is discussed. This thesis evaluates the risk that audio-bypass attacks on smartphones pose to information of varying sensitivity.

The analysis covers audio-bypass threats based on the baseband subsystem as well as on Android itself. Both systems are presented individually in detail in order to later assess the threat posed by various vulnerabilities. The subsequent analysis of the different attacks that could lead to an audio bypass is carried out using the attack potential model. Finally, various countermeasures against the possible attacks are designed, and the risks arising from the attacks are assessed for information classes of different sensitivity. Recommendations on behaviour and on the use of countermeasures are given, based on the protection needs of the information.

Masterthesis as PDF (German)

Security Analysis of Strong Physically Unclonable Functions

Tudor Alexis Andrei Soroceanu, 2017

Modern cryptography relies heavily on the ability to securely store secret information. In the last decade, Physical Unclonable Functions (PUFs) emerged as a possible alternative to non-volatile memory. PUFs promise a lightweight and lower-priced option compared to non-volatile memory, which has to be additionally secured and is known to be prone to reverse-engineering attacks. PUFs are traditionally divided into Weak PUFs and Strong PUFs, depending on the number of possible challenges.

One of the more popular Strong PUFs on silicon integrated circuits is the Arbiter PUF, in which two signals run through n stages, influenced by a challenge, and an arbiter decides the output of the PUF depending on which signal arrives first. As a single Arbiter PUF is easy to model and learn, Suh and Devadas proposed combining the outputs of multiple Arbiter PUFs with an XOR; however, this construction also turned out to be learnable. In this thesis we investigate the use of different combining functions for Arbiter PUFs.

As combined Arbiter PUFs show structural similarity to linear feedback shift registers (LFSRs) and nonlinear combination generators (the parallel use and combination of multiple LFSRs), we carry out known attacks targeting the combining function on Arbiter PUFs. We show that in order to prevent these attacks, more sophisticated combining functions than XOR are needed. We propose a new class of Strong PUFs called Bent Arbiter PUFs, using Boolean bent functions as combiners. It turns out that Bent Arbiter PUFs are resistant against this kind of attack. Future work must include an analysis of the resistance of Bent Arbiter PUFs against machine learning attacks.
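As an illustration of the proposed combiner class, the classic four-variable bent function x1·x2 ⊕ x3·x4 can combine four Arbiter PUF outputs; the sketch below also checks the flat Walsh spectrum that characterizes bentness (illustrative, not the thesis's exact construction):

```python
from itertools import product

def bent_combiner(b):
    """The classic bent function f(x1,...,x4) = x1*x2 XOR x3*x4, used
    here as a combiner for the outputs of four Arbiter PUFs."""
    return (b[0] & b[1]) ^ (b[2] & b[3])

def walsh(f, a, n=4):
    """Walsh coefficient W_f(a) = sum over x of (-1)^(f(x) XOR a.x)."""
    return sum((-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) % 2))
               for x in product((0, 1), repeat=n))

# A bent function has a perfectly flat Walsh spectrum, |W_f(a)| =
# 2^(n/2) = 4 for every a: it is maximally far from all affine
# functions, which is what defeats correlation-style attacks.
spectrum = {abs(walsh(bent_combiner, a)) for a in product((0, 1), repeat=4)}
print(spectrum)  # -> {4}
```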

Masterthesis as PDF (English)

Analyse eines neuen digitalen Signaturenverfahrens basierend auf Twisted-Edwards-Kurven

Johanna Jung, 2016

This thesis examines the security of digital signature schemes in elliptic curve cryptography. The first signature scheme considered is the Elliptic Curve Digital Signature Algorithm (ECDSA), standardised as early as 1998.
After the discovery of twisted Edwards curves, Bernstein et al. designed the Edwards-curve Digital Signature Algorithm (EdDSA), a signature scheme based on these curves, published in 2012. Besides efficiency, the scheme also offers security advantages.

A comparison of these signature schemes showed that EdDSA places lower requirements on the hash function used than ECDSA. Moreover, EdDSA's signing algorithm does not require a random number generator. In addition, point addition on twisted Edwards curves is more resistant to side-channel attacks than addition on the Weierstrass curves used by ECDSA, and EdDSA's scalar multiplication also incorporates a number of countermeasures against side-channel attacks.
Furthermore, the security of EdDSA could be proven in the random oracle model under the assumption that the discrete logarithm problem for elliptic curves is hard.
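The reason EdDSA's signing algorithm needs no random number generator is its deterministic nonce derivation: the per-signature nonce is hashed from a secret key prefix and the message. A minimal sketch of that one idea using only Python's standard library (the group order L is the Ed25519 value from RFC 8032; the rest of the signing algorithm is omitted and the prefix below is a stand-in, not real key material):

```python
import hashlib

# Group order L of the Ed25519 base point (RFC 8032).
L = 2**252 + 27742317777372353535851937790883648493

def deterministic_nonce(prefix: bytes, message: bytes) -> int:
    # EdDSA derives the nonce r by hashing a secret prefix (part of the
    # expanded private key) together with the message. Signing therefore
    # needs no RNG, and the same message always yields the same nonce --
    # unlike ECDSA, where a biased or reused random k leaks the private key.
    digest = hashlib.sha512(prefix + message).digest()
    return int.from_bytes(digest, "little") % L

prefix = b"\x01" * 32  # hypothetical stand-in for the secret hash prefix
r1 = deterministic_nonce(prefix, b"message")
r2 = deterministic_nonce(prefix, b"message")
print(r1 == r2)  # → True: repeatable per message, distinct across messages
```

Distinct messages hash to distinct nonces (up to a negligible collision probability), which is exactly the property whose absence has broken fielded ECDSA deployments with faulty RNGs.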

Master's thesis as PDF (German)

Security as a Service: Centralized Security Mechanisms for Small Institutions in Business and Public Administration

Benjamin Swiers, 2016

The task of IT security is to protect IT systems, the information stored and processed on them, and the people who process that information. Establishing and implementing IT security requires extensive expert knowledge that small institutions in business and public administration often lack. Given the growing threats to IT systems and the scarcity of financial and human resources, small institutions in particular are increasingly overwhelmed by the implementation of IT security. Outsourcing security measures to a cloud environment and providing them as services is one attempt to address this problem. The outsourcing of such services is referred to as Security as a Service (SecaaS).

This master's thesis examined the potential of Security as a Service for public authorities and small companies and demonstrated the need for such developments. To assess the opportunities and risks of this technology, a method was developed for determining the suitability of security measures for a given environment on the basis of various comparison criteria. Using this method, decentralized and centralized implementations of selected security measures were examined and compared in a case study.

The study found that the use of Security as a Service can bring various advantages. It was observed that outsourcing security measures can, in some cases, result in cost savings. Furthermore, outsourcing complicated measures considerably relieves staff and increases user acceptance: since users only have to carry out simple, familiar measures, the risk of user errors and circumvention attempts can be reduced. Combined with the implementation and maintenance of these services by the provider's qualified personnel, the quality and effectiveness of security measures can be increased.

Despite its various advantages, however, the use of Security as a Service also entails risks. Outsourcing data and services to a cloud requires considerable trust in the capabilities and reliability of the cloud provider. Consequently, selecting the right provider and drafting the contractual agreements require a high degree of expertise. Despite the contractual obligation to comply with certain security standards, various threats remain. Compared with a decentralized implementation of security measures, the damage caused by a successful attack could be considerably higher and more far-reaching, because the data is concentrated in the cloud.

Usability Study of a Mobile Prototype for the Online ID Function

Sandra Kostic, 2016

The online ID function (Online-Ausweisfunktion) supplements the new German ID card: with the help of a separate card reader, it allows users to identify themselves online and to disclose only the data they actually want to share. To tie the online ID function to a device users already know, the idea arose to replace the additional (paid) card reader with a device that almost everyone in Germany owns: the smartphone. To determine whether interacting with a smartphone app actually improves the usability of the online ID function, a usability study was designed. The study is based on a conceptual prototype of such an app, which study participants then tested by means of a survey to assess how usable it is.

Bachelor's thesis as PDF (German)

Feasibility Study of the Online ID Function of the New German ID Card on Android

Tim Ohlendorf, 2015

With the rapid digitalization of society, the topic of digital identities is becoming ever more relevant. The online ID function of the new German ID card (neuer Personalausweis, nPA) allows every citizen of the Federal Republic of Germany to identify themselves on the Internet to governmental and non-governmental service providers via the eID infrastructure. Here, not only use on the home PC but also proof of identity on a mobile device plays a major role. To make the online ID function accessible to as many people as possible on smartphones and tablets, new user-centered approaches for the mobile use of the eID infrastructure are needed. This thesis examines the feasibility of a mobile application for using the online ID function of the nPA on the Android platform. The underlying concept relies on permanently storing the personal ID data, certificates, and key material required for the so-called eID transaction on the mobile device. Consequently, users only need their smartphone or tablet when they want to identify themselves to a service provider.

To guarantee an adequate level of security for the sensitive nPA data on the device, a related thesis developed a security concept that makes use of various mobile security components. Alongside the basics of the online ID function, these components are explained in the course of this thesis and finally examined for their feasibility with currently available Android hardware and software.

Bachelor's thesis as PDF (German)

Design and Security Evaluation of a Mobile Online ID Function

Florian Otterbein, 2015

On 1 November 2010, the new German ID card was introduced. Equipped with an electronic chip, it enables citizens to authenticate themselves online. Using this online ID function requires not only the ID document but also client software and a card reader. One drawback is the high purchase cost of the card reader, which deters many users.

At the same time, smartphone use is growing in importance. Mobile payment systems such as Apple Pay and Android Pay, which process sensitive user data and offer secure authentication, are becoming more popular and are being accepted by users.
This thesis therefore develops a concept that enables the online ID function of the new German ID card without a card reader. In addition, the eID function is to be made accessible via the smartphone in order to increase the acceptance rate. The concept described here enables users to store their ID card data and subsequently carry out an authentication process on the smartphone. The thesis explains both software-based and hardware-based security solutions for Android OS that can be used for particularly sensitive data processing and storage.

A subsequent security evaluation examines the concept for possible security weaknesses. It is shown that no practical attacks are possible: identity theft and the interception of sensitive data can be ruled out. Less security-critical attacks remain possible and cannot be prevented with the current state of the art. Using the online ID function on the smartphone is therefore possible in theory, but difficult to implement in practice due to the heterogeneous Android landscape.

Master's thesis as PDF (German)