|Lecturer||Prof. Dr. Matthias Wählisch|
|Institution||Institute of Computer Science|
Freie Universität Berlin
|Room||Depends on meeting date|
|Start||10.02.2011 | 14:00|
Preliminary meeting: Wednesday, 14.04.2010, 13:00 (c.t.), SR046
Interim meeting: 30.04.2010, 09:00 (s.t.), SR055
Attention: Students who do not meet all deadlines listed in the timetable will lose the right to take part in the final presentation.
The talks will be given according to this schedule:
|Thursday, 1.7.2010, Room KR 137:|
|10:00 - 10:30||Mirjam Fabian||Security in RFID Systems|
|10:30 - 11:00||Oliver Türpe||Compare encryption algorithms: Skipjack vs. AES|
|11:00 - 11:30||Martin Weigelt||A survey of virtualization technologies|
|11:30 - 12:00||Break|
|12:00 - 12:30||Benjamin Valentin||Memory Model of the C Programming Language|
|12:30 - 13:00||Esra Ünal||The Byzantine Generals' Problem|
|13:00 - 13:30||Phillip Berndt||Random-Number Generation for Testbeds and Simulation|
|Friday, 2.7.2010, Room SR 006:|
|10:00 - 10:30||Stefanie Hallmich||Fundamentals of modern control concepts|
|10:30 - 11:00||Silvio Paepke||Performance of the Domain Name System|
|11:00 - 11:15||Break|
|11:15 - 11:45||Alexander Bach||Peer-to-Peer in Practice|
|11:45 - 12:15||Jan Knipper||Greening Network Links|
|12:15 - 12:30||Matthias Wählisch||Summary|
Security in RFID Systems
In recent years, RFID-based (radio-frequency identification) payment, identification, and security applications have become common in everyday life. Students pay for their meals at the cafeteria, university staff use RFID systems to lock their rooms, and even passports contain RFID chips today. The task is to evaluate the different types of RFID systems from a security perspective. 1. How do the systems prevent data manipulation? 2. How is the data protected from unauthorized access?
Assigned to: Mirjam Fabian [report, slides]
Packet Filtering in the Linux Kernel
Besides routing and forwarding, packet filtering and mangling are among the most important features of a modern network protocol stack. In this proseminar topic you have to introduce the following user-space programs: iptables, ip6tables, arptables, ebtables. You have to discuss how these interact with the kernel and with each other. Provide examples and explain the syntax. Additionally, give an overview of nftables, which might become the successor of xtables.
Assigned to: Marten Losansky [no report]
The Debian Packaging System
The Debian GNU/Linux distribution provides a sophisticated packaging system for users and developers. Dozens of tools are provided to ease the packaging procedure. In this proseminar topic you have to introduce the Debian package format and give an overview of the most important tools. Explain how these tools interact with each other and for which application scenario they are required. Discuss how packages can be created for other architectures and distributions.
Assigned to: Michael Krause [no report]
Does USB 3.0 Make Us Happy?
USB 3.0 will be the new standard for communication between devices and a host controller (usually a personal computer). Understand the technical details, requirements, and design of USB 3.0. Analyze whether and when USB 3.0 will be supported on Mac, PC, Linux, etc. Will USB 3.0 displace any other technologies on the market? Compare USB 3.0 with USB 2.0 and with at least one of the following technologies: FireWire, Power over Ethernet, or eSATA. Investigate the minimum/maximum power supported as well as cable length/type and the signaling rates. Have a look inside the data packet structure and give a brief overview.
Assigned to: Steffi Brandsch [no report]
Compare encryption algorithms: Skipjack vs. AES
Both algorithms are symmetric encryption and decryption algorithms, but which is more secure, and where are the limits? Explain both algorithms and create useful illustrations for each of them. Consider the runtime and memory usage of each algorithm. Consider the key length and analyze its influence on the level of security. Which of these algorithms is better suited to guarantee confidentiality in WSNs?
Assigned to: Oliver Türpe [report, slides]
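The influence of key length can be made concrete with a back-of-the-envelope calculation. The following sketch (plain Python arithmetic, for illustration only) compares the brute-force key-search spaces of Skipjack's 80-bit key and AES's 128-bit key:

```python
# Illustrative comparison of brute-force key-search spaces.
# Skipjack uses an 80-bit key; AES-128 uses a 128-bit key.
skipjack_keys = 2 ** 80
aes128_keys = 2 ** 128

# Every additional key bit doubles the search space.
ratio = aes128_keys // skipjack_keys
print(f"Skipjack key space: 2^80  = {skipjack_keys}")
print(f"AES-128 key space:  2^128 = {aes128_keys}")
print(f"AES-128 key space is 2^{ratio.bit_length() - 1} times larger")
```

The 48-bit gap means AES-128 offers a search space about 2.8 * 10^14 times larger; note, however, that key length alone does not determine security, as the structure of the cipher matters too.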
A survey of virtualization technologies
During the last years, cheaper and faster hardware and new technologies like AMD-V or Intel VT have brought virtualization into focus for a lot of applications. Different concepts like paravirtualization, hardware-assisted virtualization, or containers were developed to virtualize whole networks on a single computer or perform live migrations of server appliances. Your task will be to give an overview of several virtualization techniques and their implementations. What are the differences between products like VMware ESX(i), Parallels, QEMU, KVM, or Xen? You should point out which solution would be adequate for which purpose.
Assigned to: Martin Weigelt [report, slides]
Smoothing and Filtering of Measurement Data
In measurement data acquisition, the measured values of a physical quantity (e.g., length, electric current, temperature, ...) are often distorted by outliers, offsets, missing values, or other influences. A distinction is made between systematic and random influences:
Systematic influences: These measurement errors give the measurement a trend, with the values increasing or decreasing over time or space, or showing a systematic tendency toward higher or lower values. Examples of systematic measurement deviations are a zero-point shift (offset), fluctuations in a lamp's brightness following the AC mains voltage, or systematic changes in the experimental conditions.
Statistical or random measurement deviations: These occur randomly and irregularly. They fluctuate in magnitude and sign but stay within known bounds. They are referred to as noise and are caused, for example, by environmental influences or by the subjective imperfection of the experimenter (uncertainty in reading a scale or handling a measuring instrument).
Filtering methods are used to emphasize specific information, but also to smooth data series. Smoothing methods are a special case of filtering that can reduce random fluctuations when recording a series of measurements.
The goal of this work is to examine different smoothing methods and filters (mean, median, ...) as well as their computational cost and real-time capability. It must also be checked whether the respective method requires one or several measured values for smoothing.
Assigned to: Dinh Bao Dang [no report]
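As a minimal illustration of two of the methods to be surveyed, the following Python sketch (with made-up sample values) implements window-based mean and median smoothing; both need a full window of measured values per output point:

```python
from statistics import median

def moving_average(samples, window=3):
    """Smooth a series with a sliding arithmetic mean over `window` samples."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def median_filter(samples, window=3):
    """Median filter: robust against single outliers, unlike the mean."""
    return [median(samples[i:i + window])
            for i in range(len(samples) - window + 1)]

noisy = [1.0, 1.1, 9.0, 1.0, 0.9, 1.1]   # 9.0 is an outlier
print(moving_average(noisy))  # the outlier is spread across three windows
print(median_filter(noisy))   # the outlier is suppressed entirely
```

The example also hints at the real-time question from the task: both filters must buffer `window` samples before producing output, which introduces latency.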
Memory Model of the C Programming Language
With the spread of multicore processors, programmers increasingly have to deal with concurrent programming. The developers of programming languages must likewise address questions of concurrency; to this end, memory models are developed.
This talk should report on memory models and on the current drafts for a memory model of the C programming language, and explain their motivation.
Assigned to: Benjamin Valentin [report, slides]
Survey: Remote Reconfiguration Solutions for WSNs
Wireless Sensor Networks (WSNs), in their simplicity, pose an ambitious challenge to today's software engineers. Their application areas include disaster relief, body area networks, assisted living, building automation, environmental sensing, and reconnaissance. These seemingly diverse applications place some common requirements on proposed solutions, i.e., they should be long-lived, autonomous, and resilient. Software development, on the other hand, is a cyclic process. This results in a need for post-deployment software updates to accommodate bug fixes, focus shifts, etc.
Collecting all nodes of the network for a code update is often impossible and always a hassle. Hence, nearly all software development platforms for WSNs include support for remote reconfiguration of the network. In this work, these solutions are to be studied and compared against each other. Finally, a set of requirements is to be synthesized for an ideal retasking solution.
Assigned to: Hagen Mahnke [no report]
Basic Concepts of Dependable Computing
The development of dependable systems requires a firm understanding of the behaviour of faulty systems. Cristian, Avizienis, Laprie, and many others have defined a framework for describing and analysing faults, errors and failures. Give an overview of the basic concepts and terminology, and illustrate their usefulness by discussing papers where they are (or are not) applied.
Assigned to: Astrid Koennecke (withdrawn)
Basics of Performability Analysis
Meyer et al. have developed performability analysis as a framework for analysing the performance of degradable systems. Present an introduction to the basic framework and illustrate its application.
Assigned to: TBA
The Byzantine Generals' Problem
Faults in computing systems may be classified by their severity and by the behaviour a faulty component is allowed to exhibit. Among these, so-called Byzantine faults, or Byzantine behaviour, are often considered the most general and most severe class. The Byzantine Generals' Problem, introduced by Lamport et al., describes these faults using the metaphor of generals who need to arrive at a common decision. Give an introduction to the problem, its application, and the solution proposed by Lamport et al.
Assigned to: Esra Ünal [report, slides]
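The core of Lamport's solution can be sketched for the smallest interesting case, OM(1) with four generals (one commander, three lieutenants). The following Python sketch is a simplification for illustration: messages are function arguments, and a traitorous lieutenant is assumed to relay the flipped order.

```python
from collections import Counter

DEFAULT = "RETREAT"

def majority(values):
    # Deterministic majority; Lamport's OM falls back to a fixed
    # default value when no strict majority exists.
    counts = Counter(values).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return DEFAULT
    return counts[0][0]

def om1(commander_orders, traitors):
    """One run of OM(1) with lieutenants 0..2.
    commander_orders[i] is what the commander sends lieutenant i
    (a traitorous commander may send different orders to each).
    traitors is the set of traitorous lieutenant indices; as a
    simplifying assumption, a traitor relays the flipped order."""
    flip = {"ATTACK": "RETREAT", "RETREAT": "ATTACK"}
    decisions = {}
    for i in range(3):
        if i in traitors:
            continue
        received = [commander_orders[i]]          # direct order
        for j in range(3):                        # relayed orders
            if j == i:
                continue
            order = commander_orders[j]
            received.append(flip[order] if j in traitors else order)
        decisions[i] = majority(received)
    return decisions

# Loyal commander, lieutenant 2 is a traitor: loyal lieutenants still obey.
print(om1(["ATTACK"] * 3, traitors={2}))
# Traitorous commander sends mixed orders: loyal lieutenants still agree.
print(om1(["ATTACK", "RETREAT", "ATTACK"], traitors=set()))
```

With four generals one traitor can be tolerated, matching the n >= 3m + 1 bound from the paper.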
Random-Number Generation for Testbeds and Simulation
Testbeds and simulations are important tools in the development of new methods, systems and algorithms, as they allow experimentation under realistic, but controlled conditions. As real-world scenarios typically do not exhibit deterministic behaviour, pseudo random numbers are required for such experiments. For these purposes, pseudo random number generators (PRNGs) are used. Give an overview of the requirements for PRNGs in testbeds and simulations and present a suitable PRNG algorithm from the literature.
Assigned to: Phillip Berndt [report, slides]
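To make the requirements concrete, here is a minimal Python sketch of a linear congruential generator, one of the simplest PRNGs from the literature (the constants are the well-known Numerical Recipes parameters). Reproducibility from a seed is exactly what testbed experiments need:

```python
def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
    Fast and reproducible, but its statistical quality is too weak for
    serious simulation studies, where generators such as the Mersenne
    Twister are usually preferred."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m          # uniform float in [0, 1)

gen = lcg(seed=42)
print([next(gen) for _ in range(3)])
```

The same seed always yields the same sequence, so experiments remain repeatable; a generator's period and statistical test results (e.g., from TestU01) are the criteria to discuss in the survey.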
Security in Gaming Consoles
Manufacturers of gaming consoles sell licenses to game developers that allow them to produce games for their consoles. As game developers have no interest in having their games freely copied without getting paid, console manufacturers invest heavily in the development of sophisticated copy protection mechanisms. Most of them get cracked quickly; some, like the one used on the PlayStation 3, do not. The focus of this topic is to give an overview of the copy protection techniques used in gaming consoles today and to describe the methods used to defeat them.
Assigned to: Simon Philipp Hohberg (withdrawn)
Exploration of NAT Deployments
Network Address Translation (NAT) is a mechanism that rewrites IP addresses in the IP header. Typically, it is used to cope with the exhaustion of IPv4 addresses: one or several (private) IP addresses may be hidden behind a single (public) IP address. This approach works against the end-to-end paradigm, as explicit states are established at NAT boxes. Nevertheless, almost all home users are connected to their ISPs via a NAT gateway. The goal of this work is to present studies that measure the deployment of NATs. How were the measurements conducted? What are the key findings (e.g., is there a correlation between countries and NAT deployments)?
Assigned to: Christian Windolf (withdrawn)
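The per-flow state kept at NAT boxes can be sketched with a toy port-translating NAT in Python (addresses and the port range are made-up illustration values, not taken from any real gateway):

```python
import itertools

class Nat:
    """Toy port-translating NAT (NAPT): maps each (private_ip, private_port)
    flow to a fresh port on one shared public IP, as home gateways do."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}                       # explicit per-flow state
        self.ports = itertools.count(40000)   # next free public port

    def outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self.ports)
        return (self.public_ip, self.table[key])

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.0.2", 5000))   # ('203.0.113.7', 40000)
print(nat.outbound("192.168.0.3", 5000))   # ('203.0.113.7', 40001)
print(nat.outbound("192.168.0.2", 5000))   # same mapping reused
```

The translation table is exactly the state that breaks the end-to-end paradigm: inbound packets can only be delivered if a matching entry exists.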
Performance of the Domain Name System
The Domain Name System (DNS) is a crucial part of our daily Internet-based communication. Humans usually use names instead of IP addresses to communicate with other parties at the application layer. The performance of the DNS has a direct impact on our 'delay experience': requesting a web site, for example, takes a long time if resolving the name in the URL is slow. In this work, performance measurement studies of the DNS should be analyzed and discussed from the client perspective. "What is the typical share of delay attributable to the DNS during a web site request?" is one example question.
Assigned to: Silvio Paepke [report, slides]
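A simple client-side measurement of name-resolution delay can be sketched in Python (the host names are placeholders; note that the local resolver may answer from its cache, so only the first lookup reflects the full recursive-resolution delay):

```python
import socket
import time

def resolve_time(hostname):
    """Return the wall-clock time (in seconds) one name lookup takes."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 80)
    return time.perf_counter() - start

# Placeholder host names; substitute the sites you want to profile.
for host in ("localhost", "example.org"):
    try:
        print(f"{host}: {resolve_time(host) * 1000:.1f} ms")
    except socket.gaierror as err:
        print(f"{host}: lookup failed ({err})")
```

Such micro-measurements only cover the stub-resolver view; the studies to be surveyed additionally dissect delays at recursive resolvers and authoritative servers.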
Peer-to-Peer in Practice
Peer-to-Peer (P2P) applications (e.g., BitTorrent) are widely deployed in the current Internet. This is mainly due to the idea of P2P techniques, which provide end users with a high flexibility to deploy new services without relying on dedicated infrastructure components. This work should briefly introduce structured and unstructured P2P networks, in each case based on an implementation. The focus of the work, however, should lie on an overview of deployed P2P applications. Applications should be classified based on their P2P substrate (structured vs. unstructured) and their communication model (unicast vs. multicast).
Assigned to: Alexander Bach [report, slides]
Monitoring the Internet Backbone Routing
The current inter-domain routing protocol is BGP. BGP implements routing between autonomous systems (ASes). A BGP speaker announces the IP prefixes it is responsible for. The Internet's AS-level connectivity is, however, vulnerable to incorrect routing updates: the connectivity intended by an Internet Service Provider (ISP) may look different outside the ISP's network due to misconfiguration or attacks. The goal of this work is to give an overview of current global routing monitoring systems. A special focus should be dedicated to the Cyclops system developed at UCLA. This may include hands-on tests.
Assigned to: TBA
Visualizing the Internet
The Internet can be modelled as a graph. Nodes may be represented by IP prefixes, Autonomous System numbers, or routers. Visualizing a network with the size of the Internet is an intricate task with respect to processing time and memory constraints, as well as a clear arrangement of the provided information. This work should analyze existing visualization projects and discuss their techniques to create a graphical representation of the Internet. Are there any other approaches in addition to a pure graph layout? Which data can be extracted from Internet measurements and used for visualization? Some of the existing tools are freely available, and you are encouraged to test some of them.
Assigned to: David Goldwich [no report]
Greening Network Links
Green IT is a hot topic in the Internet community. The idea is to optimize software and hardware to reduce power consumption. This can be achieved, for example, by applying a suitable communication model or by modifying protocol behaviour. This work should focus on modeling the power consumption of a link with respect to its current utilization.
Assigned to: Jan Knipper [report, slides]
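A common starting point in such modeling is a linear per-link power model. The following Python sketch uses illustrative numbers (an idle link drawing 90% of peak power) and is not a measured model of any real device:

```python
def link_power(utilization, p_idle=0.9, p_max=1.0):
    """Hypothetical linear power model for a network link (normalized):
    an idle link already draws p_idle of its maximum power, and the
    traffic-dependent share grows linearly with utilization in [0, 1].
    The numbers are illustrative; real links are often far from
    energy-proportional."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_max - p_idle) * utilization

for u in (0.0, 0.5, 1.0):
    print(f"utilization {u:.0%}: relative power {link_power(u):.2f}")
```

The large idle share in this kind of model is precisely why techniques such as putting links to sleep or adapting link rates are studied in Green IT.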
Gossip-based information dissemination in networks
Different gossip-based protocols exist to spread information throughout a network. The authors analyse a combined push-pull protocol and provide a set of insightful results. The purpose of this seminar project is to understand and present the protocol, and the questions and answers it addresses, rather than to capture the mathematical analysis in detail.
Assigned to: Hartono Sugih (withdrawn)
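The combined push-pull protocol can be sketched as a small Python simulation (uniform random peer selection in synchronous rounds; a simplification of the model analysed in the literature):

```python
import random

def push_pull_rounds(n, seed=1):
    """Simulate push-pull rumor spreading: in each round every node
    contacts one uniformly random peer; if either side knows the
    rumor, both do afterwards (push and pull combined).  Returns the
    number of rounds until all n nodes are informed."""
    rng = random.Random(seed)      # fixed seed for reproducibility
    informed = {0}                 # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        newly = set()
        for node in range(n):
            peer = rng.randrange(n)
            if node in informed or peer in informed:
                newly.update((node, peer))
        informed |= newly
        rounds += 1
    return rounds

print(push_pull_rounds(1000))  # rounds grow roughly like log(n)
```

Running this for increasing n illustrates the logarithmic spreading time that the analysis establishes for the combined push-pull scheme.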
Performance-security tradeoff in mobile ad hoc networks
The authors investigate performance characteristics of secure group communication systems in mobile ad hoc networks that employ intrusion detection techniques for dealing with insider attacks, tightly coupled with rekeying techniques for dealing with outsider attacks. The objective is to identify optimal settings, including the best intrusion detection interval and the best batch rekey interval, under which the system lifetime (mean time to security failure) is maximized while satisfying performance requirements.
Assigned to: TBA
Fundamentals of modern control concepts
The Nintendo Wii and the Apple iPhone overcame the need for ever-faster hardware by revolutionizing control concepts. Their success is based on the use of well-known sensor technology in a new context. Find other examples, introduce the underlying techniques, and explain in which human-machine interactions they were used before being introduced in entertainment products. Develop new concepts if you can.
Assigned to: Stefanie Hallmich [report, slides]