Completed Dissertations

37 theses

Schlipf, Lena: Stabbing and Covering Geometric Objects in the Plane

Advisors: Christian Knauer, Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 11.09.2013

In this thesis, we consider a variety of different geometric covering and stabbing problems in the plane. Stabbing and covering geometric objects are two widely studied problems in computational geometry. These problems are closely related; there are many cases where covering problems are dual to stabbing problems. 

We first study a problem that was posed by Tamir in 1987: "Given a set of geometric objects in the plane, can one decide in polynomial time whether there exists a convex polygon whose boundary stabs every object?" This boundary is then called a convex stabber. We answer this question by proving that deciding the existence of a convex stabber is NP-hard for many types of geometric objects. Additionally, we consider an optimization version and prove it to be APX-hard for most of the considered objects.

A similar problem is deciding whether geometric objects can be stabbed with the vertices of a rotated, scaled and translated copy of a given polygon. To the best of our knowledge, this problem has not been studied before, and we present the first polynomial-time algorithm.

Another stabbing problem studied in this thesis is the problem of stabbing sequences of geometric objects: given a distance measure and two sequences of geometric objects, compute two point sequences that stab them such that the distance between these point sequences is as small as possible (under the given distance measure). We give efficient algorithms for this problem where the objects are either line segments or disks and the distance measure is the discrete Fréchet distance.
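The distance measure used here, the discrete Fréchet distance, has a standard dynamic-programming formulation. A minimal sketch for two point sequences (an illustration of the distance measure only, not of the stabbing algorithm):

```python
from math import dist
from functools import lru_cache

def discrete_frechet(p, q):
    """Discrete Frechet distance between point sequences p and q,
    via the standard dynamic program (recursive for clarity)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(p[i], q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (2, 1)]))
```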

Then, we consider covering problems. We study a new version of the two-center problem where the input is a set D of disks in the plane. We first study the problem of finding two smallest congruent disks such that each disk in D intersects one of these two disks. Then, we study the problem of covering the set D by two smallest congruent disks. We also investigate an optimization version. For these problems, we give efficient exact and approximation algorithms. 

Finally, we investigate the problem of computing a largest-area rectangle inscribed in a convex polygon with n vertices. If the order of the vertices of the polygon is given, we give approximation algorithms whose running times depend only logarithmically on n.

Scholz, Sven: Ähnlichkeit von Formen (Similarity of Shapes)

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 18.05.2011

The goal of the present work was to develop a system for automated similarity retrieval of figurative images---especially trademark images---which gives results that resemble human similarity estimation. 

In the first chapter, findings about the peculiarities of the perception of images and about human similarity estimation are compiled and the special needs of similarity retrieval of trademark images are explained.

As the depicted shapes play an important role in the estimation of similarity, an approach for detecting the shapes has been developed. It takes into account that shapes may be depicted in different ways (by regions, by textures, by contour lines) and that images often contain compression artefacts and noise.

For estimating the similarity of images based on the detected shapes, an approach has been developed that, in a first stage, computes transformations mapping one image onto the other and, in a second stage, compares the mapped images. For the computation of the transformations an existing randomized approach has been enhanced; it chooses appropriate transformations by collecting votes. For the comparison of the mapped images a new similarity measure on the contour lines has been developed which takes the correspondences in position and direction into account.

Based on these components a system for similarity retrieval has been developed which also considers the special needs of similarity retrieval of trademark images. The experimental results show a high conformance with human similarity estimation. The results are significantly better than the ones achieved by existing systems.

Wernisch, Lorenz: Dominanzprobleme für Punktmengen (Dominance Problems for Point Sets)

Degree: PhD
Submission date: 01.06.1994

Ribó Mor, Ares: Realization and Counting Problems for Planar Structures: Trees and Linkages, Polytopes and Polyominoes

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 15.02.2006

Part I of my thesis is about planar linkages. We consider motions of linkages that avoid crossings of bars. We study problems related to self-touching frameworks, in which multiple edges converge to geometrically overlapping configurations. Chapter 2 is about the unfoldability of trees. We show that every monotone tree is unfoldable. A δ-perturbation of a self-touching configuration is a repositioning of the vertices within disks of radius δ that is consistent with the combinatorial embedding in R^2. In Chapter 3 we prove that every self-touching configuration admits such a δ-perturbation for every δ > 0. The classical Maxwell-Cremona theorem is a powerful tool that establishes a bijection between the set of classical equilibrium stresses of a planar configuration and the set of three-dimensional polyhedral terrains that project onto it. In Chapter 4 we present a generalization of the Maxwell-Cremona correspondence to self-touching configurations and generalized polyhedral terrains.

Part II is about the number of spanning trees of a planar graph with applications to the embedding of polytopes on small integer grids using the Maxwell-Cremona lifting. In Chapter 5 we give lower and upper bounds for the maximum number of spanning trees. We present a new method based on transfer matrices for computing the asymptotic number of spanning trees of recursively constructible families of graphs. We discuss several techniques for obtaining upper bounds. Apart from the general case, we study the particular cases when the graph has smallest face cycle 4 and 5, for which the best results are obtained using a probabilistic method. These results are used in Chapter 6 for obtaining improved bounds on the minimum size of the integral grid in which all combinatorial types of 3-polytopes can be embedded.
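The spanning-tree counts studied in Chapter 5 can be probed concretely via Kirchhoff's matrix-tree theorem: the number of spanning trees of a graph equals any cofactor of its Laplacian. A small numerical sketch (the thesis's transfer-matrix and probabilistic methods go well beyond this):

```python
import numpy as np

def count_spanning_trees(n, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees
    equals any cofactor of the graph Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    # delete row and column 0, take the determinant of the rest
    return round(np.linalg.det(L[1:, 1:]))

# example: the complete graph K4 has 4^2 = 16 spanning trees
print(count_spanning_trees(4, [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]))
```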

In Part III we analyze, using numerical methods, the growth in the number of polyominoes on a twisted cylinder as the number of cells increases. These polyominoes are related to classical polyominoes (connected subsets of a square grid) that lie in the plane. We thus obtain improved lower bounds on the growth rate of the number of these polyominoes, which is also known as Klarner's constant.

Buchin, Kevin: Organizing Point Sets: Space-Filling Curves, Delaunay Tessellations of Random Point Sets, and Flow Complexes

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 19.12.2007

In this thesis we develop and analyze algorithms for computing space-filling curve orders, Delaunay tessellations, and flow complexes of point sets. For space-filling curve orders and Delaunay tessellations the emphasis lies on an average-case analysis of the algorithms; for flow complexes it lies on their computation in higher dimensions.

In a space-filling curve order of a point set, points which are close in the order are also close in space. We discuss algorithms for computing space-filling curve orders based on radix sort. We give an average-case analysis which shows that these orders can be computed in linear expected time for many point distributions. As discrete counterparts of space-filling curves we consider grid traversals and discuss finding optimal grid traversals for different locality measures, using heuristics for the quadratic assignment problem.

The Delaunay tessellation of a point set is a simplicial complex capturing proximity relations of the points. We analyze incremental constructions of Delaunay tessellations along space-filling curve orders. First we give a generalized and refined analysis of incremental constructions con BRIO, i.e., constructions where points are inserted in random rounds. Based on this we analyze incremental constructions along space-filling curve orders for uniformly distributed points from a bounded convex region in the plane, normally distributed points in the plane, and uniformly distributed points from a d-cube in higher dimensions. In the first case we analyze the expected structure of the Delaunay tessellation, and in the other cases the structure of the space-filling curve order. We show for these point distributions that incremental constructions con BRIO of Delaunay tessellations run in linear expected time using space-filling curve orders.

The flow complex of a point set is the collection of stable manifolds of the flow induced by the distance function of the point set. We give an algorithm for computing the flow complex in higher dimensions. The algorithm is based on the Delaunay tessellation and Voronoi diagram of the point set and the recursive nature of the flow. Based on this algorithm we give a topological analysis of flow shapes, i.e., the underlying spaces of subcomplexes of the flow complex. In particular we show that flow shapes are homotopy equivalent to the corresponding unions of balls.
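One concrete example of a space-filling curve order is the Z-order (Morton order), obtained by sorting points by their bit-interleaved coordinates. A minimal sketch for integer grid points (the thesis treats such orders and their locality properties in much more generality):

```python
def z_value(x, y, bits=16):
    """Interleave the bits of integer grid coordinates (Morton / Z-order)."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

points = [(3, 5), (10, 2), (4, 4), (7, 7)]
points.sort(key=lambda p: z_value(*p))  # points close in this order tend to be close in space
print(points)
```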

Lenz, Tobias: Simple Reconstruction of Non-Simple Curves and Approximating the Median in Streams with Constant Storage

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 28.10.2008

This work generalizes the ideas in the Nearest-Neighbor-Crust algorithm by Dey and Kumar. It makes it possible to reconstruct smooth, closed curves from ε-samples with ε ≤ 0.48, a big improvement over the original bound. A further generalization leads to a new algorithm which reconstructs closed curves with self-intersections. The algorithm is very simple and short and works well in practice. A special ε-sampling condition is given which guarantees correct results. The described method works for curves in any dimension d in O(n^(2−1/d)) time.
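For orientation, the original NN-crust idea by Dey and Kumar can be sketched as follows: connect every sample to its nearest neighbor, then complete each endpoint of degree one with its nearest "half neighbor", i.e., the nearest point forming an obtuse angle with the existing edge. This sketch shows that baseline (quadratic neighbor search for clarity), not the thesis's generalization to self-intersecting curves:

```python
from math import dist

def nn_crust(pts):
    """Sketch of the Dey-Kumar NN-crust for a dense sample (tuples) of a
    smooth closed curve; O(n^2) for clarity."""
    edges = set()
    def nearest(p, cand):
        return min(cand, key=lambda q: dist(p, q))
    for p in pts:  # step 1: nearest-neighbor edges
        edges.add(frozenset((p, nearest(p, [q for q in pts if q != p]))))
    for p in pts:  # step 2: complete degree-1 vertices with a half-neighbor edge
        nbrs = [tuple(e - {p})[0] for e in edges if p in e]
        if len(nbrs) >= 2:
            continue
        q0 = nbrs[0]
        cand = [q for q in pts if q not in (p, q0)
                and (q[0]-p[0])*(q0[0]-p[0]) + (q[1]-p[1])*(q0[1]-p[1]) < 0]
        if cand:
            edges.add(frozenset((p, nearest(p, cand))))
    return edges
```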
---
In this part the well-known problem of finding the median of an ordered set is studied under a very restrictive streaming model with sequential read-only access to the data. Only a constant number of reference objects from the stream can be stored for comparison with subsequent stream elements. A first non-trivial bound of Ω(√n) distance to the extrema of the set is presented for a single pass over streams which do not reveal their total size n. This result is extended to an algorithm which guarantees a distance of Ω(n^(1−ε)) to the extrema. Additional results about upper bounds, multi-pass algorithms, and arbitrary quantiles are presented.

Formann, Michael: Algorithms for Geometric Packing and Scaling Problems

Degree: PhD
Submission date: 01.12.1992

Knauer, Christian: Algorithms for Comparing Geometric Patterns

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 02.05.2002

The geometric pattern matching problem is the following: given two patterns P and Q from a set of admissible patterns Π, together with a distance measure δ for such patterns, find a transformation τ from a set of admissible transformations T such that the distance δ(τ(P), Q) becomes as small as possible. The dissertation presents efficient algorithms for several variants of this problem; the results can be grouped by the type of admissible patterns.

Point patterns in R^d: In the first part we consider point patterns; the figures are sets of m and n points in R^d, respectively. Congruence testing: We give an algorithm that decides in O(n^⌈d/3⌉ log n) time (for m ≤ n) whether there is a congruence mapping P onto Q (this can be seen as the special case of the pattern matching problem in which the distance measure is the trivial metric). Hausdorff distance: We describe the first non-trivial algorithm for computing the directed Hausdorff distance h(P, Q) from a set P of points to a set Q of semialgebraic sets of constant description complexity (the special case in which the identity is the only admissible transformation); the running time is O_ε(m n^ε log m + m^(1+ε−1/(2d−2)) n).

Planar curves: The second part deals with pattern matching problems for polygonal curves in the plane; the figures are polygonal chains P and Q with m and n vertices, respectively, and the distance measure is the Fréchet distance F(P, Q). Matching under translations: We develop the first algorithm that solves the matching problem for polygonal chains under translations with respect to the Fréchet distance; the running time is O((mn)^3 (m+n)^2). By generalizing reference-point methods we also give an O(ε^(−2) mn) approximation algorithm with approximation factor (1+ε) for this problem, and we show that no such reference points exist for affine maps. Hausdorff vs. Fréchet distance: We show that for a certain class of curves the Fréchet distance is bounded linearly in terms of the Hausdorff distance. For curves of this kind we give an O((m+n) log²(m+n) 2^(α(m+n))) approximation algorithm for computing F(P, Q). Finally, we describe the first non-trivial algorithm for recognizing such curves; its running time is O(n log² n).

Simple polyhedral surfaces in R^3: In the last part we consider patterns composed of sets P and Q of m and n disjoint triangles in R^3, respectively. Hausdorff distance: We develop an algorithm that computes H(P, Q), the Hausdorff distance between P and Q, in O_ε((mn)^(15/16+ε) (m^(17/16) + n^(17/16))) time.
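For finite point sets, the directed Hausdorff distance h(P, Q) appearing above can be computed straight from its definition. A brute-force sketch (the thesis's contribution concerns far more general sets Q and better running times):

```python
from math import dist

def directed_hausdorff(P, Q):
    """h(P, Q) = max over p in P of the distance from p to its nearest
    point in Q; brute force, O(|P| |Q|)."""
    return max(min(dist(p, q) for q in Q) for p in P)

print(directed_hausdorff([(0, 0), (2, 0)], [(0, 1), (2, 1)]))  # 1.0
```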

Schulz, André: Lifting Planar Graphs to Realize Integral 3-Polytopes and Topics in Pseudo-Triangulations

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 05.06.2008

Lifting planar embeddings with equilibrium stresses is a well-known method that dates back to the 19th century. We discuss the known theory about liftings and develop a framework that allows us to apply the lifting technique easily. In this thesis we apply the lifting method to different geometric problems.

As a first application we show how to embed 3-polytopes with small integer coordinates. Our method improves the upper bound for the size of the largest coordinate from O(2^{18n^2}) to O(2^{7.55n}). A new generalized version of Tutte's
spring embedding assures that a planar 2d-embedding contains an
equilibrium stress and is therefore liftable. We point out 
connections between the size of the integral embedding and the number of
maximal spanning forests a planar graph can have.
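Tutte's spring embedding, which the generalized version above builds on, pins the outer face to a convex polygon and places every interior vertex at the barycenter of its neighbors, i.e., it solves one linear system per coordinate. A minimal sketch with unit spring weights (the generalized version uses other positive weights):

```python
import numpy as np

def tutte_embedding(n, edges, outer):
    """Tutte spring embedding sketch: pin the outer cycle on a regular
    polygon, place interior vertices at the barycenter of their neighbors."""
    pos = np.zeros((n, 2))
    k = len(outer)
    for i, v in enumerate(outer):
        pos[v] = (np.cos(2*np.pi*i/k), np.sin(2*np.pi*i/k))
    inner = [v for v in range(n) if v not in outer]
    idx = {v: i for i, v in enumerate(inner)}
    A = np.zeros((len(inner), len(inner)))  # Laplacian restricted to inner vertices
    b = np.zeros((len(inner), 2))           # contributions of pinned neighbors
    for u, v in edges:
        for p, q in ((u, v), (v, u)):
            if p in idx:
                A[idx[p], idx[p]] += 1
                if q in idx:
                    A[idx[p], idx[q]] -= 1
                else:
                    b[idx[p]] += pos[q]
    pos[inner] = np.linalg.solve(A, b)
    return pos

# K4: outer triangle 0,1,2 with interior vertex 3 (lands at the centroid)
print(tutte_embedding(4, [(0,1),(1,2),(2,0),(0,3),(1,3),(2,3)], [0,1,2]))
```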

The second field of applications for the lifting technique are topics
about pseudo-triangulations. 
Our main observation shows how to model regular 
triangulations as linear programs over the polytope of pointed
pseudo-triangulations. 
We introduce an equivalent of the Delaunay triangulation for pointed
pseudo-triangulations of simple polygons. 
Our approach is motivated by the paraboloid lifting of the Delaunay
triangulation and the generalization of linear programs that
compute the Delaunay triangulation in special cases.
We also investigate the so-called canonical pointed pseudo-triangulation and study some of its
geometric properties. Our observations lead to a new characterization
of pointed pseudo-triangulations as embeddings of minimal rigid graphs
that can balance a given load with positive edge weights. 

The thesis contains also results on pseudo-triangulation problems that
were not obtained with help of liftings. We show that a sequence of 
super-polynomial many convexifying flips exists that transform a lifted
pseudo-triangulation into a maximal locally convex surface. This is
obtained by constructing a simple polygon that
realizes an improving flip sequence of length n^{\Theta(\log n)} between two of its
pointed pseudo-triangulation.
Furthermore we show that (1) it is NP-hard to decide if a graph
contains a pseudo-triangulation and (2) it is NP-hard to decide if a
graph can be extended to a pseudo-triangulation with small vertex degree. Both
decision problems are studied in different incarnations.
We obtain a new and easier NP-completeness proof of the triangulation 
existence problem, one of the classic NP-complete triangulation problems.

Blömer, Johannes: Simplifying Expressions Involving Radicals

Degree: PhD
Submission date: 01.07.1993

Alberts, David: Dynamic Connectivity in Graphs: Theory and Practice

Advisor: Emo Welzl
Degree: PhD
Submission date: 15.02.1996

References

  1. D. Alberts. Implementation of the dynamic connectivity algorithm by Monika Rauch Henzinger and Valerie King. Technical Report B 95-10, Freie Universität Berlin, Institut für Informatik, 1995.
  2. D. Alberts, G. Cattaneo, G. F. Italiano. An empirical study of dynamic graph algorithms. In Proc. 7th SODA, pp. 192-201, 1996.
  3. D. Alberts, M. R. Henzinger. Average case analysis of dynamic graph algorithms. In Proc. 6th SODA, pp. 312-321, 1995.
  4. D. Eppstein, Z. Galil, G. F. Italiano, A. Nissenzweig. Sparsification - a technique for speeding up dynamic graph algorithms. In Proc. 33rd FOCS, pp. 60-69, 1992.
  5. G. N. Frederickson. Data structures for on-line updating of minimum spanning trees, with applications. SIAM J. Comput., 14:781-798, 1985.
  6. M. R. Henzinger, V. King. Randomized dynamic graph algorithms with polylogarithmic time per operation. In Proc. 27th STOC, pp. 519-527, 1995.

Valtr, Pavel: Planar point sets with bounded ratios of distances

Advisor: Emo Welzl
Degree: PhD
Submission date: 01.01.1994

Jaume Deyà, Rafel: Tessellations for Geometric Optimization

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 20.10.2014

This thesis is concerned with the study of some tessellations (or subdivisions) of the plane or of the space and their relation to some optimization problems. Several of the results have a combinatorial flavor, whereas others are strongly connected to the geometry underlying the corresponding problems. The work combines theoretical statements with applied implications and related algorithms, making use of linear algebra, convex geometry, graph theory and many other tools from discrete and computational geometry. 

Regular subdivisions are tessellations resulting from the projection of the lower faces of a polyhedron. In the first part of this thesis, we generalize regular subdivisions, introducing the class of recursively-regular subdivisions. Informally speaking, a recursively-regular subdivision is a subdivision that can be obtained by splitting some faces of a regular subdivision by other regular subdivisions (and continuing recursively). We also define the finest regular coarsening and the regularity tree of a subdivision. We derive several properties of these two objects, which reflect certain structure in the class of non-regular subdivisions. In particular, the finest regular coarsening of a subdivision is the regular subdivision that is (in a sense) most similar to it. We show that the class of recursively-regular subdivisions is a proper superclass of the regular subdivisions and a proper subclass of the visibility-acyclic subdivisions (in the sense of an acyclicity theorem by Edelsbrunner). We also show that there exist point sets whose recursively-regular triangulations are not connected by geometric bistellar flips.

We then derive several algorithms related to the studied objects, and point out applications of the main results. In particular, we present relations to tensegrity theory, data visualization, and graph embedding problems. Special attention is paid to the problem of covering the space by placing given floodlights at given points, for which we extend results known since 1981 and discuss two variants of the original problem. 

The second part is concerned with the study of optimal partial matchings for pairs of point sets under translations. First, we regard the least-squares cost function. The best approach to this problem so far is to construct (and explore) a particular tessellation of the space of translations. In every tile of the tessellation there is one matching that is optimal for any position of the point sets corresponding to a translation in that tile. We give the first non-trivial bound on the complexity of this tessellation in dimensions two and higher, and study several structural properties that lead to algorithms whose running time is polynomial in the size of the larger set. 
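Inside a single tile of this tessellation the matching is combinatorially fixed, and for a fixed matching the least-squares-optimal translation has a closed form: the mean of the matched difference vectors. A minimal sketch of that local step (illustrative names; the tessellation machinery itself is the hard part):

```python
import numpy as np

def best_translation(P, Q, matching):
    """For a fixed partial matching [(i, j), ...] between point arrays P and Q,
    the translation t minimizing sum ||(P[i] + t) - Q[j]||^2 is the mean of the
    difference vectors Q[j] - P[i] (set the gradient to zero)."""
    diffs = np.array([Q[j] - P[i] for i, j in matching])
    return diffs.mean(axis=0)

P = np.array([[0.0, 0.0], [1.0, 0.0]])
Q = np.array([[2.0, 1.0], [3.0, 1.2]])
print(best_translation(P, Q, [(0, 0), (1, 1)]))  # [2.  1.1]
```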

We address then the analogous problem under the bottleneck cost function. This cost function assigns to every matching the largest distance defined by a matched pair of points. An associated tessellation is shown to have polynomial complexity. This result, together with graph-theoretical tools, allows us to obtain efficient algorithms for the computation of the corresponding minimum under translations that are sensitive to the size of the smaller of the two sets. The lexicographic variant of the bottleneck cost is analyzed as well. 

Finally, we explore natural directions for the generalization of the problems of matching under translations to which many of our results extend.

Heinrich-Litan, Laura: Exact L_infinity Nearest Neighbor Search in High Dimensions

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 04.11.2002

In this thesis we consider the nearest-neighbor problem, which is defined as follows: given a fixed set P of n data points in some metric space X, build a data structure such that for each given query point q a data point from P closest to q can be found efficiently. The underlying metric space is usually the d-dimensional real space R^d together with one of the L_p metrics, 1 ≤ p ≤ ∞. In many applications, the dimension d of the search space is quite high and can reach several hundreds or even several thousands. Therefore, running times and storage requirements exponential in d are prohibitive in these cases. Because of their exponential dependence on the dimension, all known techniques for the exact nearest-neighbor problem are in high dimensions in fact not competitive with the brute-force method, which simply determines the distance of q to each point in P and selects the minimum.
This thesis presents algorithms for solving the high-dimensional exact nearest-neighbor problem with respect to the L_∞ distance. We analyze the average-case situation when the data points are chosen independently at random under uniform distribution. The algorithms considerably improve on the brute-force method; they are simple and easy to implement.
In Chapter 2 we consider query algorithms that need no preprocessing and require storage only for the point set P. Their average running time is O(n + nd/ln n).
In Chapter 3 we present two strategies which speed up the search by using preprocessing. The query algorithm introduced in Section 3.1.2 requires linear storage and has an expected running time of O(n ln(d/ln n + 1) + n). The data structure developed in Section 3.2 is based on a preprocessed partition of the data set into sequences which are monotone with respect to some of the dimensions. The query algorithm has an expected running time of O(√d · n^(1−1/√d) · ln n) for dimensions d < (ln n / ln ln n)^2.
Chapter 4 presents several generalizations, in particular to the important problem of finding the k nearest neighbors of a query point. We generalize the analysis of the considered algorithms to other "well-behaved" probability distributions. Furthermore, we develop extensions of the algorithms which work efficiently in the external-memory model of computation.
In Chapter 5 we present a method which provides trade-offs between the space complexity of the data structure and the time complexity of the query algorithm.
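Already a coordinate-wise scan with early abandoning beats the naive brute force on average for the L_∞ metric: a candidate can be discarded as soon as its running maximum exceeds the best distance found so far. A sketch in the spirit of the preprocessing-free algorithms of Chapter 2, not the thesis's exact procedure:

```python
def nn_linf(data, q):
    """Brute-force L_infinity nearest neighbor with early abandoning:
    stop scanning a point's coordinates once the running maximum
    already exceeds the best distance seen so far."""
    best, best_p = float("inf"), None
    for p in data:
        m = 0.0
        for pi, qi in zip(p, q):
            m = max(m, abs(pi - qi))
            if m >= best:
                break          # p cannot improve on the current best
        else:
            best, best_p = m, p
    return best_p, best
```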

Kim, Heuna: Congruence Testing for Point Sets in 4-Space

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 20.06.2016

Congruence is the geometric concept of being the same up to rotations and translations in Euclidean space. As congruence is a fundamental concept in geometry, it has long been an open question in the computational geometry community whether the curse of dimensionality applies to congruence testing. We develop a deterministic algorithm with optimal running time for congruence testing in 4-space.

To place the main algorithm in its historical context, we provide a survey of the computational model and of previous work on congruence testing algorithms. The crucial ingredients of the algorithm are explained component by component. These include general 4-dimensional rotations, angles between linear subspaces, and the Plücker embedding. In the sequence of steps in the algorithm, high regularities are forced in the structure of the point sets. This lets us encounter beautiful mathematical structures on the 3-sphere and in the symmetry groups of finite point sets: the Hopf fibration of the 3-sphere and the Coxeter classification of four-dimensional point groups. We also give an elementary and self-contained overview of these two mathematical topics.

The main algorithm consists of five modules that are interesting in their own right. The algorithm is complicated and we provide rather pessimistic estimates. It can, however, be regarded as a big step toward constructing more efficient algorithms in higher dimensions. In the same vein, the last part is devoted to the extendability of the algorithm to higher dimensions; it concludes by discussing implementability and geometric properties that the algorithm may imply.

Stehn, Fabian: Geometric Hybrid Registration

Advisor: Christian Knauer
Degree: PhD
Submission date: 14.02.2011

Geometric matching problems are among the most intensively studied topics in computational geometry. A geometric matching problem can be formulated as follows: given are two geometric objects P and Q, taken from a class of geometric objects G, where P is called the "pattern" and Q the "model". A geometric matching instance is defined for a distance measure dist and a transformation class T; the task is to find a transformation t in T that minimizes dist(t(P), Q).

In this thesis, geometric hybrid registration problems are studied. Registration problems are closely related to geometric matching problems. The term geometric registration problem describes the task of mapping points from one space ("pattern space") to their corresponding points in a deformed copy of that space called "model space".

This research is motivated by a real-world application: navigated surgery. Here, the goal is to register the operating-theatre space (pattern space) to the internal coordinate system (model space) of a medical navigation system. The purpose of a medical navigation system is to support surgeons by visualizing the surgical instruments in use at their correct position in a 3D model of the patient. The models are generated beforehand from CT or MRI scans.

Hybrid registration is a novel strategy for computing solutions to this alignment problem. Geometric hybrid registrations reduce the spatial synchronization problem to a series of (at least two) geometric matching problems that are solved interdependently. Usually, a computationally involved point-to-surface matching is combined with a comparably simpler but underdetermined point-to-point matching. The point-to-surface matching maps a sufficiently large and suitably distributed set of points (called surface points) measured in the pattern space to a geometric surface in the model space. For the point-to-point matching, a small set of one to three characteristic points is measured in the pattern space and defined in the model space. In the context of the intended application, these points are called anatomical landmarks: anatomically exposed spots within the field of interest.
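Once landmark correspondences are fixed, the point-to-point part is a least-squares rigid registration with a standard closed-form solution. A generic sketch (Kabsch/Procrustes; an illustration of this building block, not the thesis's hybrid algorithm):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid registration of corresponding landmark lists
    P -> Q: optimal rotation via an SVD of the cross-covariance matrix,
    then the matching translation. Returns (R, t) with x -> R @ x + t."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if necessary
    D = np.diag([1.0] * (H.shape[0] - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```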

Wenk, Carola: Geometrische Mustererkennung in höheren Dimensionen

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 16.12.2002

The comparison of geometric shapes is a task which naturally arises in many applications, such as computer vision, computer aided design, robotics, medical imaging, etc. Usually geometric shapes are represented by a number of simple objects (sites) that either describe the boundary of the shape or the whole shape itself. Sites are often chosen to be linear objects, such as line segments, triangles, or simplices in general, since linear objects are easier to handle in algorithms. But sometimes patches of algebraic curves or surfaces, such as circular arcs or portions of spheres or cylinders, are also of interest. In order to compare two shapes we need a notion of similarity or dissimilarity, which arises from the desired application. There is a large variety of different similarity measures. Popular similarity notions are, for example, the Hausdorff distance, the area of symmetric difference, or, especially for curves, the turn-angle distance or the Fréchet distance. The application usually supplies a distance measure and furthermore a set of allowed transformations, and the task is to find a transformation that, when applied to the first object, minimizes the distance to the second one. Typical transformation classes are translations, rotations, and rigid motions (which are combinations of translations and rotations).

The contribution of this thesis consists of several algorithms for matching simplicial shapes in dimensions d ≥ 2. The shapes are either represented as sets of simplicial objects or as polygonal curves with a given parametrization. The considered distance measures are mainly the Hausdorff distance and the Fréchet distance. In the literature most matching algorithms either attack two-dimensional problems or consider finite point sets in higher dimensions. In the first half of this thesis we present results for the Hausdorff distance in d ≥ 2 dimensions under translations, for a rather general notion of simplicial shapes as well as for some special shape classes which allow speeding up the computations. In the second half of this thesis we investigate the Fréchet distance for polygonal curves. The Fréchet distance is a natural distance measure for curves, but it has not been investigated much in the literature. We present the first algorithms to optimize the Fréchet distance under various transformation classes for polygonal curves in arbitrary dimensions. In the last chapter we consider a partial matching variant in which a geometric graph and another curve are given, and we show how to find a polygonal path in the graph which minimizes the Fréchet distance to the curve.

Dimitrov, Darko: Geometric Applications of Principal Component Analysis

Advisor: Klaus Kriegel
Degree: PhD
Submission date: 08.12.2008

Bounding boxes are used in many applications for simplification of point sets or complex shapes. For example, in computer graphics, bounding boxes are used to maintain hierarchical data structures for fast rendering of a scene or for collision detection. Additional applications include those in shape analysis and shape simplification, or in statistics, for storing and performing range-search queries on a large database of samples. 

A frequently used heuristic for computing a bounding box of a set of points is based on principal component analysis. The principal components of the point set define the axes of the bounding box. Once the axis directions are given, the extent of the bounding box along each axis is determined by the extreme values of the projections of the points onto that axis. Computing a PCA bounding box of a discrete point set in $\mathbb{R}^d$ takes time linear in the number of points. The popularity of this heuristic, besides its speed, lies in its easy implementation and in the fact that PCA bounding boxes are usually tight-fitting.
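A minimal sketch of this PCA bounding-box heuristic (assuming at least two points, not all coincident):

```python
import numpy as np

def pca_bounding_box(points):
    """PCA bounding-box heuristic: the eigenvectors of the covariance
    matrix give the box axes; the extents are the extreme projections."""
    X = np.asarray(points, float)
    _, vecs = np.linalg.eigh(np.cov(X.T))   # orthonormal principal axes (columns)
    proj = X @ vecs                          # coordinates in the PCA frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return vecs, lo, hi                      # axes plus extents along them
```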

In this thesis we investigate the quality of the PCA bounding boxes. We give bounds on the worst case ratio of the volume of the PCA bounding box and the volume of the minimum volume bounding box. We present examples of point sets in the plane, where the worst case ratio tends to infinity. In these examples some dense point clusters have a big influence on the directions of the principal components, and the resulting PCA bounding boxes have much larger volumes than the minimal ones. To avoid the influence of such non-uniform distributions of the point sets, we consider PCA bounding boxes for continuous sets, especially for the convex hulls of point sets, obtaining several variants of continuous PCA. For those variants, we give lower bounds in arbitrary dimension, and upper bounds in $\mathbb{R}^2$ and $\mathbb{R}^3$. To obtain the lower bounds, we exploit a relation between the perfect reflective symmetry and the principal components of point sets. Each of the upper bounds in $\mathbb{R}^2$ and $\mathbb{R}^3$ is obtained from two parameterized bounds. The first bound is general for all bounding boxes, while to obtain the second bound, we exploit some of the properties of PCA, combining them with ideas from discrete geometry and integral calculus.

The relation between perfect reflective symmetry and the principal components of point sets leads to a straightforward algorithm for computing the planes of symmetry of perfectly and approximately reflective-symmetric point sets. For the same purpose, we present an algorithm based on geometric hashing.

Denner-Broser, Britta: Complex Tracing

Advisors: Prof. Dr. Helmut Alt, Ulrich Kortenkamp
Degree: PhD
Submission date: 01.12.2008

Dynamic Geometry is the field of interactively doing geometric constructions
using a computer. Usually, the classical ruler-and-compass constructions are
considered. The available tools are simulated by the computer. A Dynamic Geometry
System is a system to do geometric constructions that has a drag mode.
In the drag mode, geometric elements with at least one degree of freedom can
be moved, and the remaining part of the geometric construction adjusts automatically.
Thus, the computer has to trace the paths of the involved geometric
objects during the motion.

In this thesis, we focus on the beautiful model by Kortenkamp and Richter-Gebert
that is the foundation of the geometry software Cinderella. We embed an algebraic
variant of this model into different fields of pure and applied mathematics,
which leads to different approaches for realizing the drag mode practically. We
develop a numerical method to solve the Tracing Problem that is based on a
generic Predictor-Corrector method. Like most numerical methods, this method
cannot guarantee the correctness of the computed solution curve, hence ambiguities
are not treated satisfactorily. To overcome this problem, we develop
a second algorithm that uses interval analysis. This algorithm is robust, and
the computed step length is small enough to break up all ambiguities. Critical
points are bypassed by detours, where the geometric objects or the corresponding
variables in the algebraic model can have complex coordinates. Here, the final
configuration depends essentially on the chosen detour, but this procedure due to
Kortenkamp and Richter-Gebert leads to a consistent treatment of degeneracies.
We investigate the connection of the used model for Dynamic Geometry to Riemann
surfaces of algebraic functions.
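The generic predictor-corrector idea can be sketched for a parametrized system F(x, t) = 0: an Euler predictor step along the curve followed by a few Newton corrector iterations. A numerical sketch only; step-length control and the treatment of ambiguities are exactly what this simple version, unlike the interval-based algorithm, does not certify:

```python
import numpy as np

def trace_path(F, dF_dx, dF_dt, x0, steps=100):
    """Predictor-corrector tracing of a solution curve of F(x, t) = 0
    from t = 0 to t = 1 (Euler predictor, Newton corrector)."""
    x, h = np.atleast_1d(np.asarray(x0, float)), 1.0 / steps
    for k in range(steps):
        t = k * h
        # predictor: dx/dt = -(dF/dx)^(-1) dF/dt along the curve
        x = x + h * np.linalg.solve(dF_dx(x, t), -dF_dt(x, t))
        for _ in range(3):  # corrector: Newton steps at fixed t + h
            x = x - np.linalg.solve(dF_dx(x, t + h), F(x, t + h))
    return x

# example: trace x^2 = 1 + t from x = 1 at t = 0 toward x = sqrt(2) at t = 1
F = lambda x, t: x**2 - (1 + t)
print(trace_path(F,
                 lambda x, t: np.array([[2 * x[0]]]),
                 lambda x, t: np.array([-1.0]),
                 [1.0]))
```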

Buchin, Maike: On the Computability of the Fréchet Distance between Triangulated Surfaces

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 23.04.2007

The Fréchet distance is a metric for parameterized curves and surfaces. It is used in shape matching for measuring the similarity of geometric shapes. For polygonal curves, it can be computed in polynomial time. For triangulated surfaces, deciding whether the Fréchet distance between two surfaces is less than or equal to a given threshold is NP-hard. It is not known whether the Fréchet distance between triangulated surfaces is computable. In this thesis, we study the computability of the Fréchet distance between triangulated surfaces and give three partial answers to the question whether it is computable. For triangulated surfaces, we show that the Fréchet distance is semi-computable, a weaker notion of computability. For a variant of the Fréchet distance, the weak Fréchet distance, we show that it is computable in polynomial time for triangulated surfaces. For a restricted class of surfaces, simple polygons, we show that the Fréchet distance is computable in polynomial time. Finally, we study a related question, the definition of a summed or average Fréchet distance between curves. We show that none of several intuitive definitions fulfills the triangle inequality.

Klein, Oliver: Matching Shapes with a Reference Point

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 21.04.2008

Shape matching is an important topic in computational geometry, 
computer vision, image retrieval, object recognition and robotics. For a fixed 
distance measure D and a class of transformations T we can describe the 
problem as follows: Given two shapes A and B in R^d, find a 
transformation T^* in T, such that the distance between A and T^*(B) is 
minimal among all transformations in T. Usually finding such a transformation 
is computationally expensive, if at all possible. Thus we concentrate on 
approximation algorithms. We follow an approach by Alt, Behrends and Blömer, 
and Alt, Aichholzer and Rote. They use mappings called reference points to fix 
the relative position between the two sets. A reference point is a 
Lipschitz continuous mapping from the set of shapes into R^d which is 
equivariant under the considered class of transformations.
This approach reduces the degrees of freedom of the underlying problem
by the dimension d.

In this thesis we study approximation algorithms for shape matching
with respect to various metrics, e.g., the Hausdorff distance, the
Earth Mover's Distance, the Monge-Kantorovich Distance and the
bottleneck distance. We investigate translations, rigid motions,
i.e., combinations of translations and rotations, and similarities,
i.e., combinations of rigid motions and scalings.

The basic structure of the approximation algorithms is the same for all
metrics and we describe our approach in an abstract reference point
framework. We first determine the relative position of the two shapes
to each other by computing their reference points. We then translate
the shapes such that the reference points coincide. Next we
determine a rotation for one of the shapes such that the distance of
the two shapes is at most a constant times
their optimal distance. Similarities can always
be dealt with by finding an approximate scaling before finding the
rotation.
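For translations alone, the first two steps of this framework can be sketched with the centroid as reference point; the centroid is translation-equivariant, though whether it satisfies the required Lipschitz condition depends on the distance measure, so take this as an illustration of the framework rather than as the thesis's choice of reference point:

```python
import numpy as np

def align_by_reference_point(A, B):
    """Reference-point step for translations: compute the reference points
    (here: centroids, an illustrative choice) and translate B so that the
    two reference points coincide, fixing the relative position."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    t = A.mean(axis=0) - B.mean(axis=0)
    return B + t, t
```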

Wolfers, Barbara: Oberflächenrekonstruktion aus Querschnitten (Surface Reconstruction from Cross-Sections)

Advisor: Emo Welzl
Degree: PhD
Submission date: 29.11.1996

Sturm, Astrid: Geometric Approximations in Two- and Three-Dimensional Space

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 27.11.2009

The goal of geometric approximation is to replace a given complex geometric object by a simpler object while capturing the significant features of the original. 
In the first part of the thesis we deal with approximating polygonal curves in 2-dimensional space. For a polygonal curve this approximation can be done either by a simpler polygonal curve (a curve with fewer segments) or by a higher-order curve. We compute an approximation of open polygonal curves with the minimum number of circular arcs (Chapter 1) and also with the minimum number of biarcs (Chapter 2) for a given upper error bound epsilon.
In the second part of the thesis we move on to 3-dimensional space and to polytope approximations.
We initiate the study of this problem by considering convex surfaces only, for simplicity, before moving on to non-convex surfaces.
A first natural step towards higher-order approximation of convex polytopes is the approximation with spheres or spherical patches.
In Chapter 3 we show that deciding the existence of an approximation of a convex polytope with a given upper error bound epsilon and not more than a given number of spherical patches is NP-hard.
In Chapter 4 we present a new technique for constructing curved surfaces based on inscribed polytopes, resulting in a convex surface consisting of spherical patches.
To tackle the approximation problem for non-convex polytopes we pick up the idea of an incremental approximation algorithm introduced in Chapter 4. This raises the problem of finding a simple and topologically correct start polytope, the seed polytope, for non-convex polytopes.
In Chapter 5 we describe how to construct, for a surface in 3D space given by sample points S, a coarse approximating polytope P that uses a subset of the points as vertices and preserves the topology. In contrast to surface reconstruction we do not use all sample points, but try to use as few points as possible.
We also give a short account of how to use the results on seed-polytope generation for surface reconstruction.

Schymura, Daria: Probabilistic Matching of Solid Shapes in Arbitrary Dimension

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 18.11.2011

Determining the similarity between objects is a fundamental problem in computer vision and pattern recognition, but also in other fields of computer science. This thesis concentrates on the matching problem, which has received a lot of attention in Computational Geometry.

Given a class of shapes S, a set of transformations T, mapping shapes onto shapes, and a distance measure d on S, the matching problem with respect to S, T, and d is defined as follows: Given two shapes A, B in S, compute a transformation t* in T that minimizes d(t*(A),B).

We consider solid shapes, i.e., full-dimensional shapes, in arbitrary dimension and assume that they are given by an oracle that generates uniformly distributed random points from the shapes. This is a very rich class of shapes that contains the class of finite unions of simplices as a subclass. We study matching under translations and rigid motions (translation and rotation). Throughout this work, the volume of the symmetric difference is used as distance measure for the matching problem. Maximizing the volume of the overlap is equivalent to minimizing the volume of the symmetric difference under translations and rigid motions.

We study a probabilistic approach to the shape matching problem. The main idea is quite simple. Given two shapes A and B, repeat the following random experiment very often: Select random point samples of appropriate size from each shape and compute a transformation that maps the point sample of one shape to the sample of the other shape. Store this transformation. In each step, we extend the collection of random transformations by one. Clusters in the transformation space indicate transformations that map large parts of the shapes onto each other. We determine a densest cluster and output its center.
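A minimal sketch of this voting scheme for translations in the plane, with clustering crudely replaced by rounding votes onto a grid (the point samples stand in for the uniform-point oracle; parameters are illustrative):

```python
import random

def translation_votes(sample_A, sample_B, rounds=10000):
    """Random-voting sketch for matching under translations: each experiment
    picks one random point from each shape and votes for the translation
    mapping the first onto the second; a densest cluster of votes is then
    reported (here: the fullest grid cell)."""
    votes = {}
    for _ in range(rounds):
        a = random.choice(sample_A)
        b = random.choice(sample_B)
        t = (round(b[0] - a[0], 1), round(b[1] - a[1], 1))  # grid cell of the vote
        votes[t] = votes.get(t, 0) + 1
    return max(votes, key=votes.get)  # center of a densest cluster (crudely)
```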

This thesis describes probabilistic algorithms for matching solid shapes in arbitrary dimension under translations and rigid motions. The algorithms are a priori heuristics. The main focus is on analyzing them and on proving that they approximately maximize the volume of overlap by solving the following instance of the matching problem: given two solid shapes A and B, an error tolerance eps in (0,1), and an allowed probability of failure p in (0,1), compute a transformation t* such that, with probability at least 1-p, the volume of the intersection of t*(A) and B is at least as large as the volume of the intersection of t(A) and B minus eps times the volume of A, for all transformations t, in particular for transformations maximizing the volume of overlap.

The approach is mainly of theoretical interest. Still, the algorithms are so simple that they can easily be implemented, which we show by giving experimental results of a test implementation for 2-dimensional shapes.

Wolff, Alexander: Beschriftung von Landkarten (Map Labeling)

Advisor: Frank Wagner
Degree: PhD
Submission date: 28.05.1999

Atariah, Dror: Observations and analysis in Configuration Spaces

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 09.05.2014

This work covers three topics that can all be linked to the
celebrated motion planning problem of planar robots. When
considering a planar robot, which is represented by a convex
polygon, it is natural to study an associated configuration
space, where each point corresponds to a unique placement of the
robot in its physical world. Obstacles in the world of the robot
are represented as three-dimensional solids in the associated
configuration space. The union of these solids is the so-called
forbidden space. Each point of the forbidden space corresponds
to a placement of the robot such that its interior intersects one
or more obstacles. The remainder of the configuration space is
the so-called free space. Of course, the boundary between the
free and forbidden spaces is of tremendous interest.

The first part of the dissertation provides a simple and
geometrically motivated parameterization of the surfaces that
constitute the mentioned boundary. Using this parameterization,
it is easy to produce various visualizations of the boundary.
Furthermore, standard computations that utilize the
parameterization yield deep understanding of the differential
geometry of the boundary. Clearly, these are two achievements
that contribute to the general study of the motion planning
problem. In particular, it is also shown that the elements of
the boundary corresponding to contacts between an edge of the
robot and a vertex of an obstacle are non-developable ruled
surfaces --- thus having a negative Gaussian curvature.

The negatively curved surfaces that emerge as portions of the
discussed boundary motivated a search for an optimal
triangulation of simpler negatively curved surfaces.
Specifically, hyperbolic paraboloids (also known as saddles) are
considered. The second part of the dissertation provides both
interpolating and non-interpolating triangulations of general
saddle surfaces. The optimality of the resulting triangulation
used for the approximation is twofold: (i) it maintains a fixed
error bound, and (ii) it minimizes the number of triangles needed
to cover a given portion of the saddle.

Finally, the third part of the dissertation provides a detailed
account of a contribution to the 2D Arrangements package of
the "Computational Geometry Algorithms Library". This part of
the work considers the computation of arrangements of bounded
piecewise linear curves (polylines) in the plane. More
precisely, the initial package code, as shipped with version 4.3,
was considerably improved in two senses. First, computing
arrangements of polylines with the modified code is about 5%
faster on average. Secondly, the modified
code is much more generic and suitable for further
generalizations. As a result, the improved code was accepted for
integration into the library and is scheduled to be shipped with
its next release. Together, the achievements presented in the
dissertation can contribute to the ongoing study of the motion
planning problem, as well as to numerous more general purposes.

Schmidt, Jens M.: Certifying Graph Algorithms and 3-Connectedness

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 18.03.2011

The class of 3-connected (i.e., 3-vertex-connected) graphs has been studied intensively for many reasons in the past 50 years. One algorithmic reason is that graph problems can often be reduced to handle only 3-connected graphs; applications include problems in graph drawing, problems related to planarity and online problems on planar graphs.

It is possible to test a graph for 3-connectivity in linear time. However, the known linear-time algorithms are complicated and difficult to implement. For that reason, it is essential to check that implementations of these algorithms are correct. A way to check the correctness of an algorithm on every instance is to make it certifying, i.e., to enhance its output with an easy-to-verify certificate of correctness for that output. However, surprisingly little work has been devoted to finding certifying algorithms that test 3-connectivity; in fact, the currently fastest such algorithms need quadratic time.

Two classic results in graph theory due to Barnette, Grünbaum and Tutte show that 3-connected graphs can be characterized by the existence of certain inductively defined constructions. We give new variants of these constructions, relate them to existing ones and show how they can be exploited algorithmically. Our main result is a linear-time certifying algorithm for testing 3-connectivity, which is based on these constructions. This also yields simple linear-time certifying algorithms for 2-connectivity, 2-edge-connectivity and 3-edge-connectivity. We conclude the thesis with a structural result showing that one of the constructions is preserved when applied to depth-first trees of the graph only.

Miltzow, Tillmann: Geometric and Combinatorial Problems of Matching and Partitioning in Theoretical Computer Science

Advisor: Prof. Dr. Günter Rote
Degree: PhD
Submission date: 05.06.2015

In this thesis, we consider four problems in theoretical Computer Science:

1. Disjoint unit disks in the plane and disjoint unit balls in space can be separated by hyperplanes. In doing so, we try to minimize the number of intersections between the hyperplane and the balls. Although the first papers appeared in the 1980s, no optimal deterministic algorithm for finding such a hyperplane was known until now. We present an exact algorithm in the plane and approximation algorithms in higher dimensions. (This part is joint work with Michael Hoffman and Vincent Kusters.)

2. Tron is a computer game from the 1980s, which was first studied by Bodlaender and Kloks on abstract graphs. We answer questions that remained open regarding algorithmic complexity and study the minimal and maximal outcome of the game under the assumption that both players play optimally. We consider these questions in different game modes.

3. Pareto optimal matchings are a concept used in economics and game theory. They describe certain stable situations similar to Nash equilibria, and they also play a role in some algorithmic questions. We give upper bounds on the number of Pareto optimal matchings under simple conditions and investigate a series of related algorithmic questions. (This part is joint work with Andrei Asinowski and Balázs Keszegh.)

4. Geometric matchings are non-crossing segments connecting a set of points in the plane. Although it is very simple to find colored point sets admitting exactly one geometric matching, no characterization of such point sets in general was known until now. We give several simple and elegant characterizations and answer further questions about this class of point sets. (This part is joint work with Andrei Asinowski and Günter Rote.)

Scharf, Ludmila: Probabilistic Matching of Planar Shapes

Advisor: Prof. Dr. Helmut Alt
Degree: PhD
Submission date: 05.06.2009

In this thesis we study a probabilistic approach for the shape matching problem. The studied approach is based on an intuitive definition of the shape matching task: Given two shapes A and B find that transformation within the class of allowable transformations which maps B to A in a best possible way. A mapping is considered to be good if large parts of the two shapes coincide within some tolerance distance delta. 

We assume that the shapes are modeled by finite sets of rectifiable curves in the plane. As possible classes of transformations we consider sub-classes of affine transformations: translations, rigid motions (translations and rotations), similarity maps (translation, rotation, and scaling), homotheties (translation and scaling), shear transformations, and affine maps. 

The major idea of the probabilistic algorithm is to take random samples of points from both shapes and give a "vote" for that transformation matching one sample with the other. If that experiment is repeated frequently, we obtain by the votes a certain probability distribution in the space of transformations. Maxima of this distribution indicate which transformations give the best match between the two figures. The matching step of the algorithm is, therefore, a voting scheme. 

In this thesis we analyze the similarity measure underlying the algorithm and give rigorous bounds on the runtime (number of experiments) necessary to obtain the optimal match within a certain approximation factor with a prespecified probability. We perform a generic analysis of the algorithm for arbitrary transformation classes, as well as an in-depth analysis for different sub-classes of affine transformations. It is also shown that the density function of the vote distribution is exactly the normalized generalized Radon transform in the cases of translations and rigid motions. 

We consider the theoretical analysis as the major contribution of this thesis, since it leads to a better understanding of this kind of heuristic techniques.

Scherfenberg, Marc: Searching point patterns, matching imprecise point patterns, and inducing polygons

Advisor: Christian Knauer
Degree: PhD
Submission date: 27.01.2012

Computing the similarity of two geometric shapes is one of the fundamental tasks of computational geometry. As a rule, similarity is measured by a real-valued distance function. Over time, a number of such distance functions have been developed; they are defined on different classes of geometric shapes and each assesses different aspects of similarity. The algorithms known so far for computing these distance functions assume that the geometric shapes are given exactly. In most applications this exactness is not available, since the input data are obtained by measurements that carry a certain imprecision. In many cases, however, the imprecision associated with the measurement is known. Two important distance measures for determining the similarity of geometric shapes are the directed and undirected Hausdorff distance, defined on point sets, and the discrete Fréchet distance, defined on point sequences.

As part of the dissertation, algorithms for computing these distance measures were developed that take the known imprecision of the input data into account and compute exactly the supremum and infimum of the similarity value over all realizations of the geometric shapes that are possible within the given imprecision. It was further proved that in some cases computing the infimum, as well as approximating it within a certain approximation factor, is NP-hard. For some of these cases approximation algorithms were developed, some of which achieve the best approximation that is theoretically possible. These results form one main focus of the dissertation and are presented in this talk.

The talk also briefly addresses the further results contained in the dissertation. These include, among others, search reductions that enable non-heuristic nearest-neighbor searching for point patterns, as well as the answer to the previously open question of whether every arrangement of n lines can be induced by a simple n-gon. The literature contains many non-heuristic search structures for single points, as well as search heuristics for geometric patterns that are more complex than a single point. The search reductions presented in the dissertation reduce searching for point patterns under distance measures, which need not have the properties of a metric, to nearest-neighbor searches for single points by embedding into a metric space. For some distance measures the embedding can be extended so that the search reduction yields translation-invariant searching. For most of the distance measures considered, the search reduction enables, for the first time, a non-heuristic nearest-neighbor search under that distance measure.

Werner, Daniel: On the computational complexity of some problems from discrete geometry in higher dimensions

Betreuer: Prof. Dr. Günter Rote
Abschluss: PhD
Abgabedatum: 18.01.2013
Homepage des Autors:

We will investigate computational aspects of several problems from discrete geometry in higher dimensions. In the plane, many of them are well understood and can be solved efficiently, but as the dimension increases, most of them seem to be considerably harder to solve. In this thesis, we make progress towards explaining this phenomenon by showing computational hardness for some of these problems. To this end, we also make use of parameterized complexity theory in order to show stronger relative lower bounds than those possible with classical complexity theory only. For one of the problems, we moreover develop several approximation algorithms. In the process, we pay particular attention to the exact dependence of the running time on the dimension.

We will use and develop different techniques for showing hardness of the problems in unbounded dimension. These include the technique of decomposing the space into orthogonal planes into which gadgets are placed. Using this technique, we are able to show a relative lower bound of n^Omega(d) for several problems related to testing the discrepancy of a point set and verifying epsilon-nets.

We then present a more natural reduction technique that reduces from the d-Sum problem to show relative lower bounds for many problems arising from theorems in combinatorial geometry. These include computing minimal Helly sets, certain decision versions of the ham-sandwich problem, and computing the Tverberg depth of a point set.
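For concreteness, the source problem of these reductions is easy to state: given n numbers, decide whether some d of them sum to zero. A naive sketch (names ours) takes n^O(d) time, and it is the conjectured absence of substantially faster algorithms that such reductions transfer to the geometric problems:

    import itertools

    def d_sum(nums, d):
        # Naive O(n^d) test: do some d of the input numbers sum to 0?
        # (Exact-zero test; use a tolerance for floating-point input.)
        return any(sum(c) == 0 for c in itertools.combinations(nums, d))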

We then turn to computing a maximum-size subset of points in convex position. While all the previous problems admit straightforward n^O(poly(d)) algorithms in d dimensions, here we are able to show that the problem already becomes hard in 3 dimensions. This exhibits a strong dichotomy between the low- and higher-dimensional cases, because in the plane the problem was known to be solvable in polynomial time.

As a positive result, we then consider the problem of computing a point of high Tverberg depth in d dimensions. We present a novel lifting approach that allows us to compute deep points for a point set in high dimension from deep points of its projection to some lower dimensional space. The approach is very generic, and we show how to combine it with other known methods in order to get even better algorithms.

Finally, we give a short outlook and suggest further open problems on the subject.

Dieckmann, Claudia: Symmetry Detection and Approximation

Betreuer: Prof. Dr. Helmut Alt
Abschluss: PhD
Abgabedatum: 07.12.2012
Homepage des Autors:

In this thesis, we will present algorithms to solve the following two closely related problems:

The first problem we will consider is to detect the symmetry group of a two-dimensional object even in the case where its representation is distorted by noise. 

We will derive and analyze algorithms following different approaches in order to solve this problem.
One approach is to use the discrete Fourier transform in order to detect the symmetry group of the object. Here we assume the object to be represented by a gray-level image. The discrete Fourier transform is helpful in finding periodic structures in an input, since it decomposes a signal into its fundamental frequencies. We use this property to derive an algorithm that applies the discrete Fourier transform to the gray-level image and uses the result to determine the symmetry group of the represented object. The algorithm can be used for detecting finite as well as infinite symmetry groups.
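To illustrate the Fourier idea (a simplified sketch under assumptions of ours, e.g. that the center of symmetry is the image center; it is not the thesis algorithm itself): sample the image along rays from the center, and a k-fold rotational symmetry makes the resulting angular signal periodic with period 2*pi/k, which shows up as a dominant k-th harmonic.

    import numpy as np

    def rotational_symmetry_order(img, n_angles=360, max_order=12):
        # Sum intensities along rays from the image center, one value
        # per angle, and look for the dominant frequency of this
        # angular signal with the discrete Fourier transform.
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        radii = np.arange(1, int(min(h, w) / 2))
        angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        profile = np.empty(n_angles)
        for i, a in enumerate(angles):
            ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, h - 1)
            xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, w - 1)
            profile[i] = img[ys, xs].sum()
        spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
        # strongest low-order harmonic = candidate order of symmetry
        return int(np.argmax(spectrum[1:max_order + 1]) + 1)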

In addition, we investigate a second method based on the probabilistic approach that is also used for solving shape matching problems. For the algorithms based on probabilistic methods, we assume the object to be represented by a set of points, a set of polygonal curves, or a set of parametrized curves.

The basic idea of these algorithms is to randomly choose two points from the input set representing the object and to compute the transformation (rotation or reflection) mapping one point to the other. A vote for the computed transformation is cast in transformation space. This procedure is repeated sufficiently often until dominant clusters arise in transformation space.
The number of clusters with large numbers of votes corresponds to the number of symmetries of the object and can thus be used to compute the symmetry group of the object represented by the input set.
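A minimal sketch of this voting idea for rotational symmetries of a point set (the function, its parameters, and the tolerance are ours):

    import math, random
    from collections import Counter

    def symmetry_votes(points, trials=20000, deg_step=2):
        # Vote for rotation angles about the centroid that map one
        # randomly chosen point (approximately) onto another; dominant
        # buckets at multiples of 360/k suggest a k-fold symmetry.
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        votes = Counter()
        for _ in range(trials):
            p, q = random.sample(points, 2)
            # only points at (roughly) equal distance from the centroid
            # can be mapped onto each other by a rotation about it
            if abs(math.hypot(p[0] - cx, p[1] - cy)
                   - math.hypot(q[0] - cx, q[1] - cy)) > 1e-2:
                continue
            a = math.atan2(p[1] - cy, p[0] - cx)
            b = math.atan2(q[1] - cy, q[0] - cx)
            angle = math.degrees(b - a) % 360.0
            votes[round(angle / deg_step) * deg_step % 360] += 1
        return votes.most_common(8)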

Both approaches described above result in algorithms that are robust against noise. Thus they produce the correct answer even if the symmetries of the object were lost in the process of computing its representation as a gray-level image or as a set of geometric objects, respectively.

After detecting the symmetry group of an object represented by a point set that may be distorted by noise, another interesting problem is to find a point set that is symmetric with respect to the symmetries in the detected symmetry group and that closely approximates the input point set. The aim is to restore the symmetries that may have been lost when the object was converted into a representation that a computer can process. We consider a symmetric point set to be a close approximation of the input point set if each point of the (non-symmetric) input point set lies in the epsilon-neighborhood of a point of the symmetric point set; we require this correspondence to be a bijection. This problem is called the epsilon-Symmetry Detection problem (epsilon-SD problem).
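While finding a good symmetric point set is the hard part, the approximation condition itself is easy to verify for a given candidate; a sketch via augmenting-path bipartite matching (names ours):

    import math

    def eps_close_bijection(input_pts, sym_pts, eps):
        # Test the epsilon-SD approximation condition: does a bijection
        # exist that matches every input point to a symmetric point at
        # distance at most eps?  Augmenting-path bipartite matching
        # (Kuhn's algorithm); assumes both sets have equal size.
        n = len(input_pts)
        adj = [[j for j in range(n)
                if math.dist(input_pts[i], sym_pts[j]) <= eps]
               for i in range(n)]
        match = [-1] * n  # match[j] = input point using sym point j

        def augment(i, seen):
            for j in adj[i]:
                if j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        return all(augment(i, set()) for i in range(n))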

The epsilon-SD problem was already studied by S. Iwanowski, who proved it to be NP-complete in general. For some restricted versions of the epsilon-SD problem, Iwanowski showed the decision problem to be in P; for those we present polynomial-time algorithms solving the corresponding optimization problems. Additionally, we present polynomial-time algorithms for some restricted versions that had not been considered before. One possible restriction is to allow only point sets that are well-separated: Iwanowski proved the epsilon-SD problem to be in P when no two points have distance smaller than 8*epsilon, and to be NP-complete when the point set is at most epsilon/2-disjoint. We improve this result by developing polynomial-time algorithms for 4(1+delta)epsilon-disjoint point sets, for every delta > 0.

Gärtner, Bernd: Randomized Optimization by Simplex-Type Methods

Betreuer: Emo Welzl
Abschluss: PhD
Abgabedatum: 01.12.1995
Homepage des Autors:

The best-known optimization algorithm for the best-known optimization problem is the simplex algorithm for linear programming.

(LP) Maximize a linear function in d variables subject to n linear inequalities that the variables must satisfy.

For computational geometry, problems in small or even constant dimension (d = 2, 3) are of particular interest, so one is primarily interested in the running time of a method as a function of n. For the simplex algorithm (in a particular variant), there have been very interesting developments since 1992. It was shown to have optimal running time O(n) (albeit with an exponential dependence on d). More importantly, it turned out that a single, uniform simplex-type algorithm with the same time bounds can also solve many other problems for which linear-time solutions were in part unknown or only realizable in complicated ways. Two examples are

(MINIBALL): Find the smallest ball that contains n given points in d-dimensional space.

(POLYDIST): Given two polytopes with a total of n vertices in d-dimensional space, find the shortest distance between the polytopes.

In addition, the dependence on d has been improved: it is now 'only' exponential in sqrt(d).
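To give the flavor of such randomized, simplex-like methods, here is a sketch (ours, for the planar case and points in general position) of the Welzl-style randomized incremental algorithm for MINIBALL, which runs in expected linear time for fixed dimension:

    import math, random

    def _circle_two(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, math.dist(a, b) / 2)

    def _circle_three(a, b, c):
        # circumcircle; assumes the three points are not collinear
        (ax, ay), (bx, by), (cx, cy) = a, b, c
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
              + (cx*cx + cy*cy) * (ay - by)) / d
        uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
              + (cx*cx + cy*cy) * (bx - ax)) / d
        return (ux, uy, math.dist((ux, uy), a))

    def _inside(c, p, eps=1e-9):
        return math.dist((c[0], c[1]), p) <= c[2] + eps

    def miniball(points):
        # randomized incremental smallest enclosing circle,
        # expected linear time for fixed dimension d = 2
        pts = points[:]
        random.shuffle(pts)
        c = (pts[0][0], pts[0][1], 0.0)
        for i, p in enumerate(pts):
            if _inside(c, p):
                continue
            c = (p[0], p[1], 0.0)
            for j, q in enumerate(pts[:i]):
                if _inside(c, q):
                    continue
                c = _circle_two(p, q)
                for r in pts[:j]:
                    if not _inside(c, r):
                        c = _circle_three(p, q, r)
        return c  # (center_x, center_y, radius)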

The dissertation deals with several aspects of these developments.

  1. Which abstract classes of problems can be solved by simplex-type methods? It turns out that surprisingly few properties are needed to obtain efficient methods.
  2. What role does randomization play? The O(n) method mentioned above uses coin flips. This has the effect that it cannot be 'tricked' into very bad running time, exponential in d, as is possible for many deterministic methods.
  3. Can randomized methods perhaps still be tricked in a weaker sense? In other words, are there inputs on which the algorithms are not polynomial in n and d? This is not known, and the analysis is considerably more difficult than in the deterministic case. This question is one of the hot open problems in this area of research.

Seiferth, Paul: Disk Intersection Graphs: Models, Data Structures, and Algorithms

Betreuer: Prof. Dr. Wolfgang Mulzer
Abschluss: PhD
Abgabedatum: 19.08.2016
Homepage des Autors:

Let P be a set of n point sites in the plane. The unit disk graph UD(P) on P has vertex set P and an edge between two sites p,q of P if and only if p and q have Euclidean distance |pq| <= 1. If we interpret P as centers of disks with diameter 1, then UD(P) is the intersection graph of these disks, i.e., two sites p and q form an edge if and only if their corresponding unit disks intersect. Two natural generalizations of unit disk graphs appear when we assign to each point p of P an associated radius r_p > 0. The 
first one is the disk graph D(P), where we put an edge between p and q if and only if |pq| <= r_p + r_q, meaning that the disks with centers p and q and radii r_p and r_q intersect. The second one yields a directed graph on P, called the transmission graph of P. We obtain it by putting a directed edge from p to q if and only if |pq| <= r_p, meaning that q lies in the disk with center p and radius r_p. For disk and transmission graphs we define the radius ratio Psi to be the ratio of the largest and the smallest radius that is assigned to a site in P. It turns out that the radius ratio is an important measure of the complexity of the graphs and some of our results will depend on it. 
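The three graph classes follow directly from these definitions; a naive quadratic-time sketch (names ours), only meant to pin the definitions down:

    import math

    def disk_intersection_graphs(sites, radii):
        # Returns edge lists for UD(P), D(P), and the (directed)
        # transmission graph, where radii[p] is the radius r_p.
        n = len(sites)
        ud, dg, tg = [], [], []
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = math.dist(sites[i], sites[j])
                if i < j and d <= 1:
                    ud.append((i, j))              # unit disk graph
                if i < j and d <= radii[i] + radii[j]:
                    dg.append((i, j))              # disk graph
                if d <= radii[i]:
                    tg.append((i, j))              # transmission graph
        return ud, dg, tg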

For these three classes of disk intersection graphs we present data structures and algorithms that solve four types of graph-theoretic problems: dynamic connectivity, routing, spanner construction, and reachability oracles; see below for details. For disk and unit disk graphs, we improve upon the currently best known results, while for the problems we consider on transmission graphs no non-trivial solutions were known so far.


Dynamic Connectivity:
First, we present a data structure that maintains the connected components of a unit disk graph UD(P) when P changes dynamically. It takes O(log^2 n) amortized time to insert or delete a site in P and O(log(n)/loglog(n)) worst-case time to 
determine if two sites are in the same connected component. Here, n is the maximum size of P at any time. A simple variant improves the amortized update time to O(log(n)loglog(n)) at the cost of a slightly increased worst-case query time of O(log(n)).

Using more advanced data structures, we can extend our approach to disk graphs. While the worst-case query time remains
O(log(n)/loglog(n)), an update now requires O(Psi^2 2^(alpha(n))log^(10)(n)) amortized expected time, where Psi is the radius ratio of the disk graph and alpha(n) is the inverse Ackermann function.
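For intuition about why unit disk graphs are algorithmically pleasant, the static version of the problem has a simple grid-based solution (a sketch of ours, not the dynamic data structure described above): with cells of side 1, every potential neighbor of a site lies in one of the 3x3 surrounding cells.

    import math
    from collections import defaultdict

    def unit_disk_components(pts):
        # Static baseline: bucket sites into a unit grid, compare each
        # site only with sites in adjacent cells, and use union-find
        # to obtain the connected components of UD(P).
        parent = list(range(len(pts)))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        grid = defaultdict(list)
        for i, (x, y) in enumerate(pts):
            grid[(math.floor(x), math.floor(y))].append(i)
        for (gx, gy), cell in grid.items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for i in cell:
                        for j in grid.get((gx + dx, gy + dy), ()):
                            if i < j and math.dist(pts[i], pts[j]) <= 1:
                                parent[find(i)] = find(j)
        return [find(i) for i in range(len(pts))]  # component labels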


Routing: 
As the second problem, we consider routing in unit disk graphs. A routing scheme R for UD(P) assigns to each site s of P a 
label l(s) and a routing table rho(s). For any two sites s and t of P, the scheme R must be able to route a packet from s to t in the following way: given a current site r (initially, r = s), a header h (initially empty), and the target label l(t), the scheme R may consult the current routing table rho(r) to compute a new site r' and a new header h', where r' is a neighbor of r in UD(P). The packet is then routed to r', and the process is repeated until the packet reaches t. The resulting sequence of sites is called the routing path. The stretch of R is the maximum, over all pairs of distinct sites in P, of the ratio between the (Euclidean) length of the routing path produced by R and the length of a shortest path in UD(P).

For any given eps > 0, we show how to construct a routing scheme for UD(P) with stretch 1+eps using labels of O(log(n)) bits and routing tables of O(eps^(-5)log^2(n)log^2(D)) bits, where D is the (Euclidean) diameter of UD(P). The header size is O(log(n)log(D)) bits. 
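To illustrate the interface of a routing scheme, here is a deliberately naive greedy-forwarding sketch for UD(P) (ours, not the scheme above, and without its stretch guarantee): the 'routing table' of a site is just its neighbor list, and the 'label' of the target is its coordinates.

    import math

    def greedy_route(pts, adj, s, t, max_hops=10000):
        # Forward the packet to the neighbor geometrically closest to
        # the target.  Unlike the 1+eps scheme, greedy forwarding may
        # get stuck in a local minimum; then None is returned.
        path, r = [s], s
        for _ in range(max_hops):
            if r == t:
                return path
            nxt = min(adj[r], key=lambda v: math.dist(pts[v], pts[t]),
                      default=None)
            if nxt is None or math.dist(pts[nxt], pts[t]) >= math.dist(pts[r], pts[t]):
                return None  # greedy forwarding failed
            path.append(nxt)
            r = nxt
        return None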


Spanners:
Next, we construct sparse approximations of transmission and disk graphs. Let G be a transmission graph. A t-spanner for G is 
a subgraph H of G with vertex set P so that for any two sites p and q of P, we have d_H(p, q) <= td_G(p, q), where d_H and
d_G denote the shortest path distance in H and G (with Euclidean edge lengths). We show how to compute a t-spanner for G with O(n) edges in O(n(log(n) + log(Psi))) time, where Psi is the radius ratio of P. Utilizing advanced data structures, we obtain a
construction that runs in O(n log^5(n)) time, independent of Psi. This construction can be adapted to disk graphs and gives a t-spanner for D(P) in expected time O(n2^(alpha(n))log^(10)(n)), where alpha(n) is the inverse Ackermann function.
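For comparison, the textbook greedy construction of a t-spanner of the complete Euclidean graph fits in a few lines (a quadratic-storage sketch of ours, not the transmission/disk-graph construction above):

    import math

    def greedy_spanner(pts, t):
        # Scan pairs by increasing length and add an edge only if the
        # current spanner distance is worse than t times the length.
        n = len(pts)
        INF = float('inf')
        dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
        pairs = sorted((math.dist(pts[i], pts[j]), i, j)
                       for i in range(n) for j in range(i + 1, n))
        edges = []
        for d, i, j in pairs:
            if dist[i][j] > t * d:
                edges.append((i, j))
                dist[i][j] = dist[j][i] = d
                for u in range(n):          # relax all pairs through
                    for v in range(n):      # the newly added edge
                        nd = min(dist[u][i] + d + dist[j][v],
                                 dist[u][j] + d + dist[i][v])
                        if nd < dist[u][v]:
                            dist[u][v] = dist[v][u] = nd
        return edges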

As an application we show that our t-spanner can be used to find a BFS tree in a transmission or disk graph for any given start vertex in O(n log(n)) additional time.


Reachability Oracles:
Finally, we compute reachability oracles for transmission graphs. These are data structures that answer reachability queries: given two sites p and q, is there a directed path from p to q? The quality of an oracle is measured by the space S(n), the query time Q(n), and the preprocessing time. We present three reachability oracles whose quality depends on the radius ratio
Psi: the first one works only for Psi < sqrt(3) and achieves Q(n) = O(1) with S(n) = O(n) and preprocessing time O(n log(n));
the second data structure gives Q(n) = O(Psi^3 sqrt(n)) and S(n) = O(Psi^3 n^(3/2)); the third data structure is randomized with
Q(n) = O(n^(2/3)(log(n) + log(Psi))) and S(n) = O(n^(5/3)(log(n) + log(Psi))) and answers queries correctly with high probability.
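As a point of reference for these trade-offs, a naive oracle (sketch of ours) achieves Q(n) = O(1) for any Psi, but at the cost of quadratic space and preprocessing:

    from collections import deque

    def reachability_oracle(n, adj):
        # Precompute, by BFS from every site, the set of sites
        # reachable in the directed transmission graph; a query is
        # then an O(1) set lookup at the price of quadratic space.
        reach = []
        for s in range(n):
            seen, dq = {s}, deque([s])
            while dq:
                u = dq.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        dq.append(v)
            reach.append(seen)
        return lambda p, q: q in reach[p]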

As a second application for our spanners, we employ them to achieve a fast preprocessing time for our reachability oracles.