
Completed Dissertations

Bergold, Helena: Signotopes and Convex Drawings

Degree: PhD
Date of submission: 19.01.2024

Bui, Vuong: Growth of Bilinear Maps

Degree: PhD
Date of submission: 18.12.2023

Seiferth, Nadja: Algorithmic Aspects of Packing Problems

Degree: PhD
Date of submission: 29.11.2021

In this thesis, we study different kinds of packing problems. A packing is an arrangement of geometric objects in Euclidean space of any fixed dimension such that the interiors of the packed objects are pairwise disjoint. The aim is to optimize some objective function, such as the area of the smallest convex container enclosing the objects or the number of objects that can be packed into a given container. Here, we study problems only in two or three dimensions.

The thesis is divided into two parts. In the first part, we study the packing of identical objects. In most cases, we are given a container and want to compute the maximum number of copies of an object that can be packed into it. We obtain the following results:

The container is a fat parallelogram or triangle: Given a parallelogram container specified by a base edge and an additional point, there is a polynomial-time approximation scheme (PTAS) for computing the maximum number of copies of an object that can be packed into the container under translations or rigid motions, provided the following conditions hold:

  • The inner angles of the container are bounded from below by a constant, i.e., the container is fat. 
  • Either the object to be packed is part of the problem description and not of the algorithm input, or the area of the object to be packed is bounded from below by a constant times its squared diameter.

The same holds for a triangular container given by its side lengths.

The container is an arbitrary triangle: If the container is an arbitrary non-fat triangle given by its side lengths, a constant-factor approximation of the maximum number of unit disks that can be packed into it can be computed in polynomial time.

The objects to be packed are unit disks or unit spheres: Given a convex container whose area is bounded from below by a constant times its squared diameter, there is a PTAS for computing the maximum number of unit disks that can be packed into it. The same holds for any polygonal container if its description is part of the problem description rather than of the input, and the input is just a scaling factor for it. We also obtain a PTAS for a similar problem in three dimensions: given a sphere by its radius, what is the maximum number of unit spheres that can be packed into it? For this result, we revisit parts of the proof of Hales to obtain the following statement: there exists a constant c such that for every packing of infinitely many unit spheres into three-dimensional space, the density inside a sphere of radius r is bounded from above by π/√18 + c/r. It is crucial that c is, in contrast to the constant in the proof of Hales et al., independent of the packing.

Most variants of the problems just mentioned have in common that their input is very concise, i.e., it consists of only a constant number of numbers. This makes it hard to prove any kind of hardness or to give algorithms, and indeed, none of the problems just mentioned is known to be NP-hard or to lie in NP. To our knowledge, these are the first approximability results for problems with such concise input, and therefore the first results about the complexity of this kind of problem.

The next result differs from the previously studied problems in the sense that the input is larger. It is the last result in the first part of the thesis.

Packing unit disks in 3D under translation: In three dimensions, to our knowledge no approximation algorithm is known for packing objects other than axis-parallel boxes under translation, whereas in two dimensions there is a constant-factor approximation for packing convex polygons under translations while minimizing the area of the smallest axis-parallel rectangular or convex container. We make a step towards packing more general objects in three dimensions by giving a constant-factor approximation for packing unit disks under translation into an axis-parallel box of minimum volume. Furthermore, we show that there cannot be a convex container of bounded volume such that all possible unit disks can be packed into it. This is in contrast to the analogous two-dimensional problem, where all possible segments of length one can be packed, for example, into half a circle of radius two.

The second part of the thesis studies the complexity of two basic two-dimensional packing problems. Minimum area container packing: It is known that there cannot be a PTAS for Strip Packing unless P=NP, but there exists a constant-factor approximation algorithm. For packing convex polygons under translation into minimum-area convex or axis-parallel rectangular containers, there is a constant-factor approximation. Unlike for Strip Packing, it is not known whether a PTAS can exist. We narrow this gap by showing that there cannot exist an FPTAS unless P=NP. The same holds when we allow rotations by 90° in addition to translations.

Line segment packing: Kim and Miltzow showed that maximizing the number of line segments from a given set that can be packed into a simple polygon is NP-hard. We strengthen this result by showing that packing the maximum number of line segments into a square is NP-hard, although we allow parallel line segments, which is not necessary for Kim and Miltzow's result.

Klost, Katharina: Geometric Graphs: Reachability, Long Trees and Short Cycles

Degree: PhD
Date of submission: 21.09.2021

Given a set S of n point sites, a geometric graph is a graph with S as its vertex set whose edges are drawn as line segments connecting the sites. The edges that are present between these sites can be defined by geometric properties of the site set. When edges are weighted, the weight of the edge connecting s and t is the Euclidean distance between s and t. One such class of graphs consists of the spanning trees of the point set, that is, acyclic graphs on S in which all sites lie in the same connected component.

To define the second broad class of geometric graphs considered in this thesis, we extend each site with a radius. In this setting, the sites can also be interpreted as disks, with the site as the center. We consider two kinds of geometrically defined graphs on these extended sites. The first are disk graphs D(S). In a disk graph, two sites are connected by an edge if and only if their corresponding disks intersect. The second type are transmission graphs T(S). These can be seen as a directed version of disk graphs. In a transmission graph, a site s has a directed edge to a site t if and only if t is contained in the disk defined by s.
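
To make the two constructions concrete, the following minimal Python sketch (an illustration under assumed conventions, not code from the thesis) builds both edge sets by brute force; the thesis replaces this quadratic test with far more refined machinery.

    import math

    def disk_graph_edges(sites):
        # sites: list of (x, y, r) triples. Undirected edge between i and j
        # iff the two disks intersect, i.e. |pq| <= r_p + r_q.
        return [(i, j)
                for i, (px, py, pr) in enumerate(sites)
                for j, (qx, qy, qr) in enumerate(sites)
                if i < j and math.hypot(px - qx, py - qy) <= pr + qr]

    def transmission_graph_edges(sites):
        # Directed edge from s to t iff t lies in the disk of s, i.e. |st| <= r_s.
        return [(i, j)
                for i, (px, py, pr) in enumerate(sites)
                for j, (qx, qy, _) in enumerate(sites)
                if i != j and math.hypot(px - qx, py - qy) <= pr]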

We consider three main types of problems on these graphs:

TRIANGLES AND GIRTH IN DISK GRAPHS AND TRANSMISSION GRAPHS We give algorithms for finding a (shortest) triangle and, more generally, for finding short cycles. In general graphs, finding algorithms substantially faster than the naive approach is notoriously hard. However, better algorithms for special graph classes such as planar graphs exist in the literature. In this thesis, we obtain similarly efficient results for disk graphs and for transmission graphs. More precisely, we show that in a transmission graph a triangle can be detected, and a shortest such triangle found, in O(n log n) expected time. Furthermore, the weighted girth of a disk graph can be found within the same time bound. We also show that a cycle with k edges in a transmission graph can be identified in O(n log n) + n·2^{O(k)} expected time. For the results on transmission graphs, we develop batched range query data structures that are of independent interest.

DYNAMIC DISK GRAPH CONNECTIVITY We consider the problem of designing data structures that maintain a disk graph under the deletion of sites while allowing interleaved connectivity queries. First, we consider the setting where each site has a radius in the range [1, Ψ] for some fixed value Ψ. In this scenario, we give a data structure that supports m deletions in O((n log^5 n + m log^9 n) λ_6(log n) + n log Ψ log^4 n) overall expected time, with O(log n / log log n) query time, where λ_6(n) denotes the maximum length of a Davenport-Schinzel sequence of order 6 on n symbols. If we consider disk graphs without bounding the maximum allowed radius, we obtain a data structure that supports m deletions in O((n log^6 n + m log^{10} n) λ_6(log n)) overall expected time, with the same O(log n / log log n) query time.

LONG PLANE TREES We also consider spanning trees on the site set S. To be precise, we aim to find a plane spanning tree T_opt of S that maximizes the total edge length |T_opt|. Despite more than two decades of research, it remains open whether this problem is NP-hard.

We take two approaches to the problem. The first is to follow the avenue of approximation algorithms, which was also the focus of previous research. We describe a polynomial-time algorithm that constructs a plane tree T_alg with diameter at most four and |T_alg| >= 0.546 |T_opt|, where |T_opt| is the total edge length of an optimal plane spanning tree. This constitutes a significant improvement over the state of the art. Second, we consider exact polynomial-time algorithms for trees of bounded diameter. We give an O(n^4)-time algorithm for finding an exact solution for trees of diameter at most three and then extend this algorithm to special trees of diameter at most four.

Willert, Max: Routing and Stabbing

Degree: PhD
Date of submission: 24.03.2021

ROUTING. Let G be a simple, connected, undirected graph. We consider routing with preprocessing in G. In a preprocessing step, each vertex of G receives a label and a routing table. Then, we must be able to route a packet between any two vertices s and t of G, where each step may use only the label of the target node t, the routing table of the current node and the packet header. This problem has been studied extensively for general graphs, where efficient routing schemes with polylogarithmic routing tables have turned out to be impossible. Thus, special graph classes are of interest.

Let P be an x-monotone orthogonal polygon with n vertices. We call P a simple histogram if its upper boundary is a single edge, and a double histogram if it has a horizontal chord from the left boundary to the right boundary. Two points p and q in P are co-visible if and only if the (axis-parallel) rectangle spanned by p and q lies completely in P. In the r-visibility graph Vis(P) of P, we connect two vertices of P with a unit-weight edge if and only if they are co-visible. We present a routing scheme for visibility graphs of simple and of double histograms with label size log n and table size O(deg(v) log n) for each vertex v of P, where deg(v) is the degree of v in Vis(P). In simple histograms we can route along a shortest path and need no additional header, whereas in double histograms we need headers of size log n and route along a path of at most twice the length of an optimal path. The preprocessing time is in both cases O(m), where m is the number of edges in Vis(P).

Let V be a set of n sites in the plane. The unit disk graph DG(V) of V is the graph with vertex set V in which two sites v and w are adjacent if and only if their Euclidean distance is at most 1. The weight of an edge is the Euclidean distance between its endpoints. Moreover, we use D to denote the diameter of DG(V). We show that for any given eps > 0, we can construct a routing scheme for DG(V) that achieves stretch 1+eps, has label size O(eps^(-1) log D log^3 n / log log n), table size eps^(-O(eps^(-2))) log^3 n (1 + log D / log log n), and headers of at most O(log^2 n / log log n) bits. The preprocessing time is O(eps^(-1) n^2 log^2 n).

STABBING. Suppose we are given a set D of n pairwise intersecting disks in the plane. A planar point set P stabs D if and only if each disk in D contains at least one point from P. We present a deterministic algorithm that takes O(n) time to find five points that stab D. This provides a simple---albeit slightly weaker---algorithmic version of a classical result by Danzer that such a set D can always be stabbed by four points. Furthermore, we give a simple example of 13 pairwise intersecting disks that cannot be stabbed by three points.

Klemz, Boris: Facets of Planar Graph Drawing

Degree: PhD
Date of submission: 30.04.2020

This thesis makes a contribution to the field of Graph Drawing, with a focus on the planarity drawing convention. The following three problems are considered.

(1) Ordered Level Planarity:

We introduce and study the problem Ordered Level Planarity, which asks for a planar drawing of a graph such that vertices are placed at prescribed positions in the plane and such that every edge is realized as a y-monotone curve. This can be interpreted as a variant of Level Planarity in which the vertices on each level appear in a prescribed total order. We establish a complexity dichotomy with respect to both the maximum degree and the level-width, that is, the maximum number of vertices that share a level. Our study of Ordered Level Planarity is motivated by connections to several other graph drawing problems. With reductions from Ordered Level Planarity, we show NP-hardness of multiple problems whose complexity was previously open, and strengthen several previous hardness results. In particular, our reduction to Clustered Level Planarity generates instances with only two nontrivial clusters. This answers a question posed by Angelini, Da Lozzo, Di Battista, Frati, and Roselli [2015]. We settle the complexity of the Bi-Monotonicity problem, which was proposed by Fulek, Pelsmajer, Schaefer, and Stefankovic [2013]. We also present a reduction to Manhattan Geodesic Planarity, showing that a polynomial-time algorithm claimed in 2009 is incorrect unless P=NP.

(2) Two-page book embeddings of triconnected planar graphs:

We show that every triconnected planar graph of maximum degree five is a subgraph of a Hamiltonian planar graph or, equivalently, it admits a two-page book embedding. In fact, our result is more general: we only require vertices of separating 3-cycles to have degree at most five, all other vertices may have arbitrary degree. This degree bound is tight: we describe a family of triconnected planar graphs that cannot be realized on two pages and where every vertex of a separating 3-cycle has degree at most six. Our results strengthen earlier work by Heath [1995] and by Bauernöppel [1987] and, independently, Bekos, Gronemann, and Raftopoulou [2016], who showed that planar graphs of maximum degree three and four, respectively, can always be realized on two pages. The proof is constructive and yields a quadratic time algorithm to realize the given graph on two pages.

(3) Convexity-increasing morphs:

We study the problem of convexifying drawings of planar graphs. Given any planar straight-line drawing of an internally 3-connected graph, we show how to morph the drawing to one with strictly convex faces while maintaining planarity at all times. Our morph is convexity-increasing, meaning that once an angle is convex, it remains convex. We give an efficient algorithm that constructs such a morph as a composition of a linear number of steps where each step either moves vertices along horizontal lines or moves vertices along vertical lines. Moreover, we show that a linear number of steps is worst-case optimal.

Banyassady, Bahareh: The Limited Workspace Model for Geometric Algorithms

Degree: PhD
Date of submission: 26.04.2019

Space usage has been a concern since the very early days of algorithm design. The increased availability of devices with limited memory or power supply – such as smartphones, drones, or small sensors – as well as the proliferation of new storage media for which write access is comparatively slow and may have negative effects on the lifetime – such as flash drives – have led to renewed interest in the subject. As a result, the design of algorithms for the limited workspace model has seen a significant rise in popularity in computational geometry over the last decade. In this setting, we typically have a large amount of data that needs to be processed. Although we may access the data in any way and as often as we like, write-access to the main storage is limited and/or slow. Thus, we opt to use only higher level memory for intermediate data (e.g., CPU registers). Since the application areas of the devices mentioned above – sensors, smartphones, and drones – often handle a large amount of geographic (i.e., geometric) data, the scenario becomes particularly interesting from the viewpoint of computational geometry. Motivated by these considerations, we investigate geometric problems in the limited workspace model. In this model the input of size n resides in read-only memory, an algorithm may use a workspace of size s ∈ {1, . . . , n} to read and write the intermediate data during its execution, and it reports the output to a write-only stream. The goal is to design algorithms whose running time decreases as s increases, which provides a time-space trade-off. In this thesis, we consider three fundamental geometric problems, namely, computing different types of Voronoi diagrams of a planar point set, computing the Euclidean minimum spanning tree of a planar point set, and computing the k-visibility region of a point inside a polygonal domain. Using several innovative techniques, we either achieve the first time-space trade-offs for those problems or improve the previous results.
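
As a toy illustration of the model (not an algorithm from the thesis), the following sketch finds the closest pair of points using only O(1) words of workspace besides the read-only input, at the cost of quadratic running time; increasing the workspace s is what the trade-offs in the thesis exploit to bring the running time down.

    import math

    def closest_pair_constant_workspace(points):
        # points: read-only sequence of (x, y) pairs. Apart from the input,
        # only a constant number of variables is kept; the O(n^2) running
        # time is the price paid for the O(1) workspace.
        best, best_pair = math.inf, None
        n = len(points)
        for i in range(n):
            for j in range(i + 1, n):
                d = math.dist(points[i], points[j])
                if d < best:
                    best, best_pair = d, (i, j)
        return best_pair, best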

Grunert, Romain: Piecewise linear Morse theory

Degree: PhD
Date of submission: 18.11.2016
Language: English

Classical Morse theory studies smooth manifolds by means of certain smooth real-valued maps defined on them, namely so-called Morse functions, whose critical points are non-degenerate. For this study, one examines the topological changes of level sets that occur as the level, defined by a real function value, varies. The results of the theory allow one to infer global topological properties of the manifold from local changes at critical points. In this thesis, an analogous theory for combinatorial manifolds and piecewise linear maps defined on them is presented. The focus of the thesis lies on three topics:

Our first aim is a careful step-by-step transfer of basic results and their proofs, based on the study of level sets, from classical Morse theory to the piecewise linear setting. A valuable tool is a thorough investigation of how a polyhedral complex, together with a map that is linear on its cells, naturally induces, for each level set defined as the preimage of a closed interval, a polyhedral complex whose domain is that level set.

As another main topic of the thesis, we compare different characterisations of regular and non-degenerate critical points. Several definitions for such points turn out to be equivalent, but two characterisations suggested in the literature impose successively weaker requirements on such points. In this context, we also present a method to convert a discrete Morse function on a combinatorial manifold into a piecewise linear Morse function whose critical points correspond to the critical cells of the discrete Morse function.

The third topic addresses isotopies between level sets as considered in classical Morse theory as well. At least for sufficiently generic piecewise linear maps on combinatorial manifolds we prove the existence of isotopies across all level sets belonging to an interval provided that the interval contains no critical values.

The thesis concludes with considerations concerning selected computational aspects. First, we discuss the decision problem whether a given point is regular or not. Second, the algorithmic construction of the isotopy between level sets is analysed in order to obtain an upper bound for the number of cells in the combinatorially equivalent complexes that represent the isotopy.

Stein, Yannik: The Colorful Carathéodory Problem and its Descendants

Degree: PhD
Date of submission: 21.10.2016

The colorful Carathéodory theorem is an existence theorem that implies several statements on convex intersection patterns such as Tverberg's theorem, the centerpoint theorem, the first selection lemma, and the colorful Kirchberger theorem. Interestingly, these proofs can be interpreted as polynomial-time reductions to ColorfulCarathéodory, the computational search problem that corresponds to the colorful Carathéodory theorem. We exploit this existing web of reductions by developing approximation algorithms and complexity bounds on ColorfulCarathéodory that also apply to its polynomial-time descendants.

Let C_1, ..., C_(d+1) \subset R^d be finite point sets such that 0 \in conv(C_i) for i \in [d+1]. Then, the colorful Carathéodory theorem asserts that we can choose one point from each set C_i such that the chosen points C contain the origin in their convex hull. ColorfulCarathéodory is then the computational problem of finding C. Since a solution always exists and since it can be verified in polynomial time, ColorfulCarathéodory is contained in total function NP (TFNP), the class of NP search problems that always admit a solution. We show that ColorfulCarathéodory belongs to the intersection of two important subclasses of TFNP: the complexity classes polynomial-time parity argument on directed graphs (PPAD) and polynomial-time local search (PLS). The formulation of ColorfulCarathéodory as a PPAD problem is based on a new constructive proof of the colorful Carathéodory theorem that uses Sperner's lemma. Moreover, we show that already a slight change in the definition of ColorfulCarathéodory results in a PLS-complete problem.

In the second part, we present several constructive results. First, we consider an approximation version of ColorfulCarathéodory in which we are allowed to take more than one point from each set C_i. This notion of approximation has not been studied before and it is compatible with the polynomial-time reductions to ColorfulCarathéodory. For any fixed eps > 0, we can compute a set C with 0 \in conv(C) and at most \lceil eps d \rceil points from each C_i in d^O(eps^(-1) log eps^(-1)) time by repeatedly combining recursively computed approximations for lower-dimensional problem instances. Additionally, we consider a further notion of approximation in which we are given only k < d+1 sets C_i with 0 \in conv(C_i), and we want to find a set C with at most \lceil (d+1) / k \rceil points from each set C_i. The existence of C is a direct implication of the colorful Carathéodory theorem. Using linear programming techniques, we can solve the case k=2 in weakly polynomial time. Moreover, we show that ColorfulCarathéodory can be solved exactly in quasi-polynomial time when given poly(d) sets C_i that contain the origin in their convex hulls instead of only d+1. Finally, we consider the problem of computing the simplicial depth. The simplicial depth sigma_P(q) of a point q \in R^d w.r.t. a set P is the number of distinct d-simplices with vertices in P that contain q. If the dimension is constant, we show that sigma_P(q) can be (1+eps)-approximated w.h.p. in time ~O(n^(d/2+1)), where eps > 0 is an arbitrary constant. Furthermore, we show that the problem becomes #P-complete and W[1]-hard if the dimension is part of the input.

Seiferth, Paul: Disk Intersection Graphs: Models, Data Structures, and Algorithms

Degree: PhD
Date of submission: 19.08.2016

Let P be a set of n point sites in the plane. The unit disk graph UD(P) on P has vertex set P and an edge between two sites p, q of P if and only if p and q have Euclidean distance |pq| <= 1. If we interpret P as centers of disks with diameter 1, then UD(P) is the intersection graph of these disks, i.e., two sites p and q form an edge if and only if their corresponding unit disks intersect. Two natural generalizations of unit disk graphs appear when we assign to each point p of P an associated radius r_p > 0. The first one is the disk graph D(P), where we put an edge between p and q if and only if |pq| <= r_p + r_q, meaning that the disks with centers p and q and radii r_p and r_q intersect. The second one yields a directed graph on P, called the transmission graph of P. We obtain it by putting a directed edge from p to q if and only if |pq| <= r_p, meaning that q lies in the disk with center p and radius r_p. For disk and transmission graphs, we define the radius ratio Psi to be the ratio of the largest and the smallest radius that is assigned to a site in P. It turns out that the radius ratio is an important measure of the complexity of the graphs, and some of our results will depend on it.
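
In code, the basic definitions are one-liners (a hypothetical sketch for illustration, not from the thesis):

    import math

    def unit_disk_edge(p, q):
        # p, q: (x, y) sites; edge of UD(P) iff Euclidean distance |pq| <= 1.
        return math.hypot(p[0] - q[0], p[1] - q[1]) <= 1.0

    def radius_ratio(radii):
        # Psi: ratio of the largest to the smallest radius assigned to a site.
        return max(radii) / min(radii)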

For these three classes of disk intersection graphs, we present data structures and algorithms that solve four types of graph-theoretic problems: dynamic connectivity, routing, spanner construction, and reachability oracles; see below for details. For disk and unit disk graphs, we improve upon the best previously known results, while for the problems we consider on transmission graphs, no non-trivial solutions were known so far.


Dynamic Connectivity:
First, we present a data structure that maintains the connected components of a unit disk graph UD(P) when P changes dynamically. It takes O(log^2 n) amortized time to insert or delete a site in P and O(log(n)/loglog(n)) worst-case time to determine if two sites are in the same connected component. Here, n is the maximum size of P at any time. A simple variant improves the amortized update time to O(log(n)loglog(n)) at the cost of a slightly increased worst-case query time of O(log(n)).

Using more advanced data structures, we can extend our approach to disk graphs. While the worst-case query time remains O(log(n)/loglog(n)), an update now requires O(Psi^2 2^(alpha(n)) log^(10)(n)) amortized expected time, where Psi is the radius ratio of the disk graph and alpha(n) is the inverse Ackermann function.


Routing: 
As the second problem, we consider routing in unit disk graphs. A routing scheme R for UD(P) assigns to each site s of P a label l(s) and a routing table rho(s). For any two sites s and t of P, the scheme R must be able to route a packet from s to t in the following way: given a current site r (initially, r = s), a header h (initially empty), and the target label l(t), the scheme R may consult the current routing table rho(r) to compute a new site r' and a new header h', where r' is a neighbor of r in UD(P). The packet is then routed to r', and the process is repeated until the packet reaches t. The resulting sequence of sites is called the routing path. The stretch of R is the maximum ratio of the (Euclidean) length of the routing path produced by R and the shortest path in UD(P), over all pairs of distinct sites in P.
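
The routing process itself is a simple loop; the following sketch fixes one possible interface (the names and the convention that a table returns None on arrival are assumptions for illustration, not the scheme from the thesis):

    def route(s, t_label, tables):
        # tables[r] is the routing function of site r: it maps the pair
        # (target label, header) to (next site, new header); by convention
        # it returns (None, header) once the packet has arrived at t.
        current, header, path = s, None, [s]
        while True:
            nxt, header = tables[current](t_label, header)
            if nxt is None:        # packet has reached its target
                return path        # the routing path
            path.append(nxt)
            current = nxt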

For any given eps > 0, we show how to construct a routing scheme for UD(P) with stretch 1+eps using labels of O(log(n)) bits and routing tables of O(eps^(-5)log^2(n)log^2(D)) bits, where D is the (Euclidean) diameter of UD(P). The header size is O(log(n)log(D)) bits. 


Spanners:
Next, we construct sparse approximations of transmission and disk graphs. Let G be a transmission graph. A t-spanner for G is a subgraph H of G with vertex set P so that for any two sites p and q of P, we have d_H(p, q) <= t d_G(p, q), where d_H and d_G denote the shortest path distance in H and G (with Euclidean edge lengths). We show how to compute a t-spanner for G with O(n) edges in O(n(log(n) + log(Psi))) time, where Psi is the radius ratio of P. Utilizing advanced data structures, we obtain a construction that runs in O(n log^5(n)) time, independent of Psi. This construction can be adapted to disk graphs and gives a t-spanner for D(P) in expected time O(n 2^(alpha(n)) log^(10)(n)), where alpha(n) is the inverse Ackermann function.
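
The defining property is easy to check by brute force on small instances; the sketch below (illustrative only, with Dijkstra providing the shortest-path distances) verifies d_H(p, q) <= t d_G(p, q) for all pairs.

    import heapq, math

    def sssp(n, adj, src):
        # Dijkstra from src; adj[u] is a list of (v, weight) pairs.
        dist = [math.inf] * n
        dist[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    def is_t_spanner(n, adj_G, adj_H, t):
        # H (a subgraph of G) is a t-spanner iff d_H(p, q) <= t * d_G(p, q).
        for p in range(n):
            dG, dH = sssp(n, adj_G, p), sssp(n, adj_H, p)
            if any(dH[q] > t * dG[q] + 1e-9 for q in range(n)):
                return False
        return True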

As an application we show that our t-spanner can be used to find a BFS tree in a transmission or disk graph for any given start vertex in O(n log(n)) additional time.


Reachability Oracles:
Finally, we compute reachability oracles for transmission graphs. These are data structures that answer reachability queries: given two sites p and q, is there a directed path between them? The quality of an oracle is measured by the space S(n), the query time Q(n), and the preprocessing time. We present three reachability oracles whose quality depends on the radius ratio Psi: the first one works only for Psi < sqrt(3) and achieves Q(n) = O(1) with S(n) = O(n) and preprocessing time O(n log(n)); the second data structure gives Q(n) = O(Psi^3 sqrt(n)) and S(n) = O(Psi^3 n^(3/2)); the third data structure is randomized with Q(n) = O(n^(2/3)(log(n) + log(Psi))) and S(n) = O(n^(5/3)(log(n) + log(Psi))) and answers queries correctly with high probability.

As a second application for our spanners, we employ them to achieve a fast preprocessing time for our reachability oracles.

Kim, Heuna: Congruence Testing for Point Sets in 4-Space

Degree: PhD
Date of submission: 20.06.2016

Congruence is the geometric concept of being the same up to rotations and translations in Euclidean space. As congruence is a fundamental concept in geometry, the question whether the curse of dimensionality applies to congruence testing has long drawn broad attention from the computational geometry community. We develop a deterministic algorithm with optimal running time for congruence testing in 4-space.

To place the main algorithm in its historical context, we provide a survey of the computational model and of previous work on congruence testing algorithms. The crucial ingredients of the algorithm are explained component by component. These include general 4-dimensional rotations, angles between linear subspaces, and the Plücker embedding. In the sequence of steps in the algorithm, high regularities are forced in the structure of the point sets. This lets us encounter beautiful mathematical structures on a 3-sphere and in the symmetry groups of finite point sets: the Hopf fibration of a 3-sphere and the Coxeter classification of four-dimensional point groups. We also give an elementary and self-contained overview of these two mathematical topics.

The main algorithm consists of five modules that are interesting in their own right. The algorithm is complicated, and we provide rather pessimistic estimates. It can nevertheless be regarded as a big step towards constructing a more efficient algorithm in higher dimensions.

In the same vein, the last part is devoted to the extendability of the algorithm to higher dimensions. It concludes with a discussion of implementability and of geometric properties that the algorithm may imply.

Miltzow, Tillmann: Geometric and Combinatorial Problems of Matching and Partitioning in Theoretical Computer Science

Degree: PhD
Date of submission: 05.06.2015

In this thesis, we consider four problems in theoretical computer science:

1. Disjoint unit disks in the plane and disjoint unit balls in space can be separated by hyperplanes. In doing so, we try to minimize the number of intersections between the hyperplane and the balls. Although the first papers on this problem appeared in the 1980s, until now there was no optimal deterministic algorithm to find such a hyperplane. We present an exact algorithm in the plane and an approximation algorithm in higher dimensions. (This part is joint work with Michael Hoffman and Vincent Kusters.)

2. Tron is a computer game from the 1980s, first studied by Bodlaender and Kloks on abstract graphs. We answer questions regarding its algorithmic complexity that had remained open, and we study the minimal and maximal outcome of the game under the assumption that both players play optimally. We consider these questions in different game modes.

3. Pareto optimal matchings are a concept used in economics and game theory. They describe certain stable situations, similar to Nash equilibria. They also play a role in some algorithmic questions.
We give upper bounds on the number of Pareto optimal matchings under simple conditions. Furthermore, we investigate a series of related algorithmic questions. (This part is joint work with Andrei Asinowski and Balázs Keszegh.)

4. Geometric matchings are sets of non-crossing segments connecting a set of points in the plane. Although it is very simple to find colored point sets admitting exactly one geometric matching, until now there has been no characterization of such point sets in general. We give several simple and elegant characterizations and answer further questions about this class of point sets. (This part is joint work with Andrei Asinowski and Günter Rote.)

Jaume Deyà, Rafel: Tessellations for Geometric Optimization

Degree: PhD
Date of submission: 20.10.2014

This thesis is concerned with the study of some tessellations (or subdivisions) of the plane or of the space and their relation to some optimization problems. Several of the results have a combinatorial flavor, whereas others are strongly connected to the geometry underlying the corresponding problems. The work combines theoretical statements with applied implications and related algorithms, making use of linear algebra, convex geometry, graph theory and many other tools from discrete and computational geometry. 

Regular subdivisions are tessellations resulting from the projection of the lower faces of a polyhedron. In the first part of this thesis, we generalize regular subdivisions by introducing the class of recursively-regular subdivisions. Informally speaking, a recursively-regular subdivision is a subdivision that can be obtained by splitting some faces of a regular subdivision by other regular subdivisions (and continuing recursively). We also define the finest regular coarsening and the regularity tree of a subdivision. We derive several properties of these two objects, which reflect certain structure in the class of non-regular subdivisions. In particular, the finest regular coarsening of a subdivision is the regular subdivision that is (in a sense) most similar to it. We show that the class of recursively-regular subdivisions is a proper superclass of the regular subdivisions and a proper subclass of the visibility-acyclic subdivisions (in the sense of an acyclicity theorem by Edelsbrunner). We also show that there exist point sets whose recursively-regular triangulations are not connected by geometric bistellar flips.

We then derive several algorithms related to the studied objects, and point out applications of the main results. In particular, we present relations to tensegrity theory, data visualization, and graph embedding problems. Special attention is paid to the problem of covering the space by placing given floodlights at given points, for which we extend results known since 1981 and discuss two variants of the original problem. 

The second part is concerned with the study of optimal partial matchings for pairs of point sets under translations. First, we regard the least-squares cost function. The best approach to this problem so far is to construct (and explore) a particular tessellation of the space of translations. In every tile of the tessellation there is one matching that is optimal for any position of the point sets corresponding to a translation in that tile. We give the first non-trivial bound on the complexity of this tessellation in dimensions two and higher, and study several structural properties that lead to algorithms whose running time is polynomial in the size of the larger set. 

We address then the analogous problem under the bottleneck cost function. This cost function assigns to every matching the largest distance defined by a matched pair of points. An associated tessellation is shown to have polynomial complexity. This result, together with graph-theoretical tools, allows us to obtain efficient algorithms for the computation of the corresponding minimum under translations that are sensitive to the size of the smaller of the two sets. The lexicographic variant of the bottleneck cost is analyzed as well. 

Finally, we explore natural directions for the generalization of the problems of matching under translations to which many of our results extend.

Atariah, Dror: Observations and analysis in Configuration Spaces

Degree: PhD
Date of submission: 09.05.2014

This work covers three topics that can all be linked to the celebrated motion planning problem of planar robots. When considering a planar robot, which is represented by a convex polygon, it is natural to study an associated configuration space, where each point corresponds to a unique placement of the robot in its physical world. Obstacles in the world of the robot are represented as three-dimensional solids in the associated configuration space. The union of these solids is the so-called forbidden space. Each point of the forbidden space corresponds to a placement of the robot such that its interior intersects one or more obstacles. The remainder of the configuration space is the so-called free space. Of course, the boundary between the free and forbidden spaces is of tremendous interest.

The first part of the dissertation provides a simple and geometrically motivated parameterization of the surfaces that constitute the mentioned boundary. Using this parameterization, it is easy to produce various visualizations of the boundary. Furthermore, standard computations that utilize the parameterization yield deep understanding of the differential geometry of the boundary. Clearly, these are two achievements that contribute to the general study of the motion planning problem. In particular, it is also shown that the elements of the boundary corresponding to contacts between an edge of the robot and a vertex of an obstacle are non-developable ruled surfaces --- thus having a negative Gaussian curvature.

The negatively curved surfaces that emerge as portions of the discussed boundary motivated a search for an optimal triangulation of simpler negatively curved surfaces. Specifically, hyperbolic paraboloids (also known as saddles) are considered. The second part of the dissertation provides both interpolating and non-interpolating triangulations of general saddle surfaces. The optimality of the resulting triangulation used for the approximation is twofold: (i) it maintains a fixed error bound, and (ii) it minimizes the number of triangles needed to cover a given portion of the saddle.

Finally, the third part of the dissertation provides a detailed account of a contribution to the 2D Arrangements package of the "Computational Geometry Algorithms Library". This part of the work considers the computation of arrangements of bounded piecewise linear curves (polylines) in the plane. More precisely, the initial package code, as shipped with version 4.3, was considerably improved in two senses. First, computing arrangements of polylines with the modified code is about 5% faster on average. Second, the modified code is much more generic and suitable for further generalizations. As a result, the improved code was accepted for integration into the library and is scheduled to be shipped with its next release. Together, the achievements presented in the dissertation can contribute to the ongoing study of the motion planning problem, as well as to numerous more general purposes.

Schlipf, Lena: Stabbing and Covering Geometric Objects in the Plane

Degree: PhD
Date of submission: 11.09.2013

In this thesis, we consider a variety of different geometric covering and stabbing problems in the plane. Stabbing and covering geometric objects are two widely studied problems in computational geometry. These problems are closely related; there are many cases where covering problems are dual to stabbing problems. 

We first study a problem that was posed by Tamir in 1987: "Given a set of geometric objects in the plane, can one decide in polynomial time whether there exists a convex polygon whose boundary stabs every object?" Such a boundary is called a convex stabber. We answer this question by proving that deciding the existence of a convex stabber is NP-hard for many types of geometric objects. Additionally, we consider an optimization version and prove it to be APX-hard for most of the considered objects.

A similar problem is deciding whether geometric objects can be stabbed with the vertices of a rotated, scaled and translated copy of a given polygon. To the best of our knowledge, this problem has not been studied before, and we present the first polynomial-time algorithm.

Another stabbing problem studied in this thesis is the problem of stabbing sequences of geometric objects: given a distance measure and two sequences of geometric objects, compute two point sequences that stab them under the condition that the distance between these point sequences is as small as possible (using the given distance measure). We give efficient algorithms for this problem where the objects are either line segments or disks and the distance measure is the discrete Fréchet distance.

Then, we consider covering problems. We study a new version of the two-center problem where the input is a set D of disks in the plane. We first study the problem of finding two smallest congruent disks such that each disk in D intersects one of these two disks. Then, we study the problem of covering the set D by two smallest congruent disks. We also investigate an optimization version. For these problems, we give efficient exact and approximation algorithms. 

Finally, we investigate the problem of computing a largest-area rectangle inscribed in a convex polygon on n vertices. If the order of the vertices of the polygon is given, we give approximation algorithms whose running time depends only logarithmically on n.

Werner, Daniel: On the computational complexity of some problems from discrete geometry in higher dimensions

Degree: PhD
Date of submission: 18.01.2013

We will investigate computational aspects of several problems from discrete geometry in higher dimensions. In the plane, many of them are well understood and can be solved efficiently, but as the dimension increases, most of them seem to be considerably harder to solve. In this thesis, we make progress towards explaining this phenomenon by showing computational hardness for some of these problems. To this end, we also make use of parameterized complexity theory in order to show stronger relative lower bounds than those possible with classical complexity theory only. For one of the problems, we moreover develop several approximation algorithms. In the process, we pay particular attention to the exact dependence of the running time on the dimension.

We will use and develop different techniques for showing hardness of the problems in unbounded dimension. These include the technique of deconstructing the space into orthogonal planes, into which gadgets are placed. Using this technique, we are able to show a relative lower bound of n^Omega(d) for several problems related to testing the discrepancy of a point set and verifying epsilon-nets.

We then present a more natural reduction technique that reduces from the d-Sum problem to show relative lower bounds for many problems arising from theorems in combinatorial geometry. These include computing minimal Helly sets, certain decision versions of the ham-sandwich problem, and computing the Tverberg depth of a point set.

We then turn to computing a maximum-size subset of points in convex position. While all the previous problems admit straightforward n^O(poly(d)) algorithms in d dimensions, here we are able to show that the problem already becomes hard in 3 dimensions. This shows a strong dichotomy between the low- and the higher-dimensional case, because in the plane the problem was known to be solvable in polynomial time.

As a positive result, we then consider the problem of computing a point of high Tverberg depth in d dimensions. We present a novel lifting approach that allows us to compute deep points for a point set in high dimension from deep points of its projection to some lower dimensional space. The approach is very generic, and we show how to combine it with other known methods in order to get even better algorithms.

Finally, we give a short outlook and suggest further open problems on the subject.

Dieckmann, Claudia: Symmetry Detection and Approximation

Degree: PhD
Date of submission: 07.12.2012

In this thesis, we will present algorithms to solve the following two closely related problems:

The first problem we will consider is to detect the symmetry group of a two-dimensional object even in the case where its representation is distorted by noise. 

We will derive and analyze algorithms following different approaches in order to solve this problem.
One approach is to use the discrete Fourier transform in order to detect the symmetry group of the object. Here we assume the object to be represented by a gray-level image. The discrete Fourier transform is helpful in finding periodic structures in an input since it decomposes a signal into its fundamental frequencies. We will use this property in order to derive an algorithm which applies the discrete Fourier transform to the gray-level image and uses the result in order to determine the symmetry group of the represented object. The algorithm can be used for detecting finite as well as infinite symmetry groups.

In addition, we will investigate a second method based on a probabilistic approach that is also used for solving shape matching problems. For the algorithms based on probabilistic methods, we assume the object to be represented by a set of points, a set of polygonal curves, or a set of parametrized curves.

The basic idea of these algorithms is to repeatedly choose two points at random from the input set representing the object and to compute the transformation (rotation or reflection) mapping one point to the other. A vote for the computed transformation is cast in transformation space. This procedure is repeated sufficiently often until dominant clusters arise in transformation space.
The number of clusters with large numbers of votes indicates the number of symmetries of the object and can thus be used to compute the symmetry group of the object represented by the input set.
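
A deliberately simplified sketch of the voting idea for rotational symmetries (an illustration, not the thesis's algorithm, which votes for full transformations including reflections): pairs of sample points vote for the angular difference about the centroid, and pronounced clusters of votes hint at the rotational symmetries.

    import math, random
    from collections import Counter

    def vote_rotation_angles(points, trials=10000, bins=360):
        cx = sum(x for x, y in points) / len(points)
        cy = sum(y for x, y in points) / len(points)
        votes = Counter()
        for _ in range(trials):
            (px, py), (qx, qy) = random.sample(points, 2)
            # angular difference about the centroid, discretized into bins
            a = math.atan2(qy - cy, qx - cx) - math.atan2(py - cy, px - cx)
            votes[round((a % (2 * math.pi)) / (2 * math.pi) * bins) % bins] += 1
        return votes.most_common(10)   # dominant angle clusters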

Both approaches described above result in algorithms that are robust against noise. Thus they give the correct answers even if the symmetries of the object were lost in the process of computing its representation by a gray-level image or a set of geometric objects, respectively.

After detecting the symmetry group of an object represented by a point set that might be distorted by noise, another interesting problem is to find a point set that is symmetric with respect to the symmetries in the detected symmetry group and that is a close approximation of the input point set. The aim is to restore the symmetries that may have been lost in the process of representing the object in a form that a computer can process. We consider a symmetric point set to be a close approximation of the input point set if each point of the (non-symmetric) input point set lies in the epsilon-neighborhood of a point in the symmetric point set. We require this correspondence to be a bijection. This problem is called the epsilon-Symmetry Detection problem (epsilon-SD problem).

The epsilon-SD problem was already studied by S. Iwanowski, who proved it to be NP-complete in general. For some restricted versions of the epsilon-SD problem, Iwanowski proved the decision problem to be in P. For those versions, we present polynomial-time algorithms solving the corresponding optimization problems. Additionally, we present polynomial-time algorithms for some restricted versions that had not been considered before. One possible restriction is to allow only point sets that are well-separated. Iwanowski proved the epsilon-SD problem to be in P in the case where no two points have distance smaller than 8*epsilon, and proved it to be NP-complete in the case where the point set is at most epsilon/2-disjoint. We improve this result by developing polynomial-time algorithms for 4(1+delta)epsilon-disjoint point sets for every delta > 0.

Scherfenberg, Marc: Searching point patterns, matching imprecise point patterns, and inducing polygons

Degree: PhD
Date of submission: 27.01.2012

Computing the similarity of two geometric shapes is one of the fundamental tasks of computational geometry. As a rule, similarity is measured by a real-valued distance function. Over time, a number of such distance functions have been developed; they are defined on different classes of geometric shapes and each assesses a different aspect of similarity. The algorithms known so far for computing these distance functions assume that the geometric shapes are given exactly. In most applications, however, this exactness is not available, since the input data are obtained by measurements that carry a certain imprecision. In many cases, though, the imprecision associated with the measurement is known. Two important distance measures for determining the similarity of geometric shapes are the directed and undirected Hausdorff distance, defined on point sets, and the discrete Fréchet distance, defined on point sequences.

As part of this dissertation, algorithms for computing these distance measures were developed that take the known imprecision of the input data into account in order to compute exactly the supremum and infimum of the similarity value over all realizations of the geometric shapes that are possible within the given imprecision. Furthermore, it was proven that in some cases computing the infimum, as well as approximating it within a certain approximation factor, is NP-hard. For some of these cases, approximation algorithms were developed, some of which achieve the best approximation factor that is theoretically possible. These results form one focus of the dissertation and are presented in this talk.

The talk also briefly covers the further results contained in the dissertation. These include, among others, search reductions that enable a non-heuristic nearest-neighbor search for point patterns, as well as the answer to the previously open question of whether every arrangement of n lines can be induced by a simple n-gon. The literature contains many non-heuristic search structures for single points, as well as search heuristics for geometric patterns that are more complex than a single point. The search reductions presented in the dissertation make it possible to reduce searching for point patterns under distance measures, which need not satisfy the axioms of a metric, to nearest-neighbor searches for single points by embedding them into a metric space. For some distance measures, the embedding can be extended so that the search reduction yields translation-invariant searches. For most of the distance measures considered, the search reduction enables, for the first time, a non-heuristic nearest-neighbor search under that distance measure.

Schymura, Daria: Probabilistic Matching of Solid Shapes in Arbitrary Dimension

Degree: PhD
Date of submission: 18.11.2011

Determining the similarity between objects is a fundamental problem in computer vision and pattern recognition, but also in other fields of computer science. This thesis concentrates on the matching problem, which has received a lot of attention in Computational Geometry.

Given a class of shapes S, a set of transformations T, mapping shapes onto shapes, and a distance measure d on S, the matching problem with respect to S, T, and d is defined as follows: Given two shapes A, B in S, compute a transformation t* in T that minimizes d(t*(A),B).

We consider solid shapes, i.e., full-dimensional shapes, in arbitrary dimension and assume that they are given by an oracle that generates uniformly distributed random points from the shapes. This is a very rich class of shapes that contains the class of finite unions of simplices as a subclass. We study matching under translations and rigid motions (translation and rotation). Throughout this work, the volume of the symmetric difference is used as distance measure for the matching problem. Maximizing the volume of the overlap is equivalent to minimizing the volume of the symmetric difference under translations and rigid motions.
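
This equivalence rests on a one-line identity: for any volume-preserving transformation t,

    vol(t(A) Δ B) = vol(A) + vol(B) − 2 vol(t(A) ∩ B),

so the first two terms do not depend on t, and minimizing the symmetric difference over all translations or rigid motions is the same as maximizing the volume of overlap vol(t(A) ∩ B).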

We study a probabilistic approach to the shape matching problem. The main idea is quite simple. Given two shapes A and B, repeat the following random experiment very often: Select random point samples of appropriate size from each shape and compute a transformation that maps the point sample of one shape to the sample of the other shape. Store this transformation. In each step, we extend the collection of random transformations by one. Clusters in the transformation space indicate transformations that map large parts of the shapes onto each other. We determine a densest cluster and output its center.
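
A minimal sketch of this voting scheme for matching under translations (an illustration under assumed parameters; the thesis analyzes sample sizes, rigid motions, and the cluster search in depth): each random pair of points, one from each shape, votes for the translation mapping the first onto the second.

    import random
    from collections import Counter

    def vote_translations(sample_A, sample_B, trials=10000, grid=0.05):
        # sample_A, sample_B: uniform random points from shapes A and B, as
        # provided by the oracle. Votes are discretized to a grid; the center
        # of a densest cluster approximates a translation with large overlap.
        votes = Counter()
        for _ in range(trials):
            ax, ay = random.choice(sample_A)
            bx, by = random.choice(sample_B)
            votes[(round((bx - ax) / grid), round((by - ay) / grid))] += 1
        (gx, gy), _ = votes.most_common(1)[0]
        return (gx * grid, gy * grid)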

This thesis describes probabilistic algorithms for matching solid shapes in arbitrary dimension under translations and rigid motions. The algorithms are a priori heuristics. The main focus is on analyzing them and on proving that they maximize the volume of overlap approximately, by solving the following instance of the matching problem: given two solid shapes A and B, an error tolerance eps in (0,1), and an allowed probability of failure p in (0,1), compute a transformation t* such that, with probability at least 1-p, the volume of the intersection of t*(A) and B is at least as large as the volume of the intersection of t(A) and B minus eps times the volume of A, for all transformations t, in particular for transformations maximizing the volume of overlap.

The approach is mainly of theoretical interest. Still, the algorithms are so simple that they can easily be implemented, which we show by giving experimental results of a test implementation for 2-dimensional shapes.

Scholz, Sven: Visual shape similarity and retrieval of figurative images

Degree: PhD
Date of submission: 18.05.2011

The goal of the present work was to develop a system for automated similarity retrieval of figurative images---especially trademark images---which gives results that resemble human similarity estimation. 

In the first chapter, findings about the peculiarities of the perception of images and about human similarity estimation are compiled and the special needs of similarity retrieval of trademark images are explained.

As the depicted shapes play an important role in the estimation of similarity, an approach for the detection of the shapes has been developed. It takes into account that shapes may be depicted in different ways (by regions, using textures, by contour lines) and that images often contain compression artefacts and noise.

For the estimation of the similarity of images based on the detected shapes, an approach has been developed that, in a first stage, computes transformations that map the images onto each other and, in a second stage, compares the mapped images. For the computation of the transformations, an existing randomized approach has been enhanced; it chooses appropriate transformations based on collecting votes. For the comparison of the mapped images, a new similarity measure on the contour lines has been developed which takes the correspondences in position and direction into account.

Based on these components a system for similarity retrieval has been developed which also considers the special needs of similarity retrieval of trademark images. The experimental results show a high conformance with human similarity estimation. The results are significantly better than the ones achieved by existing systems.

Schmidt, Jens M.: Certifying Graph Algorithms and 3-Connectedness

Degree: PhD
Date of submission: 18.03.2011

The class of 3-connected (i.e., 3-vertex-connected) graphs has been studied intensively for many reasons in the past 50 years. One algorithmic reason is that graph problems can often be reduced to handling only 3-connected graphs; applications include problems in graph drawing, problems related to planarity, and online problems on planar graphs.

It is possible to test whether a graph is 3-connected in linear time. However, the known linear-time algorithms are complicated and difficult to implement. For that reason, it is essential to verify that implementations of these algorithms are correct. A way to check the correctness of an algorithm on every instance is to make it certifying, i.e., to enhance its output by an easy-to-verify certificate of correctness for that output. However, surprisingly little work has been devoted to finding certifying algorithms that test 3-connectivity; in fact, the currently fastest such algorithms need quadratic time.
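The certifying paradigm itself can be illustrated on a much simpler property than 3-connectivity. The toy Python sketch below (our illustration, not the thesis's algorithm) certifies plain connectivity: a "yes" answer comes with a spanning tree, a "no" answer with a nonempty proper vertex set that no edge leaves, and an independent checker verifies either certificate.

```python
# Toy certifying connectivity test: answer plus easy-to-verify certificate.
def certify_connected(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, stack = {0: None}, [0]
    while stack:                          # DFS from vertex 0
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    if len(parent) == n:                  # yes-certificate: spanning tree
        return True, [(v, p) for v, p in parent.items() if p is not None]
    return False, set(parent)             # no-certificate: a cut set

def verify_certificate(n, edges, answer, cert):
    """Independent checker, deliberately simpler than the algorithm."""
    if answer:
        edge_set = {frozenset(e) for e in edges}
        if len(cert) != n - 1 or any(frozenset(e) not in edge_set for e in cert):
            return False
        root = list(range(n))
        def find(x):
            while root[x] != x:
                root[x] = root[root[x]]
                x = root[x]
            return x
        for u, v in cert:                 # n-1 acyclic edges = spanning tree
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            root[ru] = rv
        return True
    return 0 < len(cert) < n and all((u in cert) == (v in cert) for u, v in edges)
```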

Two classic results in graph theory due to Barnette, Grünbaum and Tutte show that 3-connected graphs can be characterized by the existence of certain inductively defined constructions. We give new variants of these constructions, relate these to already existing ones and show how they can be exploited algorithmically. Our main result is a linear-time certifying algorithm for testing 3-connectivity, which is based on these constructions. This yields also simple certifying algorithms in linear time for 2-connectivity, 2-edge-connectivity and 3-edge-connectivity. We conclude this thesis by a structural result that shows that one of the constructions is preserved when being applied to depth-first trees of the graph only.

Stehn, Fabian: Geometric Hybrid Registration

Abschluss
PhD
Abgabedatum
14.02.2011
Homepage des Autors

Geometric matching problems are among the most intensely studied fields in Computational Geometry. A geometric matching problem can be formulated as follows: given are two geometric objects P and Q, taken from a class of geometric objects G, where P is called the "pattern" and Q the "model". A geometric matching instance is defined for a distance measure dist and a transformation class T. The task is to find the transformation t in T that minimizes dist(t(P),Q).

In this thesis, geometric hybrid registration problems are studied. Registration problems are closely related to geometric matching problems. The term geometric registration problem describes the task of mapping points from one space ("pattern space") to their corresponding points in a deformed copy of that space called "model space".

This research is motivated by a real-world application: navigated surgery. Here, the goal is to register an operation theatre space (pattern space) to the internal coordinate system (model space) of a medical navigation system. The purpose of a medical navigation system is to support surgeons by visualizing the surgical instruments in use at their correct position in a 3D model of a patient. The models are generated beforehand based on CT or MRI scans.

Hybrid registration is a novel strategy for computing solutions to this alignment problem. Geometric hybrid registrations reduce the spatial synchronization problem to a series of (at least two) geometric matching problems that are solved interdependently. Usually, a computationally involved point-to-surface matching is combined with a comparably simpler but underdetermined point-to-point matching. The point-to-surface matching maps a sufficiently large and suitably distributed set of points (called surface points) measured in the pattern space to a geometric surface in the model space. For the point-to-point matching, a small set of (one to three) characteristic points is measured in the pattern space and defined in the model space. In the context of the intended application, these points are called anatomical landmarks: anatomically exposed spots within the field of interest.
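A standard building block for the point-to-point stage is the least-squares rigid motion between corresponding landmark sets, which has a closed-form solution via the singular value decomposition (the Kabsch construction). The numpy sketch below is illustrative and is not claimed to be the method of this thesis.

```python
# Hedged sketch: best rigid motion (R, t) with R @ p + t ~ q in least squares.
import numpy as np

def rigid_point_to_point(pattern_pts, model_pts):
    P = np.asarray(pattern_pts, dtype=float)
    Q = np.asarray(model_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    # flip the last singular direction if needed, to exclude reflections
    D[-1, -1] = 1.0 if np.linalg.det(Vt.T @ U.T) >= 0 else -1.0
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```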

Sturm, Astrid: Geometric Approximations in Two- and Three Dimensional Space

Abschluss
PhD
Abgabedatum
27.11.2009
Homepage des Autors

The goal of geometric approximation is to replace a given complex geometric object by a simpler object while capturing the significant features of the original. 
In the first part of the thesis we deal with approximating polygonal curves in 2-dimensional space. For a polygonal curve this approximation can be done either by a simpler polygonal curve (a curve with less segments) or by a higher order curve. We were able to compute an approximation of open polygonal curves with the minimum number of circular arcs (chapter 1) and also with the minimum number of biarcs (chapter 2) for a given upper error bound epsilon. 
In the second part of the thesis we move on to 3-dimensional space and to polytope approximations. 
We initiate the study of this problem by considering convex surfaces only, for simplicity, before moving on to non-convex surfaces. 
A first natural step to higher order approximation of convex polytopes is the approximation with spheres or spherical patches. 
In chapter 3 we show that deciding the existence of an approximation of a convex polytope with a given upper error bound epsilon and not more than a given number of spherical patches is NP-hard. 
In chapter 4 we present a new technique for constructing curved surfaces based on inscribed polytopes, resulting in a convex surface consisting of spherical patches. 
To tackle the approximation problem for non-convex polytopes we pick up the idea of the incremental approximation algorithm introduced in chapter 4. This raises the problem of finding a simple and topologically correct start polytope, the seed polytope, for non-convex polytopes. 
In chapter 5 we describe how to construct for a surface in 3D space, given by sample points S, a coarse approximating polytope P that uses a subset of the points as vertices and preserves the topology. In contrast to surface reconstruction we do not use all sample points, but we try to use as few points as possible. 
We also give a short introduction on how to use the results on seed polytope generation for surface reconstruction.
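For intuition on arc approximation, the following greedy Python sketch covers a polyline by circular arcs, measuring the error radially to the supporting circle. It is a hypothetical simplification: unlike the algorithms of chapters 1 and 2 it neither minimizes the number of arcs nor uses the true arc distance.

```python
# Hedged greedy arc cover of a polyline (list of (x, y) vertices).
import math

def circumcircle(p, q, r):
    """Circle through three points, or None if they are nearly collinear."""
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def greedy_arc_cover(pts, eps):
    arcs, i, n = [], 0, len(pts)
    while i < n - 1:
        j = i + 1
        while j + 1 <= n - 1:
            cand = circumcircle(pts[i], pts[(i + j + 1) // 2], pts[j + 1])
            if cand is None:
                break                       # collinear sample: stop extending
            (cx, cy), rad = cand
            if all(abs(math.hypot(x - cx, y - cy) - rad) <= eps
                   for x, y in pts[i:j + 2]):
                j += 1
            else:
                break
        arcs.append((i, j))                 # arc spans vertices i..j
        i = j
    return arcs
```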

Scharf, Ludmila: Probabilistic Matching of Planar Shapes

Abschluss
PhD
Abgabedatum
05.06.2009
Homepage des Autors

In this thesis we study a probabilistic approach for the shape matching problem. The studied approach is based on an intuitive definition of the shape matching task: Given two shapes A and B find that transformation within the class of allowable transformations which maps B to A in a best possible way. A mapping is considered to be good if large parts of the two shapes coincide within some tolerance distance delta. 

We assume that the shapes are modeled by finite sets of rectifiable curves in the plane. As possible classes of transformations we consider sub-classes of affine transformations: translations, rigid motions (translations and rotations), similarity maps (translation, rotation, and scaling), homotheties (translation and scaling), shear transformations, and affine maps. 

The major idea of the probabilistic algorithm is to take random samples of points from both shapes and give a "vote" for the transformation matching one sample with the other. If that experiment is repeated frequently, we obtain from the votes a certain probability distribution in the space of transformations. Maxima of this distribution indicate which transformations give the best match between the two figures. The matching step of the algorithm is, therefore, a voting scheme. 
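As a concrete illustration of a single vote, the Python sketch below computes the rigid motion determined by a random two-point sample from each shape, matching the first sample points and the direction between the pairs; the names and the sampling rule are illustrative, not the exact scheme analyzed in the thesis.

```python
# Hedged sketch: one vote in transformation space for rigid motions.
import math, random

def rigid_motion_vote(points_a, points_b):
    """Return (angle, tx, ty) of the rigid motion taking the A-sample
    to the B-sample."""
    a1, a2 = random.sample(points_a, 2)
    b1, b2 = random.sample(points_b, 2)
    theta = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
             - math.atan2(a2[1] - a1[1], a2[0] - a1[0]))
    c, s = math.cos(theta), math.sin(theta)
    tx = b1[0] - (c * a1[0] - s * a1[1])    # rotate a1, then translate to b1
    ty = b1[1] - (s * a1[0] + c * a1[1])
    return theta, tx, ty
```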

In this thesis we analyze the similarity measure underlying the algorithm and give rigorous bounds on the runtime (number of experiments) necessary to obtain the optimal match within a certain approximation factor with a prespecified probability. We perform a generic analysis of the algorithm for arbitrary transformation classes, as well as an in-depth analysis for different sub-classes of affine transformations. It is also shown that the density function of the vote distribution is exactly the normalized generalized Radon transform in the cases of translations and rigid motions. 

We consider the theoretical analysis as the major contribution of this thesis, since it leads to a better understanding of this kind of heuristic techniques.

Dimitrov, Darko: Geometric Applications of Principal Component Analysis

Abschluss
PhD
Abgabedatum
08.12.2008
Homepage des Autors

Bounding boxes are used in many applications for simplification of point sets or complex shapes. For example, in computer graphics, bounding boxes are used to maintain hierarchical data structures for fast rendering of a scene or for collision detection. Additional applications include those in shape analysis and shape simplification, or in statistics, for storing and performing range-search queries on a large database of samples. 

A frequently used heuristic for computing a bounding box of a set of points is based on principal component analysis. The principal components of the point set define the axes of the bounding box. Once the axis directions are given, the extents of the bounding box are easily found from the extreme values of the projections of the points onto the corresponding axes. Computing a PCA bounding box of a discrete point set in $\mathbb{R}^d$ takes time linear in the number of points. The popularity of this heuristic, besides its speed, lies in its easy implementation and in the fact that PCA bounding boxes are usually tight-fitting.
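The heuristic is short enough to state as code. The following numpy sketch (illustrative; it assumes a point set with at least two distinct points) computes the principal axes, projects, and reads off the extents.

```python
# Hedged sketch of the PCA bounding-box heuristic.
import numpy as np

def pca_bounding_box(points):
    X = np.asarray(points, dtype=float)
    centered = X - X.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, axes = np.linalg.eigh(cov)      # columns: orthonormal principal axes
    proj = centered @ axes             # coordinates in the PCA frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return axes, lo, hi, float(np.prod(hi - lo))   # axes, extents, volume
```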

In this thesis we investigate the quality of the PCA bounding boxes. We give bounds on the worst case ratio of the volume of the PCA bounding box and the volume of the minimum volume bounding box. We present examples of point sets in the plane, where the worst case ratio tends to infinity. In these examples some dense point clusters have a big influence on the directions of the principal components, and the resulting PCA bounding boxes have much larger volumes than the minimal ones. To avoid the influence of such non-uniform distributions of the point sets, we consider PCA bounding boxes for continuous sets, especially for the convex hulls of point sets, obtaining several variants of continuous PCA. For those variants, we give lower bounds in arbitrary dimension, and upper bounds in $\mathbb{R}^2$ and $\mathbb{R}^3$. To obtain the lower bounds, we exploit a relation between the perfect reflective symmetry and the principal components of point sets. Each of the upper bounds in $\mathbb{R}^2$ and $\mathbb{R}^3$ is obtained from two parameterized bounds. The first bound is general for all bounding boxes, while to obtain the second bound, we exploit some of the properties of PCA, combining them with ideas from discrete geometry and integral calculus.

The relation between perfect reflective symmetry and the principal components of point sets leads to a straightforward algorithm for computing the planes of symmetry of perfectly and approximately reflective-symmetric point sets. For the same purpose, we present an algorithm based on geometric hashing.

Denner-Broser, Britta: Complex Tracing

Abschluss
PhD
Abgabedatum
01.12.2008
Homepage des Autors

Dynamic Geometry is the field of interactively doing geometric constructions
using a computer. Usually, the classical ruler-and-compass constructions are
considered. The available tools are simulated by the computer. A Dynamic Geometry
System is a system to do geometric constructions that has a drag mode.
In the drag mode, geometric elements with at least one degree of freedom can
be moved, and the remaining part of the geometric construction adjusts automatically.
Thus, the computer has to trace the paths of the involved geometric
objects during the motion.

In this thesis, we focus on the beautiful model by Kortenkamp and Richter-Gebert
that is the foundation of the geometry software Cinderella. We embed an algebraic
variant of this model into different fields of pure and applied mathematics,
which leads to different approaches for realizing the drag mode practically. We
develop a numerical method to solve the Tracing Problem that is based on a
generic Predictor-Corrector method. Like most numerical methods, this method
cannot guarantee the correctness of the computed solution curve, hence ambiguities
are not treated satisfactorily. To overcome this problem, we develop
a second algorithm that uses interval analysis. This algorithm is robust, and
the computed step length is small enough to break up all ambiguities. Critical
points are bypassed by detours, where the geometric objects or the corresponding
variables in the algebraic model can have complex coordinates. Here, the final
configuration depends essentially on the chosen detour, but this procedure due to
Kortenkamp and Richter-Gebert leads to a consistent treatment of degeneracies.
We investigate the connection of the used model for Dynamic Geometry to Riemann
surfaces of algebraic functions.
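The flavor of such a predictor-corrector tracer can be conveyed by a toy example: following the solution curve of f(x, y) = x^2 + y^2 - 1 = 0 with a tangent predictor and a few Newton corrector steps. This sketch is purely illustrative; it omits the step-length control and the complex detours discussed above.

```python
# Hedged toy path tracer: tangent predictor + Newton corrector on a circle.
def trace_circle(x, y, steps=100, h=0.05):
    path = [(x, y)]
    for _ in range(steps):
        gx, gy = 2 * x, 2 * y                  # gradient of f
        norm = (gx * gx + gy * gy) ** 0.5
        x, y = x - h * gy / norm, y + h * gx / norm   # predictor: tangent step
        for _ in range(3):                     # corrector: Newton projection
            f = x * x + y * y - 1
            gx, gy = 2 * x, 2 * y
            denom = gx * gx + gy * gy
            x, y = x - f * gx / denom, y - f * gy / denom
        path.append((x, y))
    return path
```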

Lenz, Tobias: Simple Reconstruction of Non-Simple Curves and Approximating the Median in Streams with Constant Storage

Abschluss
PhD
Abgabedatum
28.10.2008
Homepage des Autors

This work generalizes the ideas in the Nearest-Neighbor-Crust algorithm by Dey and Kumar. It makes it possible to reconstruct smooth, closed curves from ε-samples with ε ≤ 0.48, a big improvement over the original bound. Further generalization leads to a new algorithm which reconstructs closed curves with self-intersections. The algorithm is very simple and short and works well in practice. A special ε-sampling condition is given which guarantees correct results. The described method works for curves in any dimension d in O(n^(2−1/d)) time.
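For reference, the basic Nearest-Neighbor-Crust scheme can be sketched in a few lines for points in the plane: connect every sample to its nearest neighbor, then give every vertex that is still missing an edge a connection to its nearest "half neighbor" on the opposite side. The sketch below follows that published two-step scheme and omits the generalizations developed in this work.

```python
# Hedged sketch of the two-step NN-crust scheme (planar samples as tuples).
def nn_crust(samples):
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    edges, nearest = set(), {}
    for p in samples:                       # step 1: nearest-neighbor edges
        q = min((s for s in samples if s != p), key=lambda s: d2(p, s))
        nearest[p] = q
        edges.add(frozenset((p, q)))
    for p in samples:                       # step 2: half-neighbor edges
        if sum(1 for e in edges if p in e) >= 2:
            continue
        u = nearest[p]
        cands = [s for s in samples if s != p and
                 (s[0]-p[0])*(u[0]-p[0]) + (s[1]-p[1])*(u[1]-p[1]) < 0]
        if cands:                           # nearest point at an obtuse angle
            edges.add(frozenset((p, min(cands, key=lambda s: d2(p, s)))))
    return edges
```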
---
In this part the well-known problem of finding the median of an ordered set is studied under a very restrictive streaming model with sequential read-only access to the data. Only a constant number of reference objects from the stream can be stored for comparison with subsequent stream elements. A first non-trivial bound of Omega(√n) distance to the extrema of the set is presented for a single pass over streams which do not reveal their total size n. This result is extended to an algorithm which guarantees a distance of Omega(n^(1−ε)) to the extrema. Additional results about upper bounds, multi-pass algorithms, and arbitrary quantiles are presented.
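For contrast with these bounds, here is one extremely simple single-pass estimator in a similar spirit; it stores a single numeric estimate and nudges it by a fixed step, so it slightly relaxes the comparison-only model and is not one of the algorithms analyzed here.

```python
# Hedged sketch: constant-storage median estimate by stepwise pursuit.
def approximate_median(stream, step=1.0):
    estimate = None
    for x in stream:
        if estimate is None:
            estimate = x                   # first element initializes
        elif x > estimate:
            estimate += step               # nudged up by elements above
        elif x < estimate:
            estimate -= step               # nudged down by elements below
    return estimate
```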

Schulz, André: Lifting Planar Graphs to Realize Integral 3-Polytopes and Topics in Pseudo-Triangulations

Abschluss
PhD
Abgabedatum
05.06.2008
Homepage des Autors

Lifting planar embeddings with an equilibrium stress is a well-known method that dates back to the 19th century. We discuss the known theory about liftings and develop a framework that allows us to apply the lifting technique easily. In this thesis we apply the lifting method to different geometric problems.

As a first application we show how to embed 3-polytopes with small integer coordinates. Our method improves the upper bound for the size of the largest coordinate from O(2^{18n^2}) to O(2^{7.55n}). A new generalized version of Tutte's
spring embedding ensures that a planar 2d-embedding contains an
equilibrium stress and is therefore liftable. We point out 
connections between the size of the integral embedding and the number of
maximal spanning forests a planar graph can have.
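The classical special case being generalized here is Tutte's spring embedding: pin the vertices of the outer face to a convex polygon and place every interior vertex at the barycenter of its neighbors, which amounts to one linear solve. A hedged numpy sketch, assuming a connected graph and a prescribed convex outer face:

```python
# Hedged sketch of the classical Tutte spring embedding.
import numpy as np

def tutte_embedding(n, edges, outer):
    """outer: dict vertex -> fixed (x, y) position on a convex polygon."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    inner = [v for v in range(n) if v not in outer]
    index = {v: i for i, v in enumerate(inner)}
    A = np.zeros((len(inner), len(inner)))
    b = np.zeros((len(inner), 2))
    for v in inner:                 # deg(v) * p_v - sum of neighbor positions = 0
        i = index[v]
        A[i, i] = len(adj[v])
        for w in adj[v]:
            if w in outer:
                b[i] += outer[w]
            else:
                A[i, index[w]] -= 1
    pos = np.linalg.solve(A, b)
    return {**outer, **{v: tuple(pos[index[v]]) for v in inner}}
```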

The second field of applications for the lifting technique are topics
about pseudo-triangulations. 
Our main observation shows how to model regular 
triangulations as linear programs over the polytope of pointed
pseudo-triangulations. 
We introduce an equivalent of the Delaunay triangulation for pointed
pseudo-triangulations of simple polygons. 
Our approach is motivated by the paraboloid lifting of the Delaunay
triangulation and the generalization of linear programs that
compute the Delaunay triangulation in special cases.
We also investigate the so-called canonical pointed pseudo-triangulation and study some of its
geometric properties. Our observations lead to a new characterization
of pointed pseudo-triangulations as embeddings of minimal rigid graphs
that can balance a given load with positive edge weights. 

The thesis also contains results on pseudo-triangulation problems that
were not obtained with the help of liftings. We show that a sequence of
super-polynomially many convexifying flips exists that transforms a lifted
pseudo-triangulation into a maximal locally convex surface. This is
obtained by constructing a simple polygon that
realizes an improving flip sequence of length n^{\Theta(\log n)} between two of its
pointed pseudo-triangulations.
Furthermore we show that (1) it is NP-hard to decide if a graph
contains a pseudo-triangulation and (2) it is NP-hard to decide if a
graph can be extended to a pseudo-triangulation with small vertex degree. Both
decision problems are studied in different incarnations.
We obtain a new and easier NP-completeness proof of the triangulation 
existence problem, one of the classic NP-complete triangulation problems.

Klein, Oliver: Matching Shapes with a Reference Point

Abschluss
PhD
Abgabedatum
21.04.2008
Homepage des Autors

Shape matching is an important topic in computational geometry, 
computer vision, image retrieval, object recognition and robotics. For a fixed 
distance measure D and a class of transformations T we can describe the 
problem as follows: Given two shapes A and B in R^d, find a 
transformation T^* in T, such that the distance between A and T^*(B) is 
minimal among all transformations in T. Usually finding such a transformation 
is computationally expensive, if at all possible. Thus we concentrate on 
approximation algorithms. We follow an approach by Alt, Behrends and Blömer, 
and Alt, Aichholzer and Rote. They use mappings called reference points to fix 
the relative position between the two sets. A reference point is a 
Lipschitz continuous mapping from the set of shapes into R^d which is 
equivariant under the considered class of transformations.
This approach reduces the degrees of freedom of the underlying problem
by the dimension d.

In this thesis we study approximation algorithms for shape matching
with respect to various metrics, e.g., the Hausdorff distance, the
Earth Mover's Distance, the Monge-Kantorovich Distance and the
bottleneck distance. We investigate translations, rigid motions,
i.e., combinations of translations and rotations, and similarities,
i.e., combinations of rigid motions and scalings.

The basic structure of the approximation algorithms is the same for all
metrics and we describe our approach in an abstract reference point
framework. We first determine the relative position of the two shapes
to each other by computing their reference points. We then translate
the shapes such that the reference points coincide. Next we
determine a rotation for one of the shapes such that the distance of
the two shapes is at most a constant times
their optimal distance. Similarities can always
be dealt with by finding an approximate scaling before finding the
rotation.
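The translation part of this framework is particularly simple. The sketch below uses the centroid as a stand-in reference point to fix the relative position of two point sets; the thesis works with genuinely Lipschitz-continuous reference points such as the Steiner point, so this only illustrates the mechanics.

```python
# Hedged sketch: align two point sets by making reference points coincide.
import numpy as np

def match_by_reference_point(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    t = A.mean(axis=0) - B.mean(axis=0)   # centroid as stand-in reference point
    return B + t, t                       # translated B and the translation
```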

Buchin, Kevin: Organizing Point Sets: Space-Filling Curves, Delaunay Tessellations of Random Point Sets, and Flow Complexes

Abschluss
PhD
Abgabedatum
19.12.2007
Homepage des Autors

In this thesis we develop and analyze algorithms for computing space-filling curve orders, Delaunay tessellations and flow complexes of point sets. For space-filling curve orders and Delaunay tessellations the emphasis lies on an average-case analysis of the algorithms; for flow complexes the emphasis lies on their computation in higher dimensions.

In a space-filling curve order of a point set, points which are close in the order are also close in space. We discuss algorithms for computing space-filling curve orders based on radix sort. We give an average-case analysis which shows that these orders can be computed in linear expected time for many point distributions. As discrete counterparts of space-filling curves we consider grid traversals and discuss finding optimal grid traversals for different locality measures using heuristics for the quadratic assignment problem.

The Delaunay tessellation of a point set is a simplicial complex capturing proximity relations of the points. We analyze incremental constructions of Delaunay tessellations along space-filling curve orders. First we give a generalized and refined analysis of incremental constructions con BRIO, i.e., constructions where the points are inserted in random rounds. Based on this we analyze incremental constructions along space-filling curve orders for uniformly distributed points from a bounded convex region in the plane, normally distributed points in the plane, and uniformly distributed points from a d-cube in higher dimensions. In the first case we analyze the expected structure of the Delaunay tessellation and in the other cases the structure of the space-filling curve order. We show for these point distributions that incremental constructions con BRIO of Delaunay tessellations run in linear expected time using space-filling curve orders.

The flow complex of a point set is the collection of stable manifolds of the flow induced by the distance function of the point set. We give an algorithm for computing the flow complex in higher dimensions. The algorithm is based on the Delaunay tessellation and Voronoi diagram of the point set and the recursive nature of the flow. Based on this algorithm we give a topological analysis of flow shapes, i.e., the underlying spaces of subcomplexes of the flow complex. In particular we show that flow shapes are homotopy equivalent to the corresponding unions of balls.
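A common concrete instance of such an order is the Z-order (Morton order), obtained by sorting points by the bit-interleaving of their quantized coordinates, in the spirit of the radix-style computation mentioned above. A hedged sketch for points in the plane:

```python
# Hedged sketch: sort planar points along a Z-order space-filling curve.
def morton_key(x, y, bits=16):
    """Interleave the bits of two non-negative integer coordinates."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def zorder_sort(points, bits=16):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    def quant(v, lo, hi):                  # map to the integer grid [0, 2^bits)
        return min(int((v - lo) / (hi - lo + 1e-12) * (2 ** bits - 1)),
                   2 ** bits - 1)
    return sorted(points, key=lambda p: morton_key(quant(p[0], x0, x1),
                                                   quant(p[1], y0, y1), bits))
```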

Buchin, Maike: On the computability of the Frechet distance between triangulated surfaces

Abschluss
PhD
Abgabedatum
23.04.2007
Homepage des Autors

The Fréchet distance is a metric for parameterized curves and surfaces. It is used in shape matching for measuring the similarity of geometric shapes. For polygonal curves, it can be computed in polynomial time. For triangulated surfaces, deciding whether the Fréchet distance between two surfaces is less than or equal to a given threshold is NP-hard. It is not known whether the Fréchet distance between triangulated surfaces is computable. In this thesis, we study the computability of the Fréchet distance between triangulated surfaces. We give three partial answers to the question whether it is computable. For triangulated surfaces, we show that the Fréchet distance is semi-computable, a weaker notion of computability. For a variant of the Fréchet distance, the weak Fréchet distance, we show that it is polynomial-time computable for triangulated surfaces. For a restricted class of surfaces, simple polygons, we show that the Fréchet distance is polynomial-time computable. Finally, we study a related question, the definition of a summed or average Fréchet distance between curves. We show that none of several intuitive definitions fulfills the triangle inequality.
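For curves, a simple computable relative is the discrete Fréchet distance between the vertex sequences, obtainable by dynamic programming in the style of Eiter and Mannila. The sketch below is only illustrative: it is neither the continuous distance nor the surface version studied in this thesis.

```python
# Hedged sketch: discrete Fréchet distance of two short polygonal curves
# given as tuples of points, via memoized recursion.
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)
```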

Ribó Mor, Ares: Realization and Counting Problems for Planar Structures: Trees and Linkages, Polytopes and Polyominoes

Abschluss
PhD
Abgabedatum
15.02.2006
Homepage des Autors

Part I of my thesis is about planar linkages. We consider motions of linkages that avoid crossings of bars. We study problems related to self-touching frameworks, in which multiple edges converge to geometrically overlapping configurations. Chapter 2 is about the unfoldability of trees. We show that every monotone tree is unfoldable. A δ-perturbation of a self-touching configuration is a repositioning of the vertices within disks of radius δ that is consistent with the combinatorial embedding in R^2. In Chapter 3 we prove that every self-touching configuration can be perturbed within δ. The classical Maxwell-Cremona Theorem is a powerful tool that establishes a bijection between the set of classical equilibrium stresses of a planar configuration and the set of three-dimensional polyhedral terrains that project onto it. In Chapter 4 we present a generalization of the Maxwell-Cremona Correspondence for self-touching configurations and generalized polyhedral terrains.

Part II is about the number of spanning trees of a planar graph with applications to the embedding of polytopes on small integer grids using the Maxwell-Cremona lifting. In Chapter 5 we give lower and upper bounds for the maximum number of spanning trees. We present a new method based on transfer matrices for computing the asymptotic number of spanning trees of recursively constructible families of graphs. We discuss several techniques for obtaining upper bounds. Apart from the general case, we study the particular cases when the graph has smallest face cycle 4 and 5, for which the best results are obtained using a probabilistic method. These results are used in Chapter 6 for obtaining improved bounds on the minimum size of the integral grid in which all combinatorial types of 3-polytopes can be embedded.

In Part III we analyze, using numerical methods, the growth in the number of polyominoes on a twisted cylinder as the number of cells increases. These polyominoes are related to classical polyominoes (connected subsets of a square grid) that lie in the plane. We thus obtain improved lower bounds on the growth rate of the number of these polyominoes, which is also known as Klarner's constant.

Wenk, Carola: Geometrische Mustererkennung in höheren Dimensionen

Abschluss
PhD
Abgabedatum
16.12.2002
Homepage des Autors

The comparison of geometric shapes is a task which naturally arises in many applications, such as in computer vision, computer aided design, robotics, medical imaging, etc. Usually geometric shapes are represented by a number of simple objects (sites) that either describe the boundary of the shape, or the whole shape itself. Sites are often chosen to be linear objects, such as line segments, triangles, or simplices in general, since linear objects are easier to handle in algorithms. But sometimes patches of algebraic curves or surfaces, such as circular arcs or portions of spheres or cylinders, are also of interest. In order to compare two shapes we need to have a notion of similarity or dissimilarity, which arises from the desired application. There is a large variety of different similarity measures. Popular similarity notions are, for example, the Hausdorff distance, the area of symmetric difference, or, especially for curves, the turn-angle distance or the Fréchet distance. The application usually supplies a distance measure, and furthermore a set of allowed transformations, and the task is to find a transformation that, when applied to the first object, minimizes the distance to the second one. Typical transformation classes are translations, rotations, and rigid motions (which are combinations of translations and rotations).

The contribution of this thesis consists of several algorithms for matching simplicial shapes in dimensions d >= 2. The shapes are either represented as sets of simplicial objects or as polygonal curves with a given parametrization. The considered distance measures are mainly the Hausdorff distance and the Fréchet distance. In the literature most matching algorithms either attack two-dimensional problems, or consider finite point sets in higher dimensions. In the first half of this thesis we present results for the Hausdorff distance in d >= 2 dimensions under translations, for a rather general notion of simplicial shapes, as well as for some special shape classes which allow speeding up the computations. In the second half of this thesis we investigate the Fréchet distance for polygonal curves. The Fréchet distance is a natural distance measure for curves, but has not been investigated much in the literature. We present the first algorithms to optimize the Fréchet distance under various transformation classes for polygonal curves in arbitrary dimensions. In the last chapter we consider a partial matching variant in which a geometric graph and another curve are given, and we show how to find a polygonal path in the graph which minimizes the Fréchet distance to the curve.

Heinrich-Litan, Laura: Exact L_infinity Nearest Neighbor Search in High Dimensions

Abschluss
PhD
Abgabedatum
04.11.2002
Homepage des Autors

In this thesis we consider the nearest-neighbor problem, which is defined as follows: given a fixed set P of n data points in some metric space X, build a data structure such that for each query point q a data point from P closest to q can be found efficiently. The underlying metric space is usually the d-dimensional real space R^d together with one of the L_p-metrics, 1 <= p <= ∞. In many applications, the dimension d of the search space is quite high and can reach several hundreds or even several thousands. Therefore, running times and storage requirements exponential in d are prohibitive in these cases. Because of their exponential dependence on the dimension, all known techniques for the exact nearest-neighbor problem are in high dimensions in fact not competitive with the brute-force method, which simply determines the distance of q to each point in P and selects the minimum. 
This thesis presents algorithms for solving the high-dimensional exact nearest-neighbor problem with respect to the L∞-distance. We analyze the average-case situation when the data points are chosen independently at random under uniform distribution. The algorithms considerably improve on the brute-force method; they are simple and easy to implement. 
In Chapter 2 we consider query algorithms that need no preprocessing and require storage only for the point set P. Their average running time is O(n + nd/ln(n)). 
In Chapter 3 we present two strategies which speed up the search by using preprocessing. The query algorithm introduced in Section 3.1.2 requires linear storage and has an expected running time of O(n ln(d/ln(n) + 1) + n). The data structure developed in Section 3.2 is based on a preprocessed partition of the data set into sequences which are monotone with respect to some of the dimensions. The query algorithm has an expected running time of O(√d · n^(1−1/√d) · ln(n)) for dimensions d < (ln(n)/ln(ln(n)))^2. 
Chapter 4 presents several generalizations, in particular to the important problem of finding the k nearest neighbors to a query point. We generalize the analysis of the considered algorithms to other "well-behaved" probability distributions. Furthermore, we develop extensions of the algorithms which work efficiently in the external-memory model of computation. 
In Chapter 5 we present a method which provides tradeoffs between the space complexity of the data structure and the time complexity of the query algorithm.
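The flavor of beating plain brute force under the L∞-distance can be conveyed by coordinate-wise early termination: while scanning a candidate's coordinates, the running maximum only grows, so the scan may stop as soon as it reaches the best distance found so far. The sketch below is illustrative; the algorithms of this thesis refine such ideas and add preprocessing.

```python
# Hedged sketch: exact L-infinity nearest neighbor with early abort.
def nn_linf(data, q):
    best, best_dist = None, float("inf")
    for p in data:
        partial = 0.0
        for pc, qc in zip(p, q):
            partial = max(partial, abs(pc - qc))
            if partial >= best_dist:
                break                     # p cannot beat the current best
        else:
            best, best_dist = p, partial  # full scan finished: new best
    return best, best_dist
```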

Knauer, Christian: Algorithms for Comparing Geometric Patterns

Abschluss
PhD
Abgabedatum
02.05.2002
Homepage des Autors

The geometric pattern matching problem is the following: given two patterns P and Q from a set of admissible patterns Π and a distance measure δ for such patterns, find a transformation τ from a set of admissible transformations T such that the distance δ(τ(P),Q) becomes as small as possible. The dissertation presents efficient algorithms for several variants of this problem; the results can be grouped by the type of admissible patterns.

Point patterns in R^d: In the first part we consider point patterns; the figures are sets of m and n points in R^d, respectively. Congruence testing: We give an algorithm that decides in O(n^⌈d/3⌉ log n) time (m < n) whether there is a congruence that maps P onto Q (this can be seen as the special case of the pattern matching problem in which the distance measure is the trivial metric). Hausdorff distance: We describe the first non-trivial algorithm for computing the directed Hausdorff distance h(P,Q) from a set of points P to a set Q of semialgebraic sets of constant description complexity (this can be seen as the special case of the pattern matching problem in which the identity is the only admissible transformation); the running time is O_ε(m n^ε log m + m^(1+ε−1/(2d−2)) n).

Planar curves: The second part deals with pattern matching problems for polygonal curves in the plane; the figures are polygonal chains P and Q with m and n vertices, respectively, and as distance measure we consider the Fréchet distance F(P,Q) of P and Q. Matching under translations: We develop the first algorithm that solves the matching problem for polygonal chains with respect to the Fréchet distance under translations; the running time is O((mn)^3 (m+n)^2). We also give an O(ε^(−2) mn) approximation algorithm with approximation factor (1+ε) for this problem by generalizing reference point methods, and we show that no such reference points exist for affine maps. Hausdorff vs. Fréchet distance: We show that for a certain class of curves the Fréchet distance is linearly related to the Hausdorff distance. For curves of this kind we give an O((m+n) log^2(m+n) 2^(α(m+n))) approximation algorithm for computing F(P,Q). Finally, we describe the first non-trivial algorithm for recognizing such curves; its running time is O(n log^2 n).

Simple polyhedral surfaces in R^3: In the last part we consider patterns composed of sets P and Q of m and n disjoint triangles in R^3, respectively. Hausdorff distance: We develop an algorithm that computes H(P,Q), the Hausdorff distance between P and Q, in O_ε((mn)^(15/16+ε) (m^(17/16) + n^(17/16))) time.

Wolff, Alexander: Beschriftung von Landkarten

Betreuer
Abschluss
PhD
Abgabedatum
28.05.1999
Homepage des Autors

Wolfers, Barbara: Oberflächenrekonstruktion aus Querschnitten

Betreuer
Abschluss
PhD
Abgabedatum
29.11.1996
Homepage des Autors

Alberts, David: Dynamic Connectivity in Graphs: Theory and Practice

Betreuer
Abschluss
PhD
Abgabedatum
15.02.1996
Homepage des Autors

Referenzen
  1. D. Alberts. Implementation of the dynamic connectivity algorithm by Monika Rauch Henzinger and Valerie King. TR B 95-10, Freie Universität Berlin, Inst. f. Informatik, 1995.
  2. D. Alberts, G. Cattaneo, G. F. Italiano. An empirical study of dynamic graph algorithms. In Proc. 7th SODA, p. 192 - 201, 1996.
  3. D. Alberts, M. R. Henzinger. Average case analysis of dynamic graph algorithms. In Proc. 6th  SODA, p. 312 - 321, 1995. (Technical Report)
  4. D. Eppstein, Z. Galil, G. F. Italiano, A. Nissenzweig. Sparsification - a technique for speeding up dynamic graph algorithms. In Proc. 33rd FOCS, p. 60 - 69, 1992.
  5. G. N. Frederickson. Data structures for on-line updating of minimum spanning trees, with applications. SIAM J. Comput., 14:781 - 798, 1985.
  6. M. R. Henzinger, V. King. Randomized dynamic graph algorithms with polylogarithmic time per operation. In Proc. 27th STOC, p. 519 - 527, 1995.

Gärtner, Bernd: Randomized Optimization by Simplex-Type Methods

Betreuer
Abschluss
PhD
Abgabedatum
01.12.1995
Homepage des Autors

The best-known optimization algorithm for the best-known optimization problem is the simplex algorithm for linear programming.

(LP) Maximize a linear function in d variables, subject to n linear inequalities that the variables must satisfy.

For computational geometry, problems in small or even constant dimension (d = 2, 3) are of particular interest, so one mainly studies the running time of methods as a function of n. For the simplex algorithm (in a particular variant) there have been very interesting developments since 1992. It was shown that it has optimal running time O(n) (albeit with an exponential dependence on d). More importantly, it turned out that a single, unified simplex-type algorithm with the same time bounds can also solve many other problems for which linear-time solutions were partly unknown or only realizable in a complicated way. Two examples are:

(MINIBALL) Find the smallest ball that contains n given points in d-dimensional space.

(POLYDIST) Given two polytopes with a total of n vertices in d-dimensional space, find the shortest distance between the polytopes.

In addition, the dependence on d could be improved: it is now 'only' exponential in sqrt(d).

The dissertation deals with several aspects of this development.

  1. Which abstract classes of problems can be solved in a simplex-like manner? It turns out that surprisingly few properties are needed to obtain efficient methods.
  2. What role does randomization play? The O(n) method mentioned above uses coin flips. As a consequence, it cannot be 'tricked' into very bad, exponential running time in d, as is possible for many deterministic methods.
  3. Can randomized methods perhaps be tricked in a weaker sense after all? In other words, are there inputs on which the algorithms are not polynomial in n and d? This is not known, and the analysis is considerably harder than in the deterministic case. This question is one of the hot open problems in this research area.
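The (MINIBALL) problem illustrates the randomized, simplex-type flavor well: Welzl's classical randomized algorithm computes the smallest enclosing circle in expected linear time. A hedged Python sketch for the planar case, assuming points in general position (degenerate collinear boundary triples are not handled):

```python
# Hedged sketch of Welzl's randomized smallest-enclosing-circle algorithm.
import math, random

def _circle_from(boundary):
    if not boundary:
        return (0.0, 0.0), -1.0
    if len(boundary) == 1:
        return boundary[0], 0.0
    if len(boundary) == 2:
        (ax, ay), (bx, by) = boundary
        c = ((ax + bx) / 2, (ay + by) / 2)
        return c, math.dist(c, boundary[0])
    (ax, ay), (bx, by), (cx, cy) = boundary     # circumcircle of three points
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), boundary[0])

def mini_ball(points):
    pts = list(points)
    random.shuffle(pts)                   # the randomization drives the analysis
    def mb(i, boundary):
        if i == len(pts) or len(boundary) == 3:
            return _circle_from(boundary)
        c, r = mb(i + 1, boundary)
        if math.dist(c, pts[i]) <= r + 1e-12:
            return c, r                   # pts[i] is already enclosed
        return mb(i + 1, boundary + [pts[i]])   # pts[i] must lie on the boundary
    return mb(0, [])
```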

Wernisch, Lorenz: Dominanzprobleme für Punktmengen

Abschluss
PhD
Abgabedatum
01.06.1994
Homepage des Autors

Valtr, Pavel: Planar point sets with bounded ratios of distances

Betreuer
Abschluss
PhD
Abgabedatum
01.01.1994
Homepage des Autors

Blömer, Johannes: Simplifying Expressions Involving Radicals

Abschluss
PhD
Abgabedatum
01.07.1993
Homepage des Autors