In a nutshell
I was born in Mexico City, where I grew up and lived until the age of 25. I practiced rowing for about 9 years, about 5 of which were at an international competitive level for Mexico's national team. During that time I started studying Physics at the Faculty of Sciences at UNAM, but later decided to switch to Computer Science. My bachelor's thesis was supervised by Prof. Dr. Jorge Urrutia. I obtained a master's degree in Computer Science from Saarland University in Saarbrücken, Germany, where my thesis was supervised by Prof. Dr. Raimund Seidel, who then remained my Ph.D. supervisor.
I am currently a Senior Software Engineer at Siemens EDA. Previously, I was a Senior Software Engineer at Helsing GmbH in Munich (until 02/2024), and a Principal Research Engineer at Huawei Technologies, also in Munich (until 12/2021). Before that I was a Research Associate at the Algorithms Group headed by Prof. Dr. Sándor Fekete at the Technische Universität Braunschweig (2015-2017), and at the Information Systems Group headed by Prof. Dr. Jens Dittrich at Saarland University (2013-2015).
You might have landed here for academic reasons; in that case, are you sure I'm the Victor Alvarez you're looking for? Otherwise, you might be interested in horseback riding lessons, or even in a comic.
If you find a broken link on this website, or elements that are not rendered properly, I would greatly appreciate it if you let me know. Thanks!
In academia
Education
- 2007-2012: Ph.D. in Computer Science (Dr.-Ing.) at Saarland University, under the supervision of Prof. Dr. Raimund Seidel.
- 2006-2007: Master in Computer Science (M.Sc.) at Saarland University and the International Max Planck Research School for Computer Science (IMPRS), under the supervision of Prof. Dr. Raimund Seidel.
- 2001-2005: Bachelor of Computer Science (B.Sc.) at the National Autonomous University of Mexico (UNAM), under the supervision of Prof. Dr. Jorge Urrutia.
Research interests
My interests lie in the design of algorithms and data structures that are, whenever possible, relevant both in theory and in practice. This kind of work sits at the fascinating intersection of theoretical and practical Computer Science.
Thus far, my work has fallen into the following areas: Combinatorial and Algorithmic Geometry, Algorithm Engineering, Combinatorics, Data Structures, Parameterized Complexity, Databases, and (NUMA-aware) Parallel Algorithms. In general, I am highly interested in areas with an algorithmic flavor.
Publications
Authors of entries marked with an '*' are ordered by contribution. Otherwise authors are ordered alphabetically.
- Conferences:
[C14] Victor Alvarez, Sándor P. Fekete, Arne Schmidt. Computing Triangulations with Minimum Stabbing Number. In Proc. of the European Workshop on Computational Geometry (EuroCG '17). Malmö, Sweden, 2017.
Abstract: For a given point set $P$ or a polygon $\mathcal{P}$, we consider the problem of finding a triangulation $T$ with minimum stabbing number, i.e., a triangulation such that the maximal number of segments hit by any ray going through $T$ is minimized. We prove that this problem is NP-hard; this differs from the problem of triangulating a polygon with minimum edge weight, which is solvable in polynomial time with a simple dynamic program. In an experimental part we test various heuristics.

Abstract: A conflict-free $k$-coloring of a graph assigns one of $k$ different colors to some of the vertices such that, for every vertex $v$, there is a color that is assigned to exactly one vertex among $v$ and $v$'s neighbors. Such colorings have applications in wireless networking, robotics, and geometry, and are well-studied in graph theory. Here we study the natural problem of the conflict-free chromatic number $\chi_{CF}(G)$ (the smallest $k$ for which conflict-free $k$-colorings exist), with a focus on planar graphs. For general graphs, we provide a sufficient condition for the existence of a conflict-free coloring, corresponding to a conflict-free variant of the Hadwiger Conjecture: if $G$ does not contain $K_{k+1}$ as a minor, then $\chi_{CF}(G) \leq k$. For planar graphs, we obtain a tight worst-case bound: three colors are sometimes necessary and always sufficient. In addition, we give a complete characterization of the algorithmic/computational complexity of conflict-free coloring. It is NP-complete to decide whether a planar graph has a conflict-free coloring with one color, while for outerplanar graphs, this can be decided in polynomial time. Furthermore, it is NP-complete to decide whether a planar graph has a conflict-free coloring with two colors, while for outerplanar graphs, two colors always suffice. For the bicriteria problem of minimizing the number of colored vertices subject to a given bound $k$ on the number of colors, we give a full algorithmic characterization in terms of complexity and approximation for outerplanar and planar graphs.

Abstract: Upper and lower bounds for the number of geometric graphs of specific types on a given set of points in the plane have been intensively studied in recent years. For most classes of geometric graphs it is now known that point sets in convex position minimize their number. However, it is still unclear which point sets minimize the number of geometric triangulations; the so-called double circles are conjectured to be the minimizing sets. In this paper we prove that any set of $n$ points in general position in the plane has at least $\Omega(2.631^n)$ geometric triangulations. Our result improves the previously best general lower bound of $\Omega(2.43^n)$ and also covers the previously best lower bound of $\Omega(2.63^n)$ for a fixed number of extreme points. We achieve our bound by showing and combining several new results, which are of independent interest:
- Adding a point on the second convex layer of a given point set (of 7 or more points) at least doubles the number of triangulations.
- Generalized configurations of points that minimize the number of triangulations have at most $\lfloor n/2 \rfloor$ points on their convex hull.
- We provide tight lower bounds for the number of triangulations of point sets with up to 15 points. These bounds further support the double circle conjecture.
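As a side note for readers less familiar with this area: for points in convex position (the extreme case discussed in the abstract above), the number of triangulations is exactly a Catalan number and already grows exponentially. The short Python sketch below computes that count; it is my own toy illustration for context and has nothing to do with the paper's lower-bound construction.

```python
# Toy illustration (not from the paper): the number of triangulations of a
# convex polygon on n vertices is the Catalan number C_{n-2}, which already
# grows exponentially (roughly 4^n up to polynomial factors).
from functools import lru_cache

@lru_cache(maxsize=None)
def convex_triangulations(n: int) -> int:
    """Number of triangulations of a convex polygon with n vertices."""
    if n < 3:
        return 1  # an edge (or a single point) admits exactly one, empty, triangulation
    # Fix the edge (0, n-1); the triangle on top of it uses some vertex k,
    # splitting the polygon into two smaller convex polygons.
    return sum(convex_triangulations(k + 1) * convex_triangulations(n - k)
               for k in range(1, n - 1))

if __name__ == "__main__":
    for n in range(3, 16):
        print(n, convex_triangulations(n))  # 1, 2, 5, 14, 42, ...
```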
Abstract: Hashing is a solved problem. It allows us to get constant time access for lookups. Hashing is also simple. It is safe to use an arbitrary method as a black box and expect good performance, and optimizations to hashing can only improve it by a negligible delta. Why are all of the previous statements plain wrong? That is what this paper is about. In this paper we thoroughly study hashing for integer keys and carefully analyze the most common hashing methods in a five-dimensional requirements space: (1) data-distribution, (2) load factor, (3) dataset size, (4) read/write-ratio, and (5) un/successful-ratio. Each point in that design space may potentially suggest a different hashing scheme, and additionally also a different hash function. We show that a right or wrong decision in picking the right hashing scheme and hash function combination may lead to a significant difference in performance. To substantiate this claim, we carefully analyze two additional dimensions: (6) five representative hashing schemes (which include an improved variant of Robin Hood hashing), (7) four important classes of hash functions widely used today. That is, we consider 20 different combinations in total. Finally, we also provide a glimpse about the effect of table memory layout and the use of SIMD instructions. Our study clearly indicates that picking the right combination may have considerable impact on insert and lookup performance, as well as memory footprint. A major conclusion of our work is that hashing should be considered a white box before blindly using it in applications, such as query processing. Finally, we also provide a strong guideline about when to use which hashing method.

Abstract: With prices of main memory constantly decreasing, people nowadays are more interested in performing their computations in main memory, and leave high I/O costs of traditional disk-based systems out of the equation. This change of paradigm, however, represents new challenges to the way data should be stored and indexed in main memory in order to be processed efficiently. Traditional data structures, like the venerable B-tree, were designed to work on disk-based systems, but they are no longer the way to go on main memory, at least not in their original form, due to the poor cache utilization of the systems they run on. Because of this, in particular, during the last decade there has been a considerable amount of research on index data structures for main-memory systems. Among the most recent and more interesting data structures for main-memory systems there is the recently-proposed adaptive radix tree ARTful (ART for short). The authors of ART presented experiments that indicate that ART was clearly a better choice over other interesting tree-based data structures like FAST and B+-trees. However, ART was not the first adaptive radix tree, the first being Judy Arrays (Judy for short), to the best of our knowledge, and a comparison between ART and Judy was not offered. Moreover, the same set of experiments indicated that only a hash table was competitive to ART. The hash table used by the authors of ART in their study was a chained hash table, but this kind of hash table can be suboptimal in terms of space and performance due to its potentially high use of pointers. In this paper we present a thorough experimental comparison between ART, Judy, two variants of hashing via quadratic probing, and three variants of Cuckoo hashing. These hashing schemes are known to be very efficient. For our study we consider whether the data structures are to be used as a non-covering index (relying on an additional store), or as a covering index (covering key-value pairs). We consider both OLAP and OLTP scenarios. Our experiments strongly indicate that, if well-engineered, neither ART nor Judy is competitive to the aforementioned hashing schemes in terms of performance, and, in the case of ART, sometimes not even in terms of space.

Abstract: Adaptive indexing is a concept that considers index creation in databases as a by-product of query processing, as opposed to traditional full index creation, where the indexing effort is performed up front before answering any queries. Adaptive indexing has received a considerable amount of attention, and several algorithms have been proposed over the past few years, including a recent experimental study comparing a large number of existing methods. Until now, however, most adaptive indexing algorithms have been designed single-threaded, yet with multi-core systems already well established, the idea of designing parallel algorithms for adaptive indexing is very natural. In this regard, and to the best of our knowledge, only one parallel algorithm for adaptive indexing has recently appeared in the literature: the parallel version of standard cracking. In this paper we describe three alternative parallel algorithms for adaptive indexing, including a second variant of a parallel standard cracking algorithm. Additionally, we describe a hybrid parallel sorting algorithm, and a NUMA-aware method based on sorting. We then thoroughly compare all these algorithms experimentally. Parallel sorting algorithms serve as a realistic baseline for multi-threaded adaptive indexing techniques. In total we experimentally compare seven parallel algorithms. The initial set of experiments considered in this paper indicates that our parallel algorithms significantly improve over previously known ones. Our results also suggest that, although adaptive indexing algorithms are a good design choice in single-threaded environments, the rules change considerably in the parallel case. That is, in future highly-parallel environments, sorting algorithms could be serious alternatives to adaptive indexing.

Abstract: We consider the problem of counting straight-edge triangulations of a given set $P$ of $n$ points in the plane. Until very recently it was not known whether the exact number of triangulations of $P$ can be computed asymptotically faster than by enumerating all triangulations. We now know that the number of triangulations of $P$ can be computed in $O^{*}(2^{n})$ time, which is less than the lower bound of $\Omega(2.43^{n})$ on the number of triangulations of any point set. In this paper we address the question of whether one can approximately count triangulations in sub-exponential time. We present an algorithm with sub-exponential running time and sub-exponential approximation ratio, that is, if we denote by $\Lambda$ the output of our algorithm, and by $c^{n}$ the exact number of triangulations of $P$, for some positive constant $c$, we prove that $c^{n}\leq\Lambda\leq c^{n}\cdot 2^{o(n)}$. This is the first algorithm that in sub-exponential time computes a $(1+o(1))$-approximation of the base of the number of triangulations, more precisely, $c\leq\Lambda^{\frac{1}{n}}\leq(1 + o(1))c$. Our algorithm can be adapted to approximately count other crossing-free structures on $P$, keeping the quality of approximation and running time intact. Our algorithm may be useful in guessing, through experiments, the right constants $c_1$ and $c_2$ such that the number of triangulations of any set of $n$ points is between $c_1^n$ and $c_2^n$. Currently there is a large gap between $c_1$ and $c_2$. We know that $c_1 \geq 2.43$ and $c_2 \leq 30$.

Abstract: We study the following algorithmic problem: given $n$ points within a finite $d$-dimensional box, what is the smallest number of extra points that need to be added to ensure that every $d$-dimensional unit box is either empty, or contains at least $k$ points? We motivate the problem through an application to data privacy, namely $k$-anonymity. We show that minimizing the number of extra points to be added is strongly NP-complete, but admits a Polynomial Time Approximation Scheme (PTAS). In some sense, this is the best we can hope for, since a Fully Polynomial Time Approximation Scheme (FPTAS) is not possible, unless P=NP.

Abstract: We give an algorithm that determines the number $\mbox{tr}(S)$ of straight line triangulations of a set $S$ of $n$ points in the plane in worst case time $O(n^2 2^n)$. This is the first algorithm that is provably faster than enumeration, since $\mbox{tr}(S)$ is known to be $\Omega(2.43^n)$ for any set $S$ of $n$ points. Our algorithm requires exponential space. The algorithm generalizes to counting all triangulations of $S$ that are constrained to contain a given set of edges. It can also be used to compute an optimal triangulation of $S$ (unconstrained or constrained) for a reasonably wide class of optimality criteria (that includes e.g. minimum weight triangulations). Finally, the approach can also be used for the random generation of triangulations of $S$ according to the perfect uniform distribution. The algorithm has been implemented and is substantially faster than existing methods on a variety of inputs.

Abstract: Let $P$ be a set of $n$ points in the plane. A crossing-free structure on $P$ is a straight-edge planar graph with vertex set in $P$. Examples of crossing-free structures include triangulations of $P$, and spanning cycles of $P$, also known as polygonalizations of $P$, among others. There has been a large amount of research trying to bound the number of such structures. In particular, bounding the number of triangulations spanned by $P$ has received considerable attention. It is currently known that every set of $n$ points has at most $O(30^{n})$ and at least $\Omega(2.43^{n})$ triangulations. However, much less is known about the algorithmic problem of counting crossing-free structures of a given set $P$. For example, no algorithm for counting triangulations is known that, on all instances, performs faster than enumerating all triangulations. In this paper we develop a general technique for computing the number of crossing-free structures of an input set $P$. We apply the technique to obtain algorithms for computing the number of triangulations and spanning cycles of $P$. The running time of our algorithms is upper bounded by $n^{O(k)}$, where $k$ is the number of onion layers of $P$. In particular, we show that our algorithm for counting triangulations is not slower than $O(3.1414^{n})$. Given that there are several well-studied configurations of points with at least $\Omega(3.464^{n})$ triangulations, and some even with $\Omega(8^{n})$ triangulations, our algorithm is the first to asymptotically outperform any enumeration algorithm for such instances. In fact, it is widely believed that any set of $n$ points must have at least $\Omega(3.464^{n})$ triangulations. If this is true, then our algorithm is strictly sub-linear in the number of triangulations counted. We also show that our techniques are general enough to solve the restricted triangulation counting problem, which we prove to be $W[2]$-hard in the parameter $k$. This implies a "no free lunch" result: In order to be fixed-parameter tractable, our general algorithm must rely on additional properties that are specific to the considered class of structures.

Abstract: Let $P\subset\mathbb{R}^{2}$ be a $k$-colored set of $n$ points in general position, where $k\geq 2$. A $k$-colored quadrangulation of $P$ is a properly colored straight-edge plane graph $G$ with vertex set $P$ such that the boundary of the unbounded face of $G$ coincides with the convex hull of $P$ and each bounded face of $G$ is a quadrilateral. It is easy to check that in general not every $k$-colored $P$ admits a $k$-colored quadrangulation, and hence the use of extra points, for which we can choose the color among the $k$ available colors, is required in order to obtain one. These extra points are known in the literature as Steiner points. In this paper, we show that if $P$ satisfies some condition for the colors of the points in the convex hull, then a $k$-colored quadrangulation of $P$ can always be constructed using fewer than $\frac{(16k-2)n+7k-2}{39k-6}$ Steiner points. Our upper bound improves the previously known upper bound for $k=3$, and represents the first bounds for $k\geq 4$.

Abstract: Nearest Neighbor Searching, i.e. determining from a set $S$ of $n$ sites in the plane the one that is closest to a given query point $q$, is a classical problem in computational geometry. Fast theoretical solutions are known, e.g. point location in the Voronoi Diagram of $S$, or specialized structures such as so-called Delaunay hierarchies. However, practitioners tend to deem these solutions as too complicated or computationally too costly to be actually useful. Recently, in ALENEX 2010, Birn et al. proposed a simple and practical randomized solution. They reported encouraging experimental results and presented a partial performance analysis. They argued that in many cases their method achieves logarithmic expected query time, but they also noted that in some cases linear expected query time is incurred. They raised the question whether some variant of their approach can achieve logarithmic expected query time in all cases. The approach of Birn et al. derives its simplicity mostly from the fact that it applies only one simple type of geometric predicate: which one of two sites in $S$ is closer to the query point $q$. In this paper we show that any method for planar nearest neighbor searching that relies just on this one type of geometric predicate can be forced to make at least $n-1$ such predicate evaluations during a worst-case query.

Abstract: Let $P\subset\mathbb{R}^{2}$ be a set of $n$ points of which $k$ are interior points. Let us call a triangulation $T$ of $P$ even if all its vertices have even degree, and pseudo-even if at least the $k$ interior vertices have even degree. (Pseudo-)Even triangulations have one nice property: their vertices can be $3$-colored, see here for example. Since one can easily check that for some sets of points such triangulations do not exist, we show an algorithm that constructs a set $S$ of at most $\left\lfloor\frac{k + 2}{3}\right\rfloor$ Steiner points (extra points) along with a pseudo-even triangulation $T$ of $P\cup S = V(T)$.

Abstract: We study the problem of approximating $\mbox{MST}(P)$, the Euclidean minimum spanning tree of a set $P$ of $n$ points in $[0,1]^d$, by a spanning tree of some subset $Q\subset P$. We show that if the weight of $\mbox{MST}(P)$ is to be approximated, then in general $Q$ must be large. If the shape of $\mbox{MST}(P)$ is to be approximated, then this is always possible with a small $Q$. More specifically, for any $0<\varepsilon<1$ we prove:
- There are sets $P\subset [0, 1]^{d}$ of arbitrarily large size $n$ with the property that any subset $Q'\subset P$ that admits a spanning tree $T'$ with $\bigl| \left|T'\right|-\left|\mbox{MST}(P)\right|\bigr| < \varepsilon\cdot\left|\mbox{MST}(P)\right|$ must have size at least $\Omega\left({n}^{1 - 1/d}\right)$. Here $|T|$ denotes the weight, i.e. the sum of the edge lengths of tree $T$.
- For any $P\subset [0,1]^d$ of size $n$ there exists a subset $Q\subseteq P$ of size $O\left(1/\varepsilon^{d}\right)$ that admits a spanning tree $T$ that is $\varepsilon$-close to $\mbox{MST}(P)$ in terms of Hausdorff distance (which measures shape dissimilarity).
- This set $Q$ and this spanning tree $T$ can be computed in time $O\left(\tau_d(n) + 1/\varepsilon^d\log\left(1/\varepsilon^d\right)\right)$ for any fixed dimension $d$. Here $\tau_d(n)$ denotes the time necessary to compute the minimum spanning tree of $n$ points in $\mathbb{R}^d$, which is known to be $O(n\log n)$ for $d=2$, $O\left((n\log n)^{4/3}\right)$ for $d=3$, and $O\left(n^{2-2/\left(\lceil d/2\rceil+1\right)+\phi}\right)$, with $\phi>0$ arbitrarily small, for $d>3$, see here.
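To give a feel for the second bullet above (a small subset $Q$ whose spanning tree approximates the shape of $\mbox{MST}(P)$), here is a rough Python sketch: it keeps one representative point per cell of an $\varepsilon$-grid and connects the representatives with Prim's algorithm. The grid-snapping shortcut and all names are my own simplifications for illustration; the paper's construction and its guarantees are more careful than this.

```python
# Illustrative sketch only (my own simplification, not the paper's algorithm):
# keep one representative input point per epsilon-grid cell and connect the
# representatives with a Euclidean minimum spanning tree via Prim's algorithm.
import math

def grid_representatives(points, eps):
    """Keep one input point per cell of the axis-aligned eps-grid."""
    reps = {}
    for p in points:
        cell = tuple(int(coord // eps) for coord in p)
        reps.setdefault(cell, p)
    return list(reps.values())

def prim_mst(points):
    """Return the MST edges of the complete Euclidean graph on `points` (O(n^2))."""
    n = len(points)
    if n <= 1:
        return []
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known connection cost to the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((points[parent[u]], points[u]))
        for v in range(n):
            d = math.dist(points[u], points[v])
            if not in_tree[v] and d < best[v]:
                best[v] = d
                parent[v] = u
    return edges

if __name__ == "__main__":
    import random
    P = [(random.random(), random.random()) for _ in range(1000)]
    Q = grid_representatives(P, eps=0.1)   # |Q| = O(1/eps^2) in the plane
    print(len(Q), "representatives,", len(prim_mst(Q)), "tree edges")
```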
- Journals:
[J6] Zachary Abel, Victor Alvarez, Erik D. Demaine, Sándor P. Fekete, Aman Gour, Adam Hesterberg, Phillip Keldenich, Christian Scheffer. Conflict-Free Coloring of Graphs. SIAM Journal on Discrete Mathematics. 2018. DOI=10.1137/17M1146579
Abstract: A conflict-free $k$-coloring of a graph assigns one of $k$ different colors to some of the vertices such that, for every vertex $v$, there is a color that is assigned to exactly one vertex among $v$ and $v$'s neighbors. Such colorings have applications in wireless networking, robotics, and geometry and are well studied in graph theory. Here we study the natural problem of the conflict-free chromatic number $\chi_{CF}(G)$ (the smallest $k$ for which conflict-free $k$-colorings exist). We provide results both for closed neighborhoods $N[v]$, for which a vertex $v$ is a member of its neighborhood, and for open neighborhoods $N(v)$, for which vertex $v$ is not a member of its neighborhood. For closed neighborhoods, we prove the conflict-free variant of the famous Hadwiger Conjecture: If an arbitrary graph $G$ does not contain $K_{k+1}$ as a minor, then $\chi_{CF}(G)\leq k$. For planar graphs, we obtain a tight worst-case bound: three colors are sometimes necessary and always sufficient. In addition, we give a complete characterization of the algorithmic/computational complexity of conflict-free coloring. It is NP-complete to decide whether a planar graph has a conflict-free coloring with one color, while for outerplanar graphs, this can be decided in polynomial time. Furthermore, it is NP-complete to decide whether a planar graph has a conflict-free coloring with two colors, while for outerplanar graphs, two colors always suffice. For the bicriteria problem of minimizing the number of colored vertices subject to a given bound $k$ on the number of colors, we give a full algorithmic characterization in terms of complexity and approximation for outerplanar and planar graphs. For open neighborhoods, we show that every planar bipartite graph has a conflict-free coloring with at most four colors; on the other hand, we prove that for $k\in\{1,2,3\}$, it is NP-complete to decide whether a planar bipartite graph has a conflict-free $k$-coloring. Moreover, we establish that any general planar graph has a conflict-free coloring with at most eight colors.
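To make the closed-neighborhood definition above concrete, here is a small Python checker I added for illustration (it is not part of the paper): it verifies whether a partial coloring of a graph is conflict-free with respect to closed neighborhoods $N[v]$.

```python
# Illustrative checker (not from the paper): a partial coloring `color`
# maps some vertices to colors; it is conflict-free w.r.t. closed
# neighborhoods if every vertex sees some color exactly once in N[v].
from collections import Counter

def is_conflict_free(adj, color):
    """adj: dict vertex -> iterable of neighbors; color: dict vertex -> color."""
    for v in adj:
        closed_nbhd = set(adj[v]) | {v}
        counts = Counter(color[u] for u in closed_nbhd if u in color)
        if not any(c == 1 for c in counts.values()):
            return False  # no color occurs exactly once around v
    return True

if __name__ == "__main__":
    # A 4-cycle: coloring two opposite vertices with distinct colors works.
    cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(is_conflict_free(cycle4, {0: "red", 2: "blue"}))  # True
    print(is_conflict_free(cycle4, {}))                     # False
```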
Abstract: Let $P$ be a set of $n$ points in the plane. A crossing-free structure on $P$ is a straight-edge plane graph with vertex set $P$. Examples of crossing-free structures include triangulations and spanning cycles, also known as polygonalizations. In recent years, there has been a large amount of research trying to bound the number of such structures; in particular, bounding the number of (crossing-free) triangulations spanned by $P$ has received considerable attention. It is currently known that every set of $n$ points has at most $O(30^{n})$ and at least $\Omega(2.43^{n})$ triangulations. However, much less is known about the algorithmic problem of counting crossing-free structures of a given set $P$. In this paper we develop a general technique for computing the number of crossing-free structures of an input set $P$. We apply the technique to obtain algorithms for computing the number of triangulations, matchings, and spanning cycles of $P$. The running time of our algorithms is upper bounded by $n^{O(k)}$, where $k$ is the number of onion layers of $P$. In particular, for $k = O(1)$ our algorithms run in polynomial time. Additionally, we show that our algorithm for counting triangulations in the worst case over all $k$ takes time $O^{*}(3.1414^{n})$ (in the notation $\Omega^{*}(\cdot)$, $O^{*}(\cdot)$, and $\Theta^{*}(\cdot)$ we neglect polynomial terms and just present the dominating exponential term). Given that there are several well-studied configurations of points with at least $\Omega(3.47^{n})$ triangulations, and some even with $\Omega(8.65^{n})$ triangulations, our algorithm asymptotically outperforms any enumeration algorithm for such instances. We also show that our techniques are general enough to solve the Restricted-Triangulation-Counting-Problem, which we prove to be $W[2]$-hard in the parameter $k$. This implies that in order to be fixed-parameter tractable, our general algorithm must rely on additional properties that are specific to the considered class of crossing-free structures.

Abstract: We consider the problem of counting straight-edge triangulations of a given set $P$ of $n$ points in the plane. Until very recently it was not known whether the exact number of triangulations of $P$ can be computed asymptotically faster than by enumerating all triangulations. We now know that the number of triangulations of $P$ can be computed in $O^{*}(2^{n})$ time, see here, which is less than the lower bound of $\Omega(2.43^{n})$ on the number of triangulations of any point set. In this paper we address the question of whether one can approximately count triangulations in sub-exponential time. We present an algorithm with sub-exponential running time and sub-exponential approximation ratio, that is, denoting by $\Lambda$ the output of our algorithm and by $c^{n}$ the exact number of triangulations of $P$, for some positive constant $c$, we prove that $c^{n}\leq\Lambda\leq c^{n}\cdot 2^{o(n)}$. This is the first algorithm that in sub-exponential time computes a $(1+o(1))$-approximation of the base of the number of triangulations, more precisely, $c\leq\Lambda^{\frac{1}{n}}\leq(1 + o(1))c$. Our algorithm can be adapted to approximately count other crossing-free structures on $P$, keeping the quality of approximation and running time intact. In this paper we show how to do this for matchings and spanning trees.

Abstract: Let $P\subset\mathbb{R}^{2}$ be a set of $n$ points, of which $k$ lie in the interior of the convex hull $\text{CH}(P)$ of $P$. Let us call a triangulation $T$ of $P$ even (odd) if and only if all its vertices have even (odd) degree, and pseudo-even (pseudo-odd) if at least the $k$ interior vertices have even (odd) degree. On the one hand, triangulations having all their interior vertices of even degree have one nice property: their vertices can be 3-colored, see here for example. On the other hand, odd triangulations have recently found an application in the colored version of the classic "Happy Ending Problem" of Erdős and Szekeres, see here. In this paper we show that there are sets of points that admit neither pseudo-even nor pseudo-odd triangulations. Nevertheless, we show how to construct a set of Steiner points $S = S(P)$ of size at most $\frac{k}{3} + c$, where $c$ is a positive constant, such that a pseudo-even (pseudo-odd) triangulation can be constructed on $P\cup S$. Moreover, we also show that even (odd) triangulations can always be constructed using at most $\frac{n}{3} + c$ Steiner points, where again $c$ is a positive constant. Our constructions have the property that every Steiner point lies in the interior of $\text{CH}(P)$.

Abstract: We study the problem of approximating $\mbox{MST}(P)$, the Euclidean minimum spanning tree of a set $P$ of $n$ points in $[0,1]^d$, by a spanning tree of some subset $Q\subset P$. We show that if the weight of $\mbox{MST}(P)$ is to be approximated, then in general $Q$ must be large. If the shape of $\mbox{MST}(P)$ is to be approximated, then this is always possible with a small $Q$. More specifically, for any $0<\varepsilon<1$ we prove:
- There are sets $P\subset [0, 1]^{d}$ of arbitrarily large size $n$ with the property that any subset $Q'\subset P$ that admits a spanning tree $T'$ with $\bigl| \left|T'\right|-\left|\mbox{MST}(P)\right|\bigr| < \varepsilon\cdot\left|\mbox{MST}(P)\right|$ must have size at least $\Omega\left({n}^{1 - 1/d}\right)$. Here $\left|T\right|$ denotes the weight, i.e. the sum of the edge lengths of tree $T$.
- For any $P\subset [0,1]^d$ of size $n$ there exists a subset $Q\subseteq P$ of size $O\left(1/\varepsilon^{d}\right)$ that admits a spanning tree $T$ that is $\varepsilon$-close to $\mbox{MST}(P)$ in terms of Hausdorff distance (which measures shape dissimilarity).
- This set $Q$ and this spanning tree $T$ can be computed in time $O\left(\tau_{d,p}(n) + 1/\varepsilon^d\log\left(1/\varepsilon^d\right)\right)$ for any fixed dimension $d$. Here $\tau_{d,p}(n)$ denotes the time necessary to compute the minimum weight spanning tree of $n$ points in $\mathbb{R}^d$ under any fixed metric $L_p,\ 1\leq p\leq\infty$, where $\tau_{2,p}(n) = O(n\log n)$, see here, $\tau_{3,2}(n) = O\left((n\log n)^{4/3}\right)$, and $\tau_{d, 2}(n) = O\left(n^{2-2/\left(\lceil d/2\rceil+1\right)+\phi}\right)$, with $\phi>0$ arbitrarily small, for $d>3$, see here. Also $\tau_{3,1}(n)$ and $\tau_{3,\infty}(n)$ are known to be $O(n\log n)$, see here.
Abstract: Let $P$ be a $k$-colored point set in general position, $k\geq 2$. A family of quadrilaterals with disjoint interiors $Q_{1},\ldots, Q_{m}$ is called a quadrangulation of $P$ if $V(Q_{1})\cup\cdots\cup V(Q_{m}) = P$, the edges of all $Q_{i}$ join points with different colors, and $Q_{1}\cup\cdots\cup Q_{m} = Conv(P)$. In general it is easy to see that not all $k$-colored point sets admit a quadrangulation; when they do, we call them quadrangulatable. For a point set to be quadrangulatable it must satisfy that its convex hull $Conv(P)$ has an even number of points and that consecutive vertices of $Conv(P)$ receive different colors. This will be assumed from now on. In this paper, we study the following type of questions: Let $P$ be a $k$-colored point set. How many Steiner points in the interior of $Conv(P)$ do we need to add to $P$ to make it quadrangulatable? When $k = 2$, we usually call $P$ a bichromatic point set, and its color classes are usually denoted by $R$ and $B$, i.e. the red and blue elements of $P$. In this paper, we prove that any bichromatic point set $P = R\cup B$ where $|R| = |B| = n$ can be made quadrangulatable by adding at most $\left\lfloor\frac{n - 1}{3}\right\rfloor + \left\lfloor\frac{n}{2}\right\rfloor + 1$ Steiner points, and that $\frac{m}{3}$ Steiner points are occasionally necessary. To prove the latter, we also show that the convex hull of any monochromatic point set $P$ of $n$ elements can always be partitioned into a set $S = \{S_{1}, \ldots, S_{t}\}$ of star-shaped polygons with disjoint interiors, where $V(S_{1})\cup\cdots\cup V(S_{t}) = P$, and $t\leq\left\lfloor\frac{n-1}{3}\right\rfloor + 1$. For $n = 3k$ this bound is tight. Finally, we prove that there are 3-colored point sets that cannot be completed to 3-quadrangulatable point sets.
- Other manuscripts:
[M5] Zachary Abel, Victor Alvarez, Erik D. Demaine, Sándor P. Fekete, Aman Gour, Adam Hesterberg, Phillip Keldenich, Christian Scheffer. Three Colors Suffice: Conflict-Free Coloring of Planar Graphs. Computing Research Repository (CoRR), 2017. abs/1701.05999. This is the unpolished version of [C13].
Abstract: A conflict-free $k$-coloring of a graph assigns one of $k$ different colors to some of the vertices such that, for every vertex $v$, there is a color that is assigned to exactly one vertex among $v$ and $v$'s neighbors. Such colorings have applications in wireless networking, robotics, and geometry, and are well-studied in graph theory. Here we study the natural problem of the conflict-free chromatic number $\chi_{CF}(G)$ (the smallest $k$ for which conflict-free $k$-colorings exist), with a focus on planar graphs. For general graphs, we provide a sufficient condition for the existence of a conflict-free coloring, corresponding to a conflict-free variant of the Hadwiger Conjecture: if $G$ does not contain $K_{k+1}$ as a minor, then $\chi_{CF}(G) \leq k$. For planar graphs, we obtain a tight worst-case bound: three colors are sometimes necessary and always sufficient. In addition, we give a complete characterization of the algorithmic/computational complexity of conflict-free coloring. It is NP-complete to decide whether a planar graph has a conflict-free coloring with one color, while for outerplanar graphs, this can be decided in polynomial time. Furthermore, it is NP-complete to decide whether a planar graph has a conflict-free coloring with two colors, while for outerplanar graphs, two colors always suffice. For the bicriteria problem of minimizing the number of colored vertices subject to a given bound $k$ on the number of colors, we give a full algorithmic characterization in terms of complexity and approximation for outerplanar and planar graphs.

Abstract: Adaptive indexing is a concept that considers index creation in databases as a by-product of query processing, as opposed to traditional full index creation, where the indexing effort is performed up front before answering any queries. Adaptive indexing has received a considerable amount of attention, and several algorithms have been proposed over the past few years, including a recent experimental study comparing a large number of existing methods. Until now, however, most adaptive indexing algorithms have been designed single-threaded, yet with multi-core systems already well established, the idea of designing parallel algorithms for adaptive indexing is very natural. In this regard only one parallel algorithm for adaptive indexing has recently appeared in the literature: the parallel version of standard cracking. In this paper we describe three alternative parallel algorithms for adaptive indexing, including a second variant of a parallel standard cracking algorithm. Additionally, we describe a hybrid parallel sorting algorithm, and a NUMA-aware method based on sorting. We then thoroughly compare all these algorithms experimentally, along with a variant of a recently published parallel version of radix sort. Parallel sorting algorithms serve as a realistic baseline for multi-threaded adaptive indexing techniques. In total we experimentally compare seven parallel algorithms. Additionally, we extensively profile all considered algorithms. The initial set of experiments considered in this paper indicates that our parallel algorithms significantly improve over previously known ones. Our results suggest that, although adaptive indexing algorithms are a good design choice in single-threaded environments, the rules change considerably in the parallel case. That is, in future highly-parallel environments, sorting algorithms could be serious alternatives to adaptive indexing.

Abstract: We consider the problem of counting straight-edge triangulations of a given set $P$ of $n$ points in the plane. Until very recently it was not known whether the exact number of triangulations of $P$ can be computed asymptotically faster than by enumerating all triangulations. We now know that the number of triangulations of $P$ can be computed in $O^{*}(2^{n})$ time [AS13], which is less than the lower bound of $\Omega(2.43^{n})$ on the number of triangulations of any point set [SSW11]. In this paper we address the question of whether one can approximately count triangulations in sub-exponential time. We present an algorithm with sub-exponential running time and sub-exponential approximation ratio, that is, denoting by $\Lambda$ the output of our algorithm, and by $c^{n}$ the exact number of triangulations of $P$, for some positive constant $c$, we prove that $c^{n}\leq\Lambda\leq c^{n}\cdot 2^{o(n)}$. This is the first algorithm that in sub-exponential time computes a $(1+o(1))$-approximation of the base of the number of triangulations, more precisely, $c\leq\Lambda^{\frac{1}{n}}\leq(1 + o(1))c$. Our algorithm can be adapted to approximately count other crossing-free structures on $P$, keeping the quality of approximation and running time intact. In this paper we show how to do this for matchings and spanning trees.

Abstract: Let $P$ be a set of $n$ points in the plane. A crossing-free structure on $P$ is a straight-edge planar graph with vertex set in $P$. Examples of crossing-free structures include triangulations of $P$, and spanning cycles of $P$, also known as polygonalizations of $P$, among others. There has been a large amount of research trying to bound the number of such structures. In particular, bounding the number of triangulations spanned by $P$ has received considerable attention. It is currently known that every set of $n$ points has at most $O(30^{n})$ and at least $\Omega(2.43^{n})$ triangulations. However, much less is known about the algorithmic problem of counting crossing-free structures of a given set $P$. For example, no algorithm for counting triangulations is known that, on all instances, performs faster than enumerating all triangulations. In this paper we develop a general technique for computing the number of crossing-free structures of an input set $P$. We apply the technique to obtain algorithms for computing the number of triangulations and spanning cycles of $P$. The running time of our algorithms is upper bounded by $n^{O(k)}$, where $k$ is the number of onion layers of $P$. In particular, we show that our algorithm for counting triangulations is not slower than $O(3.1414^{n})$. Given that there are several well-studied configurations of points with at least $\Omega(3.464^{n})$ triangulations, and some even with $\Omega(8^{n})$ triangulations, our algorithm is the first to asymptotically outperform any enumeration algorithm for such instances. In fact, it is widely believed that any set of $n$ points must have at least $\Omega(3.464^{n})$ triangulations. If this is true, then our algorithm is strictly sub-linear in the number of triangulations counted. We show experiments comparing our algorithm for counting triangulations with the algorithm presented here, which is supposed to be very fast in practice. We also show that our techniques are general enough to solve the restricted triangulation counting problem, which we prove to be $W[2]$-hard in the parameter $k$. This implies a "no free lunch" result: In order to be fixed-parameter tractable, our general algorithm must rely on additional properties that are specific to the considered class of structures.

Abstract: Let $P\subset\mathbb{R}^{2}$ be a set of $n$ points. In [A99] and [ARSS03] an algorithm for counting triangulations and pseudo-triangulations of $P$, respectively, is shown. Both algorithms are based on the divide-and-conquer paradigm, and both work by finding sub-structures on triangulations and pseudo-triangulations that allow the problems to be split. These sub-structures are called triangulation paths for triangulations, or T-paths for short, and zig-zag paths for pseudo-triangulations, or PT-paths for short. Those two algorithms have turned out to be very difficult to analyze, to the point that no good analysis of their running time has been presented so far. The interesting thing about those algorithms, besides their simplicity, is that they experimentally indicate that counting can be done significantly faster than enumeration. In this paper we show two new algorithms, one to compute the number of triangulations of $P$, and one to compute the number of pseudo-triangulations of $P$. They are also based on T-paths and PT-paths respectively, but use the sweep line paradigm and not divide-and-conquer. The important thing about our algorithms is that they admit a good analysis of their running times. We will show that our algorithms run in time $O^{*}(t(P))$ and $O^{*}(pt(P))$ respectively, where $t(P)$ and $pt(P)$ are the largest numbers of T-paths and PT-paths, respectively, that the algorithms encounter during their execution. Moreover, we show that $t(P) = O^{*}(9^{n})$, which is the first non-trivial bound on $t(P)$ to be known. While the algorithm for counting triangulations of [ABCR12] is faster in the worst case, $O^{*}\left(3.1414^{n}\right)$, than our algorithm, $O^{*}\left(9^{n}\right)$, there are sets of points where the number of T-paths is $O(2^{n})$. In such cases our algorithm may be faster. Furthermore, it is not clear whether the algorithm presented in [ABCR12] can be modified to count pseudo-triangulations so that its running time remains $O^{*}(c^n)$ for some small constant $c\in\mathbb{R}$. Therefore, for counting pseudo-triangulations (and possibly other similar structures) our approach seems better.
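For readers from outside the databases community, the adaptive-indexing abstracts above build on the idea of standard cracking, where every range query partially reorders the column it touches and records the resulting partition boundaries. The following Python sketch is my own toy illustration of that baseline idea, single-threaded and deliberately simplified (real implementations partition in place and are far more careful); it is not code from the papers.

```python
# Toy, single-threaded "standard cracking" sketch (my own simplification of the
# idea described in the adaptive-indexing abstracts above; not the papers' code).
# Each range query physically reorders only the array piece it touches and
# remembers the crack positions, so later queries scan ever smaller pieces.
import bisect

class CrackedColumn:
    def __init__(self, values):
        self.data = list(values)
        # Cracker index: sorted crack values and their positions, with the
        # invariant data[:pos] < value <= data[pos:].
        self.crack_values = []
        self.crack_positions = []

    def _crack(self, value):
        """Partition the piece containing `value` and record the boundary."""
        i = bisect.bisect_left(self.crack_values, value)
        if i < len(self.crack_values) and self.crack_values[i] == value:
            return self.crack_positions[i]
        lo = self.crack_positions[i - 1] if i > 0 else 0
        hi = self.crack_positions[i] if i < len(self.crack_positions) else len(self.data)
        piece = self.data[lo:hi]
        smaller = [x for x in piece if x < value]
        larger = [x for x in piece if x >= value]
        self.data[lo:hi] = smaller + larger   # real cracking partitions in place
        pos = lo + len(smaller)
        self.crack_values.insert(i, value)
        self.crack_positions.insert(i, pos)
        return pos

    def range_query(self, low, high):
        """Return all values v with low <= v < high (cracking as a side effect)."""
        start = self._crack(low)
        end = self._crack(high)
        return self.data[start:end]

if __name__ == "__main__":
    import random
    col = CrackedColumn(random.sample(range(1000), 1000))
    print(sorted(col.range_query(100, 120)))  # values in [100, 120)
```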
Awards
- Best paper award at the 29th Symposium on Computational Geometry (SoCG '13), Rio de Janeiro, Brazil.
Recent lectures
- Introduction to Algorithm Engineering (in German) at TU Braunschweig in winter semester 2016/2017.
- Algorithm Engineering at TU Braunschweig in summer semester 2016.
- Computational Geometry at TU Braunschweig in winter semester 2015/2016.
Before giving my own lectures, I was a teaching assistant for the following lectures:
- Seminar Algorithmik (in German and English) given by Prof. Dr. Sándor Fekete at TU Braunschweig in summer semester 2015.
- Approximation Algorithms given by Prof. Dr. Sándor Fekete at TU Braunschweig in summer semester 2015.
- Current Topics in Big Data Management given by Prof. Dr. Jens Dittrich at Saarland University in winter semester 2013/2014.
- Algorithms and Data Structures given by Prof. Dr. Raimund Seidel at Saarland University in winter semester 2012/2013.
- Das Buch der Beweise (in German) given by Prof. Dr. Raimund Seidel at Saarland University in winter semester 2012/2013.
- Perlen der Theoretischen Informatik (in German) given by Prof. Dr. Raimund Seidel at Saarland University in summer semester 2012.
- Algorithms and Data Structures (Intensive Summer Course) given by Prof. Dr. Raimund Seidel and Dr. Jiong Guo at Saarland University in summer semester 2010.
- Allerlei Algorithmen (in German) given by Prof. Dr. Raimund Seidel at Saarland University in summer semester 2010.
- Probleme in der diskreten kombinatorischen Geometrie (in German) given by Prof. Dr. Raimund Seidel at Saarland University in summer semester 2009.
- Algorithms and Data Structures (Intensive Summer Course) given by Prof. Dr. Raimund Seidel and Prof. Dr. Kurt Mehlhorn at Saarland University in summer semester 2008.
Other professional activities
I have been a reviewer for the following conferences and journals: CCCG, SoCG, VLDB, ICDE, IJCGA, SODA, SIGMOD, and ICALP.