October 25, 2018

Thomas Weise (Hefei University)
Automating Scientific Work in Optimization
In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Good experimentation is complicated. Most algorithms in the domain are anytime algorithms, meaning they can improve their approximation quality over time. This means that one algorithm may initially perform better than another one but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared (and runtime may be measured in multiple ways). We do not just want to know which algorithm performs best and which problem is the hardest; a researcher wants to know why. We introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose reasons why a certain algorithm setup (or problem instance) belongs to a certain algorithm (or problem instance) behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster IDs. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open-source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of “quantitative” algorithm behavior, our process also allows for “qualitative” conclusions by feeding it with data which is normalized with problem features or algorithm parameters. It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies.

October 18, 2018 
Martin Böhm (University of Bremen)
Online Bin Stretching: Algorithms and Computer Lower Bounds
We investigate a problem in semi-online algorithm design, called Online Bin
Stretching. The problem can be understood as an online repacking problem: The
goal of the algorithm is to repack items of various sizes into m containers of
identical size R>1. The input items arrive one by one and the algorithm must
assign an item to a container before the next item arrives.
A speciality of this problem is that there is a specific guarantee made to the
algorithm: The algorithm learns at the start of the input that there exists a
packing of all input items into m containers of capacity 1.
Our goal is to design algorithms for this problem which successfully pack the
entire incoming sequence one by one while requiring the lowest container
capacity R possible.
In this talk, we discuss recent results about Online Bin Stretching which were
included in the speaker's PhD thesis:
First, we design an algorithm that is able to pack the entire input into m
containers of capacity 1.5 regardless of what the value of m will be. Second,
we show a specialized algorithm for the setting of just 3 containers; this
algorithm is able to pack into 3 bins of capacity 1.375. Finally, we design and
implement an involved search algorithm which is able to find lower bounds for
Online Bin Stretching; in fact, we show the best known lower bounds for
3 ≤ m ≤ 8.
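To make the online setting concrete, here is a minimal illustrative sketch of the packing game (the greedy rule below is ours for illustration only; it is not one of the thesis algorithms): each arriving item must be assigned immediately to one of \(m\) containers of capacity \(R\).

```python
def pack_online(items, m, R):
    """Play the Online Bin Stretching game with a simple greedy rule:
    put each item into the fullest container that still stays below
    capacity R. Returns the final loads, or None if the rule fails."""
    loads = [0.0] * m
    for x in items:
        # containers that can still take x without exceeding R
        candidates = [i for i in range(m) if loads[i] + x <= R]
        if not candidates:
            return None  # this simple rule needs a larger R here
        best = max(candidates, key=lambda i: loads[i])
        loads[best] += x
    return loads
```

Any sequence packable into \(m\) unit containers is guaranteed by the problem definition to exist; the algorithmic question is how small a stretch factor \(R\) suffices for an online rule.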

September 11, 2018 
Leen Stougie (CWI Amsterdam)
A Decomposition Theory For Vertex Enumeration of Convex Polyhedra
In recent years the vertex enumeration problem for polyhedra has seen a revival in the study of metabolic networks, which has increased the demand for efficient vertex enumeration algorithms for high-dimensional polyhedra given by equalities and inequalities. The complexity of enumerating the vertices of polytopes (bounded polyhedra) is a famous open problem in discrete and computational geometry.
In this lecture we do not solve this problem, but present a type of fixed-parameter total-polynomial-time result.
We apply the concept of branch-decomposition to the vertex enumeration problem of polyhedra \(P = \{x : Ax = b; x \geq 0\}\). For that purpose, we introduce the concept of a \(k\)-module and show how it relates to the separators of the linear matroid generated by the columns of \(A\). This then translates structural properties of the matroidal branch-decomposition to the context of polyhedra. We then use this to present a total polynomial time algorithm for enumerating all vertices of polytopes \(P\) for which the branch-width of the linear matroid generated by the columns of \(A\) is bounded by a constant \(k\). If time permits we will show a truly fixed-parameter tractable result, with time being an exponential function of the parameters only times a polynomial function of the input/output size, if on top of bounded branch-width we also assume that all square submatrices of \(A\) have bounded determinant.
This paper is joint work with Arne Reimers.

September 10, 2018 
Klaus Heeger (University of Bonn)
A 10/7-Approximation for 2-Connected Spanning Subgraphs
We consider the 2-vertex-connected spanning subgraph problem. This problem asks, given a 2-vertex-connected graph, to find a 2-vertex-connected spanning subgraph with a minimum number of edges. This classical network optimization problem is known to be APX-hard. We give a 10/7-approximation algorithm for this problem, improving on the 3/2-approximation by Garg, Vempala and Singla. The algorithm is similar to the 17/12-approximation algorithm for the related 2-edge-connected spanning subgraph problem, and works by carefully modifying an open ear-decomposition with a minimum number of even ears.

September 3, 2018 
Franziska Eberle (University of Bremen)
New Results on Throughput Maximization
We study a fundamental online job admission
problem where jobs with processing times and deadlines arrive online
over time at their release dates, and the task is to determine a
preemptive single-server schedule which maximizes the number of
completed jobs. To circumvent known impossibility results, we make
a standard slackness assumption by which the feasible time window
for scheduling a job is at least \(1+\varepsilon\) times its processing time,
for some \(\varepsilon>0\). We analyze the competitive ratio of algorithms
depending on the slackness parameter \(\varepsilon\).
In this paper we quantify the impact that different provider
commitment requirements have on the performance of online
algorithms. Studying this impact is important, for example, in modern
cloud services, where customers do not want last-minute rejections
of critical tasks and request an early-enough provider-side commitment to
completing admitted jobs. Our main contribution is
one universal algorithmic framework for online job admission both with and
without commitments. For scheduling without commitment, we prove that
our algorithm with a competitive ratio of \(O(\frac{1}{\varepsilon})\) is the
best possible (deterministic) for this problem. For different
commitment models, we give the first nontrivial performance bounds.
In the setting where commitment
decisions must be made before a job's slack becomes less than a \(\delta\)-fraction of its size, we prove
a competitive ratio of \(O(\varepsilon \cdot ((\varepsilon -
\delta)\delta^2)^{-1})\), for \(0 < \delta < \varepsilon\). When a provider must commit
upon starting a job, our bound is \(O(\varepsilon^{-2})\). A stronger
commitment model, where a provider has to commit at job arrival, does
not admit any bounded competitive ratio even for randomized
algorithms. Finally, we observe that for scheduling with
commitment the restriction to the ``unweighted'' throughput model is
essential; if jobs have individual weights, we rule out any bounded
competitive ratio (for deterministic algorithms).
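The slackness assumption above is a simple arithmetic condition on each job; a one-line illustrative check (function name and argument layout are ours, not from the paper):

```python
def has_slack(release, deadline, proc, eps):
    """Check the slackness assumption: the feasible time window
    [release, deadline] must be at least (1 + eps) times the job's
    processing time."""
    return deadline - release >= (1 + eps) * proc
```

A job failing this check is exactly the kind of instance behind the known impossibility results that the assumption is designed to circumvent.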

July 17, 2018 
Adrian Fessel (University of Bremen)
Topological Dynamics of Physarum polycephalum Transport Networks
Network topology, as well as dynamic processes such as flow, synchronization of oscillators, or wave activity that take place on a given network topology, have been extensively studied. In self-organizing networks, topology and dynamic processes interact in a unique way: the network adapts its structure based on local activity, enabling the emergence of global dynamic properties on an optimized topology.
Working with Physarum polycephalum as an experimentally accessible model for an adaptive transportation system, we seek to formalize topological development into a sequence of discrete events intended to serve as a basis for studying the interaction of flow and topology. We focus on the reorganization of P. polycephalum networks after fragmentation, a process involving a prominent percolation transition. Unlike in most traditional percolation processes, the system size in P. polycephalum is not fixed, which requires a model that allows for dynamic changes in size.
The theoretical description follows a master equation with parameters obtained from statistical analysis of experimental data. Computational investigation of the model recreates the dynamics of the topological phase transition and enables characterization of finite size effects and critical exponents. Comparing the influences of system growth and fusion within the system reveals a significant shift of the transition when system growth dominates.

July 10, 2018 
Sigrid Knust (Osnabrück University)
Synchronous flow shop scheduling problems
A synchronous flow shop is a variant of a non-preemptive permutation flow
shop where transfers of jobs from one machine to the next take place at
the same time. The processing is organized in synchronized cycles which
means that in a cycle all current jobs start at the same time on the
corresponding machines. Then all jobs are processed and have to wait
until the last one is finished. Afterwards, all jobs are moved to the
next machine simultaneously. As a consequence, the processing time of a
cycle is determined by the maximum processing time of the operations
contained in it. Furthermore, only permutation schedules are feasible, i.e.,
the jobs have to be processed in the same order on all machines.
The goal is to find a permutation of the jobs such that the makespan is
minimized.
Motivated by a practical application, we investigate special cases where the
processing times of the cycles are determined by a subset of so-called
dominating machines. Furthermore, we consider problems with additional
resources and setup times.
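The cycle structure described above admits a direct makespan computation for a given job permutation. A minimal illustrative sketch (the function name and data layout are ours, not from the talk): in cycle \(c\), the job at position \(c-k\) of the permutation occupies machine \(k\), and the cycle lasts as long as its longest operation.

```python
def synchronous_makespan(perm, p):
    """Makespan of a synchronous flow shop under job order `perm`.
    p[j][k] = processing time of job j on machine k (0-indexed).
    With n jobs and m machines there are n + m - 1 cycles; each cycle
    takes the maximum processing time of the operations it contains."""
    n, m = len(perm), len(p[0])
    total = 0.0
    for c in range(n + m - 1):
        ops = [p[perm[c - k]][k] for k in range(m) if 0 <= c - k < n]
        total += max(ops)
    return total
```

Comparing permutations with such a routine makes the optimization problem, finding the order minimizing this sum, explicit.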

May 15, 2018 
Bertrand Simon (ENS de Lyon)
Task graph scheduling on modern computing platforms
With the increasing complexity of modern computing platforms, many scientific
applications now rely on runtime schedulers. Most of these frameworks model an
application as a directed acyclic graph of tasks, where nodes represent tasks
and edges represent dependences between tasks. In my thesis, I focused on three
theoretical challenges concerning task graph scheduling: exploiting the inherent
parallelism of tasks, efficiently using accelerators, and coping with a limited
memory.
In this talk, I will detail two projects led in this context. The first one
targets a platform composed of two types of processors, such as CPUs and GPUs.
The processing time of each task depends on the type of processor it is
allocated to, and this decision is one of the main scheduling choices. We study the
makespan minimization problem in an online setting, where the graph is gradually
discovered, which increases the difficulty of the decision. In a second project,
we focus on reducing the memory consumption of an application in order to avoid
expensive memory transfers. Specifically, we aim at preventing dynamic
schedulers from running out of memory on a sharedmemory platform. These
schedulers generally start multiple tasks in parallel, which may lead to a
memory shortage. We study how to modify the task graph in order to restrict this
parallelism while maintaining performance.

April 24, 2018 
Viktor Bindewald (TU Dortmund)
Robust Bipartite Matchings: Hedging Against Edge Failures
The Bipartite Matching Problem is a well-studied combinatorial optimization problem with numerous practical applications. In real life, optimization problems are modeled using data based on forecasts and estimates, which leads to uncertain problem parameters. One way to take this fact into account is provided by the framework of robust optimization.
While most robust variants of the Bipartite Matching Problem considered in the literature address cost uncertainty, this talk deals with a new robust problem, where the structure of the underlying bipartite graph is uncertain. In our problem, an additional part of the input is a list of failure scenarios, each described by a subset of edges of the graph that may fail simultaneously. If a scenario emerges, the corresponding edges are deleted from the graph. In the robust problem the goal is to find a minimum-cost set of edges containing a perfect matching in every scenario, i.e., a robust solution survives any possible incident. We will discuss the computational complexity and present asymptotically tight approximation algorithms. Additionally, we address the following related question: Is it easier to harden an existing infrastructure than to design a new one from scratch? In our setting this leads to the question of how to robustify a given perfect matching by adding new edges to the ambient graph. For this problem we will present hardness and algorithmic results.

April 10, 2018 
Franziska Eberle (University of Bremen)
On Index Policies in Stochastic Scheduling
We address a fundamental stochastic scheduling problem where jobs with uncertain processing times must be scheduled non-preemptively on \(m\) identical parallel machines. We are given a set \(J\) of \(n\) jobs, where each job \(j\in J\) is modeled by a random variable \(P_j\) with known distribution. The actual realization of a processing time becomes known only by executing a job. The goal is to find a policy \(\Pi\) that decides at any point in time which jobs to schedule so as to minimize the expected total completion time, \(\sum_{j\in J} \mathbb{E}[C_j^\Pi]\), where \(C_j^\Pi\) denotes the completion time of job \(j\) under policy \(\Pi\). The scheduling problem can be stated in the standard three-field notation as \(P\mid\mid\mathbb{E}[\sum_j C_j]\).
We consider so-called index policies that assign the same priority to jobs with the same probability distribution and schedule jobs one after the other on the first machine that becomes available. Job-based index policies do not consider the number of jobs or the number of machines. We rule out distribution-independent approximation factors for job-based index policies. Somewhat surprisingly, this lower bound is obtained already for very simple instances with only two types of jobs: identical deterministic jobs and a set of stochastic jobs that all follow the same Bernoulli distribution. For this class of instances we also give a policy that is an \(\mathcal{O}(m)\)-approximation.
This talk is based on work together with Felix Fischer, Nicole Megow, and Jannik Matuschke.

March 15, 2018 
Katrin Casel (Trier University)
Resolving Conflicts for Lower-Bounded Clustering
For most clustering problems, the quality of a solution is usually assessed with respect to a given pairwise distance on the input objects. Approximate solutions for such tasks often rely on this distance being a metric. But what happens if this property does not hold? Especially for computing a clustering such that each cluster has a certain minimum cardinality k>2 (as required for anonymisation or balanced facility location problems), this becomes problematic. For example, with the objectives to minimise the maximum radius or diameter of all clusters, there exists no polynomial-time constant-factor approximation if the triangle inequality is violated, while there exists a 2-approximation for metric distances. This talk introduces methods to resolve or at least soften this effect of non-metric distances by devising particular strategies to deal with violations of the triangle inequality (conflicts). With the use of parameterised algorithmics it turns out that, if the number of such conflicts is not too large, constant-factor approximations can still be computed efficiently.

December 19, 2017 
Ruben Hoeksma (University of Bremen)
A PTAS for TSP on hyperplanes
TSP with neighborhoods is a long-studied problem in which we are asked to find the shortest tour visiting a given set of regions (= neighborhoods).
When the neighborhoods are lines in two dimensions, the problem is efficiently solvable. For lines in three dimensions, the problem is NP-hard, and this is generally true in higher dimensions for regions that are low-dimensional subspaces.
The question we try to answer in this talk concerns the setting where the regions are hyperplanes in three or more dimensions. The complexity of this problem is open (no hardness results are known) and, until now, only a recent \(2^d\)-approximation algorithm was known. In this talk, we make a step forward in answering this sixteen-year-old open question by designing a PTAS.
In a high-level view, we will see the underlying ideas, including a (new) sparsification technique for polytopes and a linear program that computes the shortest tour.
This talk is based on work together with Antonios Antoniadis, Krzysztof Fleszar, and Kevin Schewior.

December 12, 2017

Syamantak Das (University of Bremen)
Surviving in LowTreewidth Graphs
In the Survivable Network Design Problem (SNDP), we are given an undirected graph \(G\) together with costs on edges or vertices and a demand function on source-sink demand pairs \((s_i, t_i, k_i)\), where \(s_i, t_i\) are vertices in the graph and \(k_i\) is a natural number.
In the edge-connectivity setting (EC-SNDP), we are interested in a minimum-cost subgraph \(H\) of \(G\) in which every demand pair \((s_i, t_i)\) is connected via \(k_i\) edge-disjoint paths. These problems are natural generalizations of Steiner tree and Steiner forest, which capture fault-tolerance requirements in networks.
In general, these problems are NP-hard, and even restrictive special cases of SNDP pose non-trivial challenges.
As such, SNDP has drawn a lot of attention from researchers over the past decades.
Special cases of SNDP, like Steiner Tree, Steiner Forest, the \(2\)-edge-connected subgraph problem, etc., have been studied extensively on many special classes of graphs, e.g. planar graphs, Euclidean metrics, and bounded-treewidth graphs.
In this talk, we shall discuss some recent results on exact algorithms for the general SNDP parameterized by the treewidth of the graph. In particular, we shall demonstrate a fairly general dynamic programming framework based on a tree decomposition of the graph, which can be applied to solve a number of related network design problems.

December 5, 2017

Nicole Megow (University of Bremen)
Banff 2017: Approximation Algorithms and the Hardness of Approximation

November 29, 2017

Thomas Schneider (University of Bremen)
The Taming of the Expressive: Foundations for Efficient Reasoning in Complex Modal and Description Logics
Modal logic and its syntactic variants are widely used in knowledge representation and verification. Applications in those areas require modal logics with a careful balance of the rivalling requirements "high expressivity" and "low computational complexity". The talk will give an overview of our contributions to "taming" expressive temporal, description, and hybrid logics, which consist in identifying fragments with good computational complexity and studying modularity. These contributions fall into three groups: (1) systematic studies of the complexity of sub-Boolean fragments of expressive modal and description logics, (2) theoretic and practical studies of module extraction and modularisation of ontologies, and (3) a study of the complexity of lightweight combinations of temporal and description logics.
The talk will be accessible to a general audience with a background in (theoretical) computer science.

November 21, 2017 
Rolf Drechsler (University of Bremen)
Design and Verification of CyberPhysical Systems – Challenges and Recent Developments
New challenges need to be faced during the modeling, design, implementation, verification,
and test of Cyber-Physical Systems (CPS). For the design of hardware and software in this
context, new approaches need to be developed, also taking non-functional requirements into
account. This talk gives an overview of challenges and recent developments, with a special
focus on verification and correctness.

October 24, 2017 
Tobias Mömke (University of Bremen)
Maximum Scatter TSP in Doubling Metrics
We study the problem of finding a tour of \(n\) points in which every edge is long.
More precisely, we wish to find a tour that visits every point exactly once, maximizing the length of the shortest edge in the tour.
The problem is known as Maximum Scatter TSP, and was introduced by Arkin et al. (SODA 1997), motivated by applications in manufacturing and medical imaging.
Arkin et al. gave a \(0.5\)-approximation for the metric version of the problem and showed that this is the best possible ratio achievable in polynomial time (assuming \(P \neq NP\)).
Arkin et al. raised the question of whether a better approximation ratio can be obtained in the Euclidean plane.
We answer this question in the affirmative in a more general setting, by giving
a \((1-\varepsilon)\)-approximation algorithm for \(d\)-dimensional doubling metrics, with running time \( \tilde{O}(n^3 + 2^{O(K \log K)}) \), where \(K \leq (13/\varepsilon)^d\).
As a corollary we obtain (i) an efficient polynomial-time approximation scheme (EPTAS) for all constant dimensions \(d\), (ii) a polynomial-time approximation scheme (PTAS) for dimension \(d = (\log \log n)/c\), for a sufficiently large constant \(c\), and (iii) a PTAS for constant \(d\) and \(\varepsilon = \Omega(1/\log \log n)\). Furthermore, we show the dependence on \(d\) in our approximation scheme to be essentially optimal, unless Satisfiability can be solved in subexponential time.
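For very small instances, the bottleneck objective of Maximum Scatter TSP can be evaluated exactly by brute force; the sketch below (ours, purely illustrative) makes the objective precise: over all tours, maximize the length of the shortest tour edge.

```python
from itertools import permutations

def max_scatter_value(dist):
    """Exact Maximum Scatter TSP value by brute force (tiny n only).
    dist[i][j] = distance between points i and j. Returns the largest
    achievable length of the shortest edge over all closed tours."""
    n = len(dist)
    best = 0.0
    for perm in permutations(range(1, n)):  # fix point 0 to kill rotations
        tour = (0,) + perm
        shortest = min(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        best = max(best, shortest)
    return best
```

The approximation algorithms in the talk achieve \((1-\varepsilon)\) times this value in doubling metrics without any such exhaustive search.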

October 17, 2017 
Ruben Hoeksma (University of Bremen)
Optimal Threshold Strategies and Posted Price Mechanisms for Random Arrivals
In this talk, we discuss the following setting. A gambler is presented with a set of random variables that come to him in a random order. When a variable arrives, the gambler sees its value and decides to take it or go on to the next. If the gambler takes it, he gets an amount of money equal to the value of the variable and the game ends; otherwise he is presented the next random variable. The gambler's aim is to maximize the expected amount of money he receives. This setting is a classic example of an optimal stopping problem and is known as the Prophet Secretary Problem.
For this problem we analyze two different threshold strategies: one non-adaptive strategy, where the thresholds are chosen before any of the variables arrive and before the order is known, and one adaptive strategy, where thresholds are chosen based on the variables that have been rejected so far.
For the non-adaptive strategy we show that the gambler can, in expectation, get at least \(1-1/e\) times the expected maximum value among the random variables (i.e. the value a prophet would receive). We show that no non-adaptive strategy can do better, even for i.i.d. random variables. For the adaptive strategy, we analyze only the i.i.d. setting and show that this improves the gambler's expectation to \(0.745\) times the expected maximum. This result settles a conjecture by Kertz from 1986.
Finally, we discuss the implications for posted price mechanisms, a commonly used mechanism for selling goods. We show that the results for threshold strategies imply similar bounds for comparisons of posted price mechanisms with optimal auctions.
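A simplified variant of the non-adaptive strategy, with a single fixed threshold rather than one threshold per arrival position, can be simulated in a few lines. The sketch below is ours and purely illustrative; the names and the Monte-Carlo setup are not from the talk.

```python
import random

def threshold_gain(samplers, T, trials=10000, seed=0):
    """Monte-Carlo estimate of a single-threshold gambler's expected
    gain: the variables arrive in uniformly random order and the first
    value >= T is taken (0 if none qualifies). `samplers` is a list of
    zero-argument functions, each returning one draw of its variable."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = [s() for s in samplers]
        rng.shuffle(vals)                      # random arrival order
        total += next((v for v in vals if v >= T), 0.0)
    return total / trials
```

Comparing such estimates against the expected maximum of the draws gives an empirical feel for the \(1-1/e\) guarantee discussed above.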

October 10, 2017 
Dmitry FeichtnerKozlov (University of Bremen)
Cheeger Graphs
Motivated by higher-dimensional coboundary expansion,
we define a certain family of graphs which are called Cheeger graphs.
Little is known at present about these graphs. We will make a short
selfcontained introduction into the subject and formulate
several open purely graphtheoretic problems.

August 29, 2017 
Antonios Antoniadis (Max-Planck-Institut für Informatik)
A Tight Lower Bound for Online Convex Optimization with Switching Costs
We investigate online convex optimization with switching costs (OCO; Lin et al., INFOCOM 2011), a natural online problem arising when right-sizing data centers: A server initially located at \(p_0\) on the real line is presented with an online sequence of nonnegative convex functions \(f_1,f_2,\ldots ,f_n: \mathbb{R}\rightarrow \mathbb{R}^+\). In response to each function \(f_i\), the server moves to a new position \(p_i\) on the real line, resulting in cost \(|p_i - p_{i-1}| + f_i(p_i)\). The total cost is the sum of the costs of all steps. One is interested in designing competitive algorithms.
In this talk, we solve the problem in the classical sense: We give a lower bound of \(2\) on the competitive ratio of any, possibly randomized, online algorithm, matching the competitive ratio of previously known deterministic online algorithms (Andrew et al., COLT 2013/arXiv 2015; Bansal et al., APPROX 2015). It had previously been conjectured that \((2-\varepsilon)\)-competitive algorithms exist for some \(\varepsilon > 0\) (Bansal et al., APPROX 2015).
Joint work with Kevin Schewior, to appear in WAOA 2017.
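The cost model can be stated in a few lines of code. The helper below is an illustrative sketch (ours, not from the paper): given a starting position, a sequence of chosen positions, and the hitting-cost functions, it sums movement cost \(|p_i - p_{i-1}|\) plus hitting cost \(f_i(p_i)\) per step.

```python
def oco_cost(p0, positions, fs):
    """Total cost of a position sequence in online convex optimization
    with switching costs on the real line: each step pays the movement
    distance plus the current function evaluated at the new position."""
    total, prev = 0.0, p0
    for p, f in zip(positions, fs):
        total += abs(p - prev) + f(p)
        prev = p
    return total
```

An online algorithm must pick each \(p_i\) knowing only \(f_1,\ldots,f_i\); the competitive ratio compares its `oco_cost` against the best sequence chosen in hindsight.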

August 29, 2017 
Peter Kling (Universität Hamburg)
Multiprocessor Scheduling with a Shareable Resource
We consider \(n\) jobs that have to be scheduled on \(m\) processors. The processors
share a common, continuously divisible resource. In each discrete time step, the
scheduler can change the assignment of the resource to the processors. Jobs come
with a processing volume and a resource requirement. When a processor currently
has a share \(R\) of the resource and works on a job with processing volume \(p\)
and resource requirement \(r\), exactly \(\min\{R/r, 1\}\) units of the job's
processing volume are processed. That is, doubling the resource also doubles the
speed a job is processed with, but this speedup is capped. Migration and
preemption are not allowed, and the goal is to minimize the makespan.
In this talk, I present a \((1 + O(1/m))\)-approximation algorithm if all jobs have
unit work and a \((2 + O(1/m))\)-approximation algorithm for jobs with arbitrary
workload. The algorithms employ a sliding window technique and are quite simple
and intuitive. This problem generalizes bin packing with cardinality constraints
and splittable items, for which our results improve the approximation ratio
achieved by a fast and simple algorithm from previously \(2 - O(1/m)\) to \(1
+ O(1/m)\).
This is joint work with Alexander Mäcker, Sören Riechers, and Alexander
Skopalik.
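The capped speedup rule \(\min\{R/r, 1\}\) is easy to simulate per time step. The sketch below is ours and purely illustrative (names and interface are assumptions): it counts the discrete steps until one job finishes under a given sequence of resource shares.

```python
def steps_to_finish(volume, r, shares):
    """Number of discrete time steps until a job with processing volume
    `volume` and resource requirement `r` completes, when shares[t] is
    the resource R assigned in step t. Each step processes min(R/r, 1)
    units of volume, i.e. extra resource beyond r brings no speedup."""
    done = 0.0
    for t, R in enumerate(shares):
        done += min(R / r, 1.0)
        if done >= volume:
            return t + 1
    return None  # not finished within the given horizon
```

The cap is visible in the second test: assigning twice the requirement processes no faster than assigning exactly the requirement.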

June 28, 2017 
Jan Hackfeld (TU Berlin)
Spaceoptimal collaborative graph exploration
We consider the problem of exploring an undirected and initially unknown graph by a set of cooperative agents. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that for one agent \(\Theta(\log n)\) bits of memory are necessary and sufficient to explore any graph with at most \(n\) vertices. We show that \(O(\log \log n)\) cooperative agents with only constant memory can explore any graph with at most \(n\) vertices in polynomial time. We match this result with a lower bound showing that the number of agents needed for exploring any graph with at most \(n\) vertices is already \(\Omega(\log \log n)\) when we allow each agent to have at most \(O(\log^{1-\varepsilon} n)\) bits of memory for some \(\varepsilon > 0\). In particular, this implies that \(\Omega(\log \log n)\) agents with constant memory are necessary for exploration.

June 21, 2017 
Ilya Chernykh (Novosibirsk State University)
Open Problems in Open Shop Routing

June 20, 2017 
Markus Jablonka (University of Bremen)
Oblivious Deterministic Maximization of Submodular Functions
Submodular functions are set functions that capture the natural
"diminishing returns" property and can be seen as a discrete analogue of convex
functions. The problem of maximizing a submodular function is as
follows: Given a set \(E\) of elements and a set function \(f: 2^E\rightarrow Q\)
(typically through an oracle), we want to determine a subset \(S\) of \(E\)
with maximum value \(f(S)\). This is an NP-hard problem for which several
approximation algorithms are known. These algorithms are either
randomized or adaptive, in the sense that they choose which function
values to look up after having already seen some others, and in this
way adapt to the function. In this talk we discuss a nonadaptive,
or oblivious, deterministic approach. All sets for which the value
of the function will be queried will be chosen from the beginning as
a list. The set of this list with the highest function value will
then be returned as an approximation of the problem. The focus of
the talk will be choosing such a list and determining the quality of
this solution.
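The oblivious scheme itself is a one-liner once the query list is fixed; the interesting work is choosing the list. The sketch below (ours, purely illustrative) evaluates a fixed list against a value oracle, here a small coverage function, which is a classic example of a submodular function.

```python
def oblivious_maximize(f, query_sets):
    """Oblivious maximization: the sets to query are fixed in advance,
    independent of the answers; return the best set seen."""
    return max(query_sets, key=lambda S: f(S))

# A toy coverage oracle: element e covers the items in ground[e].
ground = [{1, 2}, {2, 3}, {3, 4}]

def cover(S):
    covered = set()
    for e in S:
        covered |= ground[e]
    return len(covered)
```

Note that `oblivious_maximize` never chooses its next query based on earlier answers, which is exactly what distinguishes it from adaptive greedy algorithms.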

May 31, 2017 
Tim Güneysu (University of Bremen)
Challenges of Modern Cryptography

May 24, 2017 
Franziska Eberle (University of Bremen)

May 10, 2017 
Syamantak Das (University of Bremen)
Online and Approximation Algorithms for Scheduling and Network Design
This will be a talk covering my general research interests, approximation and online algorithms for scheduling and network design.
First, I will speak about online scheduling problems. We propose a new model on top of the classical competitive analysis that we believe is useful in dealing with uncertainty in the input data. We show that one can design simple online algorithms with good guarantees if one is allowed to reject, that is, not process, a tiny fraction of the input. We propose this in light of the fact that such an algorithm is not possible for optimizing certain classical scheduling objectives like makespan minimization or minimizing the (weighted) maximum flow time.
Joint work with: Anamitra Roy Choudhury, Naveen Garg and Amit Kumar.
In the second part, I shall talk about network design problems. We give improved approximation guarantees for a classical network design problem called the Group Steiner Tree, when the input graph has a special structure. In particular, we prove a generic reduction of a problem instance on such graphs to a problem instance on trees that allows us to carry out the desired improvement. The reduction may be of independent interest.
Joint work with: Parinya Chalermsook, Bundit Laekhanukit and Daniel Vaz.

May 3, 2017 
Nicole Megow (University of Bremen)
Online Resource Minimization
Minimizing resource usage is a key to achieving economic, environmental, or societal goals. We consider the fundamental problem of minimizing the number of machines that is necessary for feasibly scheduling preemptive jobs with release dates and hard deadlines. We consider the online variant of this problem in which every job becomes known to the online algorithm only at its release date. We present new competitive algorithms for this problem. We also discuss the power of job migration and show somewhat surprising strong lower bounds on the best possible performance guarantee.
Joint work with: Lin Chen and Kevin Schewior.

March 1, 2017 
Marek Adamczyk (University of Bremen)
When the Optimum is also Blind: A New Perspective on Universal Optimization
Consider the following variant of the set cover problem. We are given a universe \(U=\{1,...,n\}\) and a collection of subsets \(\mathcal{C} = \{S_1,...,S_m\}\) where \(S_i \subseteq U\). For every element \(u \in U\) we need to find a set \(\phi(u) \in \mathcal C\) such that \(u\in \phi(u)\). Once we construct and fix the mapping \(\phi:U \mapsto \mathcal C\) a subset \(X \subseteq U\) of the universe is revealed, and we need to cover all elements from \(X\) with exactly \(\phi(X):=\bigcup_{u\in X} \phi(u)\). The goal is to find a mapping such that the cover \(\phi(X)\) is as cheap as possible.
This is an example of a universal problem where the solution has to be created before the actual instance to deal with is revealed.
Such problems appear naturally in settings where we need to optimize under uncertainty and it may actually be too expensive to start computing a good solution only once the input is revealed. A rich body of work has been devoted to investigating such problems under the regime of worst-case analysis, i.e., we measure how good the solution is by looking at the worst-case ratio: universal solution for a given instance vs. optimum solution for the same instance.
As the universal solution is significantly more constrained, such a worst-case ratio is typically quite large. One way to make the viewpoint less vulnerable to such extreme worst cases is to assume that the instance for which we will have to create a solution is drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. Here the bounds obtained are indeed smaller than in the worst-case ratio.
But even in this case we still compare apples to oranges, as no universal solution is able to match the optimum solution for every possible instance. What if we compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do?
In the talk we will see that under this viewpoint, but still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For the Set Cover problem, on which the main ideas and characteristics of the model will be presented, we will
see how one can obtain \(H_n\) approximation, which matches the approximation ratio from the classic deterministic setup.
Moreover, this result holds for all possible probability distributions over \(U\) that have a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. The result is based on rounding a proper configuration IP that captures the optimal universal solution, and using tools from submodular optimization.
Joint work with: Fabrizio Grandoni, Stefano Leonardi and Michał Włodarczyk.
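The universal mechanism in the abstract is easy to state in code: the mapping \(\phi\) is fixed up front, and once a subset \(X\) is revealed, the solution used is \(\phi(X)=\bigcup_{u\in X}\phi(u)\). The sketch below (ours, illustrative only) computes the resulting cost.

```python
def universal_cover_cost(phi, cost, X):
    """Cost of a universal set-cover solution on a revealed subset X.
    phi maps each universe element u to the index of its chosen set
    (with u in that set); cost[s] is the cost of set s. The sets paid
    for are exactly the phi-images of the elements of X."""
    used = {phi[u] for u in X}
    return sum(cost[s] for s in used)
```

Note that two elements mapped to the same set are covered at a single cost, which is why a good \(\phi\) bundles elements into shared sets.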

February 14, 2017 
Lukas Nölke (University of Paderborn)
Recent Developments in Nowhere-Zero Flow Problems

December 21, 2016 
Felix Fischer (University of Glasgow)
Truthful Outcomes from Non-Truthful Position Auctions
The area of mechanism design is concerned with the development of algorithms that produce good outcomes given inputs from self-interested individuals. Much of mechanism design theory is dominated by the Vickrey-Clarke-Groves (VCG) mechanism, which makes it optimal for each individual to provide its true input. In the real world the VCG mechanism is used with surprising rarity, and some reasons for this are known.
We identify another property of the mechanism that may be problematic in practice, a relative lack of robustness to inaccuracies in the choice of its parameters.
Joint work with Paul Dütting and David C. Parkes.

December 7, 2016 
Andreas Tönnis (University of Bonn)
Online Algorithms with Random Order

November 29, 2016 
Syamantak Das (University of Bremen)
A Colorful Routing Problem on Tree Networks
