Reading Group

  • Time and Place: Unless otherwise noted, we meet on Fridays at 16:15h in INJ 114. 
  • Mailing List: To subscribe to this list, send an email (can be empty) to:


October 27th, 2017, 15:45h INJ 114

Manuel Aprile, “2-level polytopes and their extension complexity”

Abstract: I will introduce 2-level polytopes and the wide open, fascinating question of their extension complexity. Then I will present the only known lower bound on the extension complexity of a 2-level polytope, more precisely the stable set polytope of some bipartite graphs coming from finite projective planes (joint work with Faenza, Fiorini, Huynh, Macchia). I will conclude with some open questions and with a surprising connection to the class covering problem, introduced by Volker Kaibel.

October 20th, 2017, 15:45h INJ 114

Amir Zandieh, “On the Difference Between Closest, Furthest, and Orthogonal Pairs: Nearly-Linear vs Barely-Subquadratic Complexity in Computational Geometry”

Abstract: Some point location problems for n points in d-dimensional Euclidean space (and \ell_p generally) have only barely-subquadratic running-time solutions for d=\omega(1). For example, in the Euclidean metric, the known algorithms for finding a Furthest Pair (the diameter of the point set) are only barely-subquadratic, requiring \Omega(n^{2−1/\Theta(d)}) time. This paper gives a novel exact and deterministic self-reduction for the Orthogonal Vectors problem on n vectors in \{ 0, 1 \}^d to n vectors in Z^{O(\log d)} that runs in 2^{o(d)} time. As a consequence, barely-subquadratic problems such as Euclidean diameter, Euclidean bichromatic closest pair, ray shooting, and incidence detection do not have O(n^{2−\epsilon})-time algorithms (in Turing models of computation) for dimensionality d = \omega(\log \log n)^2, unless the popular Orthogonal Vectors Conjecture and the Strong Exponential Time Hypothesis are false. That is, while the poly-log-log-dimensional case of Closest Pair is solvable in n^{1+o(1)} time, the poly-log-log-dimensional case of Furthest Pair can encode difficult large-dimensional problems conjectured to require n^{2−o(1)} time.


September 29th, 2017, 15:45h INJ 114

Ashish Chiplunkar, “The Family of Prophet Inequalities”

Abstract: The prophet inequality problem concerns choosing one out of an online sequence of samples drawn from a known sequence of probability distributions, with the objective of maximizing the value of the chosen sample. Ever since Krengel and Sucheston formulated this problem in 1977, a plethora of variants have been studied. I plan to discuss the basic problem in detail and present a survey of the variants I find most interesting.
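To make the basic guarantee concrete, here is a small toy simulation of my own (not from the talk; the uniform distributions and all parameters are invented for the demo): the single-threshold rule that accepts the first value exceeding half the expected maximum is known to recover at least half of the prophet's expected value.

```python
import random

def prophet_vs_threshold(n, rounds, rng):
    """Compare the prophet (who sees all n values and takes the max)
    with the simple stopping rule 'accept the first value >= E[max]/2'.
    Values are iid U(0,1), so E[max] = n/(n+1).  The threshold rule is
    known to guarantee half of the prophet's expected value."""
    threshold = (n / (n + 1)) / 2
    alg_total = prophet_total = 0.0
    for _ in range(rounds):
        values = [rng.random() for _ in range(n)]
        prophet_total += max(values)
        # stop at the first value clearing the threshold; gain 0 otherwise
        alg_total += next((v for v in values if v >= threshold), 0.0)
    return alg_total / rounds, prophet_total / rounds

alg, prophet = prophet_vs_threshold(n=5, rounds=20000, rng=random.Random(42))
```

Empirically the ratio is well above the guaranteed factor 1/2, which is exactly what the tightness examples for the prophet inequality rule out in the worst case.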

May 19th, 2017, 16:15h INJ 114

Aida Mousavi, “Testing Cluster Structure of Graphs”

Abstract: We study the problem of recognizing the cluster structure of a graph in the framework of property testing in the bounded-degree model. Given a parameter ε, a d-bounded-degree graph is defined to be (k, φ)-clusterable if it can be partitioned into no more than k parts such that the (inner) conductance of the induced subgraph on each part is at least φ and the (outer) conductance of each part is at most c_{d,k} ε^4 φ^2, where c_{d,k} depends only on d and k. Our main result is a sublinear algorithm with running time ~O(√n ⋅ poly(φ, k, 1/ε)) that takes as input a graph with maximum degree bounded by d and parameters k, φ, ε, and with probability at least 2/3 accepts the graph if it is (k, φ)-clusterable and rejects the graph if it is ε-far from (k, φ*)-clusterable for φ* = c′_{d,k} φ^2 ε^4 / log n, where c′_{d,k} depends only on d and k. By the lower bound of Ω(√n) on the number of queries needed for testing graph expansion, which corresponds to k = 1 in our problem, our algorithm is asymptotically optimal up to polylogarithmic factors.

May 12th, 2017, 14:45h INJ 114

Olivier Leveque, “From Wigner to wireless: three random matrices”

Abstract: The focus of this talk is on the spectral properties of large random matrices that arise in the context of wireless communication networks. Such n x n matrices are typically generated from O(n) independent random variables, departing from traditional random matrix models with O(n^2) independent entries. New tools are required for the spectral analysis of these matrices.

May 5th, 2017, 16:00h INJ 114

Jakub Tarnawski, “Matching is in Quasi-NC”

Abstract: We show that the perfect matching problem in general graphs is in Quasi-NC.  That is, we give a deterministic parallel algorithm which runs in O(log^3 n) time on n^{O(\log^2 n)} processors.  The result is obtained by a derandomization of the Isolation Lemma for perfect matchings, which was introduced in the classic paper by Mulmuley, Vazirani and Vazirani [1987] to obtain a Randomized NC algorithm.

Our proof extends the framework of Fenner, Gurjar and Thierauf [2016], who proved the analogous result in the special case of bipartite graphs (this result appeared at Theory Coffee 30.09.2016). Compared to that setting, several new ingredients are needed due to the significantly more complex structure of perfect matchings in general graphs. In particular, our proof heavily relies on the laminar structure of the faces of the perfect matching polytope. I will give a gentle overview of these ingredients.

April 28th, 2017, 16:00h INJ 114

Martin Jaggi, “A Unified Optimization View on Generalized Matching Pursuit and Frank-Wolfe”

Abstract: Two of the most fundamental prototypes of greedy optimization are the matching pursuit and Frank-Wolfe algorithms. In this paper we take a unified view on both classes of methods, leading to the first explicit convergence rates of matching pursuit methods in an optimization sense, for general sets of atoms. We derive sublinear (1/t) convergence for both classes on general smooth objectives, and linear convergence on strongly convex objectives, as well as a clear correspondence of algorithm variants. Our presented algorithms and rates are affine invariant, and do not need any incoherence or sparsity assumptions.

Joint work with Francesco Locatello, Rajiv Khanna, and Michael Tschannen.
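As a much-simplified illustration of the Frank-Wolfe side of this correspondence, here is a minimal sketch over the probability simplex, where the linear minimization oracle reduces to a coordinate pick; the quadratic objective and the step size 2/(t+2) are standard textbook choices, not taken from the paper.

```python
def frank_wolfe_simplex(grad, x0, steps):
    """Frank-Wolfe over the probability simplex.  The linear
    minimization oracle is trivial here: it returns the vertex (basis
    vector) whose gradient entry is smallest.  The step size 2/(t+2)
    gives the standard O(1/t) sublinear rate on smooth objectives."""
    x = list(x0)
    for t in range(1, steps + 1):
        g = grad(x)
        s = min(range(len(x)), key=lambda i: g[i])  # LMO over the simplex
        gamma = 2.0 / (t + 2)
        x = [(1.0 - gamma) * xi for xi in x]        # convex combination step
        x[s] += gamma
    return x

# toy objective: f(x) = ||x - c||^2, minimized over the simplex at x = c
c = [0.2, 0.3, 0.5]
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
x = frank_wolfe_simplex(grad, [1.0, 0.0, 0.0], steps=500)
```

Note that the iterate stays feasible by construction, since each step is a convex combination of the current point and a vertex; this is the projection-free property that makes the Frank-Wolfe/matching-pursuit family attractive.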


March 31st, 2017, 16:00h INJ 114

Nicolas Macris, “Bounds for Random Constraint Satisfaction Problems via Spatial Coupling”

Abstract: We report on a novel technique called “spatial coupling” and its application in the analysis of random constraint satisfaction problems (CSPs). Spatial coupling was invented as an engineering construction in the area of error-correcting codes, where it has resulted in efficient capacity-achieving codes for a wide range of channels. However, this technique is not limited to problems in communications, and can be applied in the much broader context of graphical models. We describe here a general methodology for applying spatial coupling to random constraint satisfaction problems and obtain lower bounds for their “coarse” satisfiability threshold. The main idea is to construct a distribution of geometrically structured random K-SAT instances – namely the spatially coupled ensemble – which has the same coarse satisfiability threshold and is at the same time algorithmically easier to solve. Then, by running well-known algorithms on the spatially coupled ensemble, we obtain a lower bound on the coarse satisfiability threshold of the original ensemble.

The method is versatile because one can choose the CSP, and there is a certain amount of freedom in the construction of the spatially coupled ensemble as well as in the choice of the algorithm. In this work we focus on random K-SAT, but we have also checked that the method is successful for Coloring, NAE-SAT and XOR-SAT. We choose Unit Clause propagation as the algorithm analyzed over the spatially coupled instances. For K = 3, for instance, our lower bound is 3.67, which is better than the current bounds in the literature.

March 10th, 2017, 16:00h INJ 114

Amir Zandieh, “An Adaptive Sublinear-Time Block Sparse Fourier Transform”

Abstract: The problem of approximately computing a small number $k$ of dominant Fourier coefficients of a vector of length $n$ quickly, and using few samples in the time domain, is known as the Sparse Fourier Transform (sparse FFT) problem. A long line of work on the sparse FFT has resulted in algorithms with $O(k\log n\log (n/k))$ runtime and $O(k\log n)$ sample complexity. These results are proved using non-adaptive algorithms, and the latter sample complexity result is essentially the best possible under the sparsity assumption alone: it is known that even adaptive algorithms must use $\Omega((k\log(n/k))/\log\log n)$ samples. By adaptive, we mean being able to exploit previous samples in guiding the selection of further samples.

In this work we revisit the sparse FFT problem with the added twist that the sparse coefficients approximately obey a $(k_0,k_1)$-block sparse model. In this model, signal frequencies are clustered in $k_0$ intervals with width $k_1$ in Fourier space, where $k= k_0k_1$ is the total sparsity. Signals arising in applications are often well approximated by this model with $k_0\ll k$.

We give the first sparse FFT algorithm for $(k_0, k_1)$-block sparse signals, with sample complexity $O^*(k_0k_1 + k_0\log(1+ k_0)\log n)$ at constant signal-to-noise ratios and sublinear runtime. A similar sample complexity was previously achieved in the works on model-based compressive sensing using random Gaussian measurements, but with $\Omega(n)$ runtime. To the best of our knowledge, our result is the first sublinear-time algorithm for model-based compressive sensing, and the first sparse FFT result that goes below the $O(k\log n)$ sample complexity bound.

Joint work with Volkan Cevher, Michael Kapralov and Jonathan Scarlett.

March 3rd, 2017, 16:15h INJ 114

Lucas Maystre, “Parameter estimation of models based on Luce’s axiom”

Abstract: Statistical models based on Luce’s choice axiom are widely used to understand and predict outcomes of comparisons between two or more alternatives. As an example, the world chess federation uses one such model to rank chess players worldwide from game outcomes. In this talk, I will present two interesting properties of these models. First, I will show a connection between their maximum-likelihood estimate and the stationary distribution of a particular non-homogeneous Markov chain. This leads to (a) fast spectral estimators and (b) a novel iterative ML algorithm. Second, I will talk about a particular instance of the model that arises when comparisons happen on a network.


Joint work with Matthias Grossglauser, and based on the following papers:

– L. Maystre, M. Grossglauser, Fast and Accurate Inference of Plackett–Luce Models, NIPS 2015.

– L. Maystre, M. Grossglauser, ChoiceRank: Identifying Preferences from Node Traffic in Networks, 2016.
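To make the Markov-chain connection concrete, here is a small self-contained illustration of my own (a rank-centrality-style estimator in the spirit of the spectral methods mentioned above; the strengths and sample sizes below are invented): pairwise comparisons define a chain that moves probability mass from losers toward winners, and its stationary distribution recovers the items' relative strengths.

```python
import random

def rank_centrality(n, comparisons, iters=2000):
    """Estimate Luce-model strengths from pairwise comparisons via the
    stationary distribution of a Markov chain that moves from the loser
    toward the winner of each comparison.  The stationary distribution
    is found here by plain power iteration."""
    wins = [[0.0] * n for _ in range(n)]        # wins[i][j]: times j beat i
    for winner, loser in comparisons:
        wins[loser][winner] += 1.0
    scale = float(max(1.0, max(sum(row) for row in wins)))
    P = [[wins[i][j] / scale for j in range(n)] for i in range(n)]
    for i in range(n):                          # lazy self-loops keep rows stochastic
        P[i][i] = 1.0 - sum(P[i])
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

rng = random.Random(0)
true_strength = [1.0, 2.0, 4.0]                 # item 2 is strongest
comparisons = []
for _ in range(3000):
    i, j = rng.sample(range(3), 2)
    p_i = true_strength[i] / (true_strength[i] + true_strength[j])  # Luce rule
    comparisons.append((i, j) if rng.random() < p_i else (j, i))
pi = rank_centrality(3, comparisons)
```

With enough comparisons, the stationary mass orders the items by their true strengths, which is the intuition behind the fast spectral estimators in the talk.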

February 24th, 2017, 16:15h INJ 114

Damian Straszak, “Real Stable Polynomials and Computing Partition Functions”

Abstract: In this talk I will present some new results on counting and optimization problems in a fairly general model where an oracle access to a polynomial is given and the goal is to compute a certain sum of its coefficients or find the maximum coefficient subject to constraints. We introduce a new convex programming relaxation to solve these problems and show that it provides decent estimates whenever the input polynomial is real stable and the constraints are matroidal.

Based on joint work with Nisheeth K. Vishnoi.

November 25th, 2016, 16:15h INJ 114

Ilija Bogunovic, “A Unified Approach to Bayesian Optimization and Level-Set Estimation with Gaussian Processes”

Abstract: Bayesian optimization (BO) is a powerful tool for sequentially optimizing black-box functions that are expensive to evaluate, and has extensive applications including automatic hyperparameter tuning, environmental monitoring, and robotics.  The problem of level-set estimation (LSE) with Gaussian processes is closely related; instead of performing optimization, one seeks to classify the whole domain according to whether the function lies above or below a given threshold, which is also of direct interest in applications.

In this talk, we present a new algorithm, truncated variance reduction (TruVaR) that addresses Bayesian optimization and level-set estimation in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds.  TruVaR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs, non-uniform noise, and multi-task settings.  We provide a general theoretical guarantee for TruVaR covering these phenomena, and use it to obtain regret bounds for several specific settings. 

November 18th, 2016, 15:45h INJ 114

Sangxia Huang, “Noisy population recovery in polynomial time”

Abstract: In the noisy population recovery problem, the goal is to learn an unknown distribution f on binary strings of length n from noisy samples. A noisy sample with parameter 0<=mu<=1 is generated by taking a sample from f and independently flipping each bit of the sample with probability (1-mu)/2. Let k be the size of the support of the distribution; the goal is to estimate the probability of any string to within some given additive error epsilon.

In this talk, I describe an algorithm that, for each mu>0, provides the desired estimate of the distribution in time bounded by a polynomial in k, n and 1/epsilon, improving upon the previous best result of poly(k^{log log k}, n, 1/epsilon) due to Lovett and Zhang.

Based on the paper “Noisy population recovery in polynomial time” by Anindya De, Michael Saks, Sijian Tang.

November 11th, 2016, 16:15h INJ 114

Volkan Cevher, “Time-Data Trade-offs by Aggressive Smoothing”

Abstract: We propose a tradeoff between sample complexity and computation time that applies to statistical estimators based on convex optimization. As the amount of data increases, we can smooth optimization problems more and more aggressively to achieve accurate estimates more quickly. This talk provides theoretical and experimental evidence of this tradeoff for a class of regularized linear inverse problems. 


October 28th, 2016, 16:15h INJ 114

Christos Kalaitzis, “Scheduling jobs with uniform Smith ratios to minimize the weighted sum of completion times”

Abstract: We consider the problem of scheduling jobs on unrelated machines in order to minimize the weighted sum of completion times. For this problem, the best known approximation guarantee is $3/2-\epsilon$, for some small $\epsilon$, due to Bansal et al. In this talk, we will focus on the specific case of the problem, where the involved jobs have uniform weight/size ratios (also known as Smith ratios). For this restricted version of the problem, we show how to achieve an approximation guarantee of 1.21. We do this by providing a randomized rounding scheme, which rounds solutions to the Configuration-LP relaxation of the problem. Our main technical contribution is analyzing this rounding scheme, by comparing the cost of the output distribution of this randomized rounding to the cost of a specific class of worst-case output distributions. 

Joint work with Ola Svensson and Jakub Tarnawski. 


October 21st, 2016, 16:15h INJ 114

Ameya Velingker, “Bridging the Capacity Gap Between Interactive and One-Way Communication”

Abstract: We study the communication rate of coding schemes for interactive communication that transform any two-party interactive protocol into a protocol that is robust to noise.

Recently, [Haeupler ’14] showed that if an eps > 0 fraction of transmissions are corrupted, adversarially or randomly, then it is possible to achieve a communication rate of 1 - sqrt(eps), and went on to conjecture that this rate is optimal for general input protocols. This stands in contrast to the classical setting of one-way communication, in which error-correcting codes are known to achieve an optimal communication rate of 1 - Theta(H(eps)) = 1 - Theta(eps*log(1/eps)).

In this work, we show that the quadratically smaller rate loss of the one-way setting can also be achieved in interactive coding schemes for a very natural class of input protocols. We introduce the notion of average message length, or the average number of bits a party sends before receiving a reply, as a natural parameter for measuring the level of interactivity in a protocol. Moreover, we show that any protocol with average message length l = Omega(poly(1/eps)) can be simulated, with high probability, by a longer protocol with an optimal communication rate of 1-Theta(H(eps)) over an oblivious adversarial channel with error fraction eps.

This shows that the capacity gap between one-way and interactive communication can be bridged even for very small (constant in eps) average message lengths, which is very likely to be found in many applications.

Joint work with Bernhard Haeupler. 


October 7th, 2016, 16:15h INJ 114

Abbas Bazzi, “On independent sets, 2-to-2 Games and Grassmann Graphs”

Abstract: I will be presenting some of the ideas of the paper “On independent sets, 2-to-2 Games and Grassmann Graphs” by Subhash Khot, Dor Minzer and Muli Safra. 

The main contribution of the paper is a combinatorial hypothesis, which if correct, would imply that the Vertex Cover problem is NP-hard to approximate within a factor of sqrt(2)-epsilon, improving upon the 1.36-hardness result of Dinur and Safra. 

Since the reduction yielding the hardness of Vertex Cover (assuming this hypothesis) is along the same lines as previous reductions, I will only be presenting this new hypothesis and giving the authors’ intuition behind suggesting it.



September 30, 2016, 15:15h INJ 114

Jakub Tarnawski, “Bipartite Perfect Matching is in quasi-NC”

Abstract: Perfect Matching is a fundamental polytime-solvable problem. It has been known since the 1970s that it is efficiently parallelizable if we allow randomization (i.e., it is in the class RNC). However, it is still not known whether the randomization is truly necessary (i.e., whether the problem is in the class NC).

One way to obtain an RNC-algorithm involves finding a set of edge weights such that the minimum-weight perfect matching is unique (provided a perfect matching exists). This can be done using an elegant tool called the Isolation Lemma, which allows us to pick the weights randomly. I will show how to almost derandomize this lemma in the bipartite case, obtaining an algorithm in quasi-NC. Since the result is very elegant, I should be able to present the full proof.

Based on the paper “Bipartite Perfect Matching is in quasi-NC” by Fenner, Gurjar and Thierauf (STOC 2016).
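To illustrate the lemma itself (a toy experiment of mine, not part of the paper's derandomization): with weights drawn uniformly from {1, ..., 2m} on m edges, the minimum-weight perfect matching is unique with probability at least 1/2, and in practice far more often.

```python
import itertools, random

def unique_min_rate(n, edges, trials=200, seed=1):
    """Empirical check of the Isolation Lemma on a small bipartite graph
    (edges are pairs over {0..n-1} x {0..n-1}).  Each trial draws
    independent uniform weights from {1, ..., 2|E|}; the lemma promises
    a unique minimum-weight perfect matching with probability >= 1/2."""
    rng = random.Random(seed)
    edge_list = sorted(edges)
    # all perfect matchings = permutations p with every edge (i, p(i)) present
    matchings = [p for p in itertools.permutations(range(n))
                 if all((i, p[i]) in edges for i in range(n))]
    unique = 0
    for _ in range(trials):
        w = {e: rng.randint(1, 2 * len(edge_list)) for e in edge_list}
        costs = [sum(w[(i, p[i])] for i in range(n)) for p in matchings]
        if costs.count(min(costs)) == 1:
            unique += 1
    return unique / trials

# complete bipartite graph K_{3,3}: six perfect matchings compete
rate = unique_min_rate(3, {(i, j) for i in range(3) for j in range(3)})
```

Derandomizing the lemma means replacing these random weights by a small explicit family of weight assignments, which is exactly what the quasi-NC results above accomplish.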


September 23, 2016, 15:15h INJ 114

Justin Ward, “A New Framework for Distributed Submodular Maximization”

Abstract: A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from a high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint.

In this talk, I will present the main algorithmic ideas and meta-theorems behind both the 2-round and multi-round frameworks for monotone maximization, and give a brief sketch of some of the generalizations to other problems.  This is based on joint work with Rafael da Ponte Barbosa, Alina Ene, and Huy Le Nguyen (to appear in FOCS ’16).

September 2, 2016, 15:15h INJ 114

Ashkan Norouzi, “(Even) Smarter Tools for (Citi)Bike Sharing”

Abstract: As bike share systems grow in urban areas, many researchers have focused on improving operations and addressing new challenges for these systems. One challenge is to counter non-symmetric demand by redistributing bikes. This paper addresses the optimization problems that arise from these rebalancing efforts. In particular, we expand upon previous research for overnight rebalancing and introduce new optimization problems for midday rebalancing. We report the impact our methods have on New York City’s bike share system. 

August 12, 2016, 15:15h INJ 114

Slobodan Mitrovic, “Separated Sparsity in Nearly-Linear Time”

Abstract: The goal in compressive sensing is to reconstruct a sparse signal from a small number of linear measurements. While sparsity captures a wide range of structures in high-dimensional data, it is possible to reduce the number of measurements even further by utilizing more detailed signal models. One such signal model is separated sparsity, which has been proposed as a model of data arising from neuronal spike trains.

Prior work shows that separated sparsity can lead to a provably smaller sample complexity in compressive sensing. However, this improvement comes at a cost: the best known algorithm is based on dynamic programming and has a time complexity of $O(n^2)$, which quickly becomes infeasible for large data sets. Moreover, the slow running time is in stark contrast to the nearly-linear running time for compressive sensing with “standard” sparsity.

We address this issue and give a new algorithm for separated sparsity that achieves the best of both worlds: it requires only nearly-linear running time while achieving the same improved sample complexity as prior work. Our approach is based on a linear programming (LP) formulation, in which we utilize both primal and dual structure to design a fast algorithm. Interestingly, we can solve the LP faster than the natural dynamic programming approach to this problem.

This is joint work with A. Madry and L. Schmidt.


July 14, 2016, 15:15h INJ 114

Adam Polak, “Why is it hard to compute edit distance in strongly subquadratic time?”

Abstract: The edit distance is a well known similarity measure between two strings. It can be easily computed in quadratic time using a standard dynamic programming approach taught in basic algorithms courses. Despite it being extensively studied, both from the theoretical and practical perspective, there is still no known strongly subquadratic time algorithm for the problem. Recently, some explanation for this situation was discovered.

During this talk I will prove that any substantial improvement over the quadratic-time algorithm for the edit distance would imply a substantial improvement over the O(2^N)-time algorithm for N-variable CNF-SAT, and thus refute the Strong Exponential Time Hypothesis.

The talk is based on the paper “Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)” by Arturs Backurs and Piotr Indyk.
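For reference, this is the textbook quadratic-time dynamic program mentioned in the abstract, whose exponent the hardness result shows cannot be substantially improved under SETH:

```python
def edit_distance(a, b):
    """Standard O(|a|*|b|) dynamic program for edit distance
    (insertions, deletions, substitutions, unit cost each).
    Only two rows of the DP table are kept at a time."""
    n, m = len(a), len(b)
    prev = list(range(m + 1))          # distances from the empty prefix of a
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            cur[j] = min(prev[j] + 1,                           # delete a[i-1]
                         cur[j - 1] + 1,                        # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute
        prev = cur
    return prev[m]
```

For example, `edit_distance("kitten", "sitting")` evaluates to 3 (two substitutions and one insertion).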

June 10, 2016, 16:15h INJ 114

Stephen Chestnut (ETH Zurich), “L2 Heavy hitters in optimal space and time”

Abstract: The task of finding heavy hitters is one of the best-known and most well-studied problems in the area of data streams. In a sense, the strongest guarantee available is the L2 guarantee, which requires finding all items that occur at least eps*||f||_2 times in the stream, where the i-th coordinate of the vector f is the number of occurrences of i in the stream. In this talk I will explain a new algorithm that allows one to find L2 heavy hitters in O(log n) bits of memory and O(1) update time in insertion-only streams for constant eps. This is optimal, and it beats the previous best algorithm, which used O(log n loglog n) bits of memory. The improvements rely on a deeper understanding of the AMS sketch (Alon, Matias, and Szegedy, STOC ’96) and similar sketches, and draw on the theory of Gaussian processes.


This talk is based on joint work with Vladimir Braverman, Nikita Ivkin, Jelani Nelson, Zhengyu Wang, and David P. Woodruff in arXiv:1603.00759.
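As background, here is a compact CountSketch — the classic signed-counter sketch (Charikar, Chen and Farach-Colton) whose L2-type guarantee the algorithm above improves on — not the new algorithm itself; the table sizes and the toy stream are arbitrary demo choices.

```python
import random, statistics

class CountSketch:
    """Classic CountSketch: estimates every item's frequency up to an
    additive error proportional to the L2 norm of the remaining
    frequency vector, via random signs and a median over rows."""
    def __init__(self, rows=5, width=64, seed=0):
        rng = random.Random(seed)
        self.salts = [(rng.getrandbits(32), rng.getrandbits(32))
                      for _ in range(rows)]
        self.width = width
        self.table = [[0] * width for _ in range(rows)]

    def _cells(self, item):
        for r, (bucket_salt, sign_salt) in enumerate(self.salts):
            bucket = hash((bucket_salt, item)) % self.width
            sign = 1 if hash((sign_salt, item)) & 1 else -1
            yield r, bucket, sign

    def add(self, item):
        for r, bucket, sign in self._cells(item):
            self.table[r][bucket] += sign

    def estimate(self, item):
        # median across rows suppresses buckets polluted by collisions
        return statistics.median(sign * self.table[r][bucket]
                                 for r, bucket, sign in self._cells(item))

sketch = CountSketch()
stream = ['heavy'] * 1000 + ['light%d' % i for i in range(200)]
for item in stream:
    sketch.add(item)
```

After processing the stream, `sketch.estimate('heavy')` lands close to the true count 1000 even though the table is far smaller than the number of distinct items.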


June 3, 2016, 16:15h INJ 114

Sangxia Huang, “Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning”

Abstract: In the problem of parity learning, an unknown n-bit string x is chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), …, where each a_i is a uniformly sampled n-bit string and b_i is the dot product of a_i and x, modulo 2. We show that any algorithm for parity learning that uses less than n^2/25 bits of memory requires an exponential number of samples.

The talk is based on a paper by Ran Raz.
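For contrast with the lower bound, here is the memory-rich learner (a standard baseline, not from the paper): storing n linear equations over GF(2) — roughly n^2 bits — Gaussian elimination recovers x from only O(n) samples.

```python
import random

def parity_stream(x, n, rng):
    """Yield samples (a_i, b_i): a_i uniform in {0,1}^n (as an int
    bitmask), b_i = <a_i, x> mod 2."""
    while True:
        a = rng.getrandbits(n)
        yield a, bin(a & x).count("1") % 2

def learn_parity(n, stream):
    """Recover x by Gaussian elimination over GF(2).  This learner
    keeps up to n reduced equations (~n^2 bits of memory), sitting at
    the memory-rich end of the trade-off the talk discusses."""
    pivots = {}                                  # pivot bit -> (mask, parity)
    for a, b in stream:
        for p in sorted(pivots, reverse=True):   # reduce by known rows
            if (a >> p) & 1:
                a ^= pivots[p][0]
                b ^= pivots[p][1]
        if a:                                    # new independent equation
            pivots[a.bit_length() - 1] = (a, b)
        if len(pivots) == n:                     # full rank reached
            break
    x = 0
    for p in sorted(pivots):                     # back-substitute, low bits first
        a, b = pivots[p]
        low = a & ((1 << p) - 1)
        x |= (b ^ (bin(low & x).count("1") % 2)) << p
    return x

rng = random.Random(7)
n = 32
secret = rng.getrandbits(n) | 1
recovered = learn_parity(n, parity_stream(secret, n, rng))
```

The theorem says this n^2-bit budget is essentially necessary: below n^2/25 bits of memory, no algorithm can get away with polynomially many samples.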


May 27, 2016, 16:15h INJ 114

Jonathan Scarlett, “Phase Transitions in Group Testing”

Abstract: We study the fundamental performance limits of the group testing problem, in which one seeks to determine a sparse subset of a set of items that are defective, based on a set of possibly noisy tests. For both noisy/noiseless variants and exact/approximate recovery criteria, we provide exact asymptotic thresholds on the number of tests above which the error probability vanishes, but below which the error probability tends to one (i.e., phase transitions). Our analysis takes a new approach based on thresholding techniques from information theory (e.g., appearing in information-spectrum techniques), departing from existing approaches based on maximum-likelihood decoding and Fano’s inequality.  Finally, we compare our fundamental limits to the performance guarantees of practical recovery algorithms.
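As a noiseless warm-up (a toy demo of my own, not the information-spectrum analysis from the talk): with random pools and the simple COMP decoder, every item appearing in a negative test is ruled out, and O(k log n) tests suffice to isolate the defectives with high probability. All instance sizes below are invented.

```python
import random

def comp_group_testing(n, defectives, num_tests, p, rng):
    """Random-pooling group testing with the COMP decoder.
    Each of num_tests pools contains each item independently with
    probability p; a test is positive iff its pool hits a defective.
    COMP declares 'possibly defective' everything never seen in a
    negative test: it has no false negatives and rare false positives."""
    possible = set(range(n))
    for _ in range(num_tests):
        pool = {i for i in range(n) if rng.random() < p}
        positive = bool(pool & defectives)
        if not positive:
            possible -= pool          # everyone in a negative pool is healthy
    return possible

rng = random.Random(11)
n = 100
defectives = {4, 37, 80}              # k = 3 defective items
possible = comp_group_testing(n, defectives, num_tests=150, p=1 / 3, rng=rng)
```

The phase transitions in the talk pin down exactly how few such tests one can get away with before recovery becomes impossible.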

May 20, 2016, 16:15h INJ 114

Amir Zandieh, “The Restricted Isometry Property of Subsampled Fourier Matrices”

Abstract: A matrix A ∈ C^{q×N} satisfies the restricted isometry property of order k with constant ε if it preserves the ℓ2 norm of all k-sparse vectors up to a factor of 1 ± ε. We prove that a matrix A obtained by randomly sampling q = O(k · log^2 k · log N) rows from an N × N Fourier matrix satisfies the restricted isometry property of order k with a fixed ε with high probability. This improves on the result of Rudelson and Vershynin (Comm. Pure Appl. Math., 2008), its subsequent improvements, and Bourgain (GAFA Seminar Notes, 2014).

The talk is based on a paper by Haviv and Regev (SODA’16).

May 6, 2016, 16:15h INJ 114

Damian Straszak, “On a Natural Dynamics for Linear Programming”

Abstract: In this talk I will introduce a new approach for solving min-cost flow and, more generally, linear programming problems. The method is based on a dynamical system, arising originally from a study of a slime mold – Physarum polycephalum. This dynamical system is described by a system of differential equations, which essentially defines a motion guiding a point (a candidate solution) towards optimality. Such a dynamical system can then be discretized to yield algorithms for problems of interest. I will explain how to analyze such dynamical systems, how to discretize them, and how they are related to interior point methods in linear programming. I will also discuss some related open problems.

April 29, 2016, 16:15h INJ 114

Divesh Aggarwal, “Solving the Closest Vector Problem in 2^n time – The Discrete Gaussian Strikes Again!”

Abstract: In this talk, I will present a $2^{n+o(n)}$-time and space randomized algorithm for solving the Closest Vector Problem on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $4^{n+o(n)}$-time and $2^{n+o(n)}$-space algorithm of Micciancio and Voulgaris.

The main ingredient will be a $2^{n+o(n)}$ time algorithm to solve the problem of discrete Gaussian sampling over lattice shifts, L – t, with very low parameters. This yields an algorithm for approximate CVP with the very good approximation factor $\gamma = 1+2^{-n/\log^2 n}$. 

If time permits, I will talk briefly about how to extend this algorithm to solve exact CVP. 

This is joint work with Daniel Dadush and Noah Stephens-Davidowitz.

April 21, 2016, 16:15h INJ 114

Abbas Bazzi, “Streaming submodular maximization”

Abstract: In the offline version of this problem, we are given an integer k and oracle access to a submodular function f: {0,1}^n \to \R^+, and the goal is to find a subset S of [n] of size at most k with maximum possible value f(S). A simple iterative greedy algorithm due to Nemhauser et al. achieves a (1-1/e)-approximation for this problem, and this is essentially the best approximation ratio that one can hope for in this model. In each iteration of the algorithm, the element with the highest marginal value is picked, and hence it is crucial to have access to all the elements in the ground set before doing any update.

We consider the problem in the streaming model, i.e., we get the elements one at a time and we process them in that order. The goal in this model is to design a streaming algorithm with a good approximation ratio while minimizing:

1. The number of passes that the algorithm needs to make over the data stream.

2. The memory required by the algorithm.

3. The running time of the algorithm in terms of the oracle queries.

In this talk we present a (1/2 – ε)-approximation algorithm that makes only one pass over the data stream, uses O(k log(k)/ε) memory, and runs in O(n log(k)/ε) time.

This talk is based on a recent work by Badanidiyuru, Mirzasoleiman, Karbasi and Krause.
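A minimal sketch of the thresholding idea behind that algorithm (simplified: I assume the maximum singleton value m is known up front, whereas the actual algorithm estimates it on the fly and instantiates guesses lazily; the coverage instance is invented for the demo):

```python
def sieve_streaming(stream, f, k, m, eps=0.2):
    """One-pass thresholding sketch: for each guess v of OPT on a
    geometric grid in [m, k*m], keep a set S_v and add an arriving
    element whenever its marginal value clears (v/2 - f(S_v)) / (k - |S_v|).
    Return the best set over all guesses.  f must be monotone
    submodular; m is the largest singleton value."""
    guesses = []
    v = m
    while v <= k * m + 1e-9:
        guesses.append(v)
        v *= 1.0 + eps
    sets = {v: [] for v in guesses}
    vals = {v: 0.0 for v in guesses}
    for e in stream:                   # single pass over the data
        for v in guesses:
            S = sets[v]
            if len(S) < k:
                gain = f(S + [e]) - vals[v]
                if gain >= (v / 2.0 - vals[v]) / (k - len(S)):
                    S.append(e)
                    vals[v] += gain
    best = max(guesses, key=lambda v: vals[v])
    return sets[best], vals[best]

# toy coverage instance: each element covers some ground-set points
points = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6}, 'd': {1, 6}, 'e': {7}}
def cover(S):
    covered = set()
    for x in S:
        covered |= points[x]
    return float(len(covered))

m = max(cover([x]) for x in points)
S, val = sieve_streaming(list(points), cover, k=2, m=m)
```

The point of the grid of guesses is that one of them lands within a (1+ε) factor of OPT, and for that guess the threshold rule provably collects at least (1/2 – ε) of the optimum.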

April 15, 2016, 16:15h INJ 114

Christos Kalaitzis, “On the hardness of finding large matchings with distributed algorithms”

Abstract: In this talk, I will discuss the hardness of multiple agents jointly computing a large matching using a limited amount of information. Specifically, consider the setting where many agents (which correspond to vertices of a graph) are each given part of the input of the problem we want to solve (in this case, part of the edge set of the input graph), and they want to find a large matching on this graph. We will present a result by Alon, Nisan, Raz and Weinstein, which shows that if the agents execute an $r$-round protocol, then they must exchange at least $n^{f(r)}$ bits of information each in order to compute a matching whose size is at least an $n^{-f(r)}$-fraction of the optimal matching, for some function f which is inverse exponential in r.

April 8, 2016, 16:15h INJ 114

Milos Nikolic, “New Developments in the Theory of Join Algorithms”

Abstract: Evaluating the relational join is one of the central and most well-studied algorithmic problems in database systems. A staggering number of variants have been considered, including block-nested loop join, hash join, Grace, and sort-merge. Commercial database engines use finely tuned join heuristics that take into account a wide variety of factors including the selectivity of various predicates, memory, IO, etc. In spite of this extensive study of join queries, the textbook description of join processing is suboptimal. In this talk, we discuss recent results on join algorithms that have provable worst-case optimal runtime guarantees. We provide a simpler and unified description of these algorithms that we hope is useful for theory-minded readers, algorithm designers, and systems implementors.

March 18, 2016, 16:15h INJ 114

Ruediger Urbanke, “The Area Theorem and Reed-Muller Codes”

Abstract: The area theorem can be thought of as a conservation law for error-correcting codes. It is a mathematical formulation of the fact that there are no “good” or “bad” codes, only codes with different characteristics. I will start from scratch and first show the very simple derivation of the area theorem and then discuss its consequences.

I will discuss what it tells us about capacity-achieving codes. In particular I will review a recent result by Kudekar, Kumar, Mondelli, Pfister, Sasoglu and Urbanke that shows that Reed-Muller codes achieve capacity on the binary erasure channel where the area theorem plays a crucial role.

March 11, 2016, 16:15h INJ 114

     Aditya Ayyar, “Approximating matchings in dynamic streams”

Abstract: The maximum matching problem is a core algorithmic problem in computer science, and has a variety of applications in Internet advertising, economics, etc. Besides classical polynomial time solutions, which assume that the entire graph can be loaded into memory, efficient solutions have recently been developed in the streaming model of computation, where the graph is presented to the algorithm as a stream of edges and the space available is substantially smaller than the size of the input graph. Not nearly as much is known about the dynamic streaming setting, where the algorithm is restricted in space usage and at the same time the graph stream can contain both insertions and deletions.

In this talk, I will present a single pass dynamic streaming algorithm that for any \eps>0 recovers an n^\eps-approximate maximum matching from a linear sketch of the input graph into dimension \tilde O(n^{2−3\eps}).  I will also show that this bound is essentially tight. Namely, any linear sketch for approximating the maximum matching to within a factor of n^\eps must use at least n^{2-3\eps-o(1)} bits.

This talk is based on a paper by Assadi, Khanna, Li and Yaroslavtsev (2015).

March 4, 2016, 16:15h INJ 114

   Michael Kapralov, “Approximately counting triangles in sublinear time”

Abstract: Subgraph counting, the problem of counting occurrences of query graphs in a large data graph, is fundamental to several domains such as genomics and social network analysis, where even special cases such as triangle counting are of interest. In many applications exact counting is not necessary, and approximation can be tolerated. In this talk I will present an algorithm for approximately counting the number of triangles in an input graph in time *sublinear* in the size of the input. Specifically, I will show that for any precision parameter \eps>0, a (1+\eps)-approximation to the number of triangles in a graph G with n vertices, m edges and t triangles can be obtained in time O^*(n/t^{1/3}+m^{3/2}/t), where O^* hides polylogarithmic dependence on n and polynomial dependence on 1/\eps. Note that when the number of triangles t satisfies t=\Omega(m^{1/2+\delta}) for a constant \delta>0, the algorithm does not even read the entire graph. The sample complexity and runtime of this algorithm are optimal up to polylogarithmic factors for any constant \eps>0.

This talk is based on results of Eden, Levi, Ron and Seshadhri (FOCS’15).
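For contrast with the sublinear algorithm, here is a hedged Python sketch of the standard *exact* counter running in O(m^{3/2}) time (orient every edge toward its higher-degree endpoint, then test pairs of out-neighbors). This is a classical baseline, not the algorithm of the talk; note how the m^{3/2}/t term in the bound above relates to it:

```python
from itertools import combinations

def count_triangles(edges):
    """Exact triangle count in O(m^{3/2}) time: orient each edge toward
    its higher-degree endpoint (ties broken by label), then for every
    vertex check which pairs of its out-neighbors are adjacent."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    deg = {u: len(nb) for u, nb in adj.items()}
    rank = lambda u: (deg[u], u)          # total order on vertices
    out = {u: {v for v in nb if rank(v) > rank(u)} for u, nb in adj.items()}
    t = 0
    for u in out:
        for v, w in combinations(out[u], 2):
            if w in out.get(v, set()) or v in out.get(w, set()):
                t += 1                    # each triangle counted once,
    return t                              # at its lowest-rank vertex

# The complete graph K4 has exactly 4 triangles.
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(count_triangles(k4))  # 4
```

The orientation guarantees every vertex has out-degree O(sqrt(m)), which is where the m^{3/2} bound comes from.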

Apr 16, 2015, 16:15h INF 211

Tsz Chiu Kwok, “Improved Cheeger’s Inequality and Analysis of Local Graph Partitioning using Vertex Expansion and Expansion Profile”

Abstract: We prove two generalizations of Cheeger’s inequality. The first relates the second eigenvalue to the edge expansion and the vertex expansion of the graph G: λ2 = Ω(ϕ^V(G)·ϕ(G)), where ϕ^V(G) denotes the robust vertex expansion of G and ϕ(G) denotes the edge expansion of G. The second relates the second eigenvalue to the edge expansion and the expansion profile of G: for all k ≥ 2, λ2 = Ω(ϕ_k(G)·ϕ(G)/k), where ϕ_k(G) denotes the k-way expansion of G. These show that the spectral partitioning algorithm has better performance guarantees when ϕ^V(G) is large (e.g. planted random instances) or ϕ_k(G) is large (instances with few disjoint non-expanding sets). Both bounds are tight up to a constant factor.
Our approach is based on a method to analyze solutions of Laplacian systems, and this allows us to extend the results to local graph partitioning algorithms. In particular, we show that our approach can be used to analyze personalized PageRank vectors, and to give a local graph partitioning algorithm for the small-set expansion problem with performance guarantees similar to the generalizations of Cheeger’s inequality. We also present a spectral approach to prove similar results for the truncated random walk algorithm. These show that local graph partitioning algorithms almost match the performance of the spectral partitioning algorithm, with the additional advantages that they apply to the small-set expansion problem and their running time can be sublinear. Our techniques provide common approaches to analyze the spectral partitioning algorithm and local graph partitioning algorithms.

This is a joint work with Lap Chi Lau and Yin Tat Lee.
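For orientation, both results above refine the classical Cheeger inequality, which bounds the edge expansion by the second eigenvalue of the normalized Laplacian alone. A sketch in LaTeX (the classical inequality is standard; the two refinements are restated from the abstract):

```latex
% Classical Cheeger inequality for the normalized Laplacian eigenvalue \lambda_2:
\frac{\lambda_2}{2} \;\le\; \phi(G) \;\le\; \sqrt{2\,\lambda_2}.
% The two generalizations of the talk strengthen the lower bound on \lambda_2:
\lambda_2 \;=\; \Omega\bigl(\phi^V(G)\,\phi(G)\bigr),
\qquad
\lambda_2 \;=\; \Omega\bigl(\phi_k(G)\,\phi(G)/k\bigr) \quad \text{for all } k \ge 2.
```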

Mar 19, 2015, 16:00h INF 211

Ola Svensson, “Approximating ATSP by Relaxing Connectivity”

Abstract: The standard LP relaxation of the asymmetric traveling salesman problem has been conjectured to have a constant integrality gap in the metric case. We prove this conjecture when restricted to shortest path metrics of node-weighted digraphs. Our arguments are constructive and give a constant factor approximation algorithm for these metrics. We remark that the considered case is more general than the directed analog of the special case of the symmetric traveling salesman problem for which there were recent improvements on Christofides’ algorithm.

The main idea of our approach is to first consider an easier problem obtained by significantly relaxing the general connectivity requirements into local connectivity conditions. For this relaxed problem, it is quite easy to give an algorithm with a guarantee of 3 on node-weighted shortest path metrics. More surprisingly, we then show that any algorithm (irrespective of the metric) for the relaxed problem can be turned into an algorithm for the asymmetric traveling salesman problem by only losing a small constant factor in the performance guarantee. This leaves open the intriguing task of designing a “good” algorithm for the relaxed problem on general metrics.


Feb 12, 2015, 15:15h INJ 114

Shayan Oveis Gharan, University of Washington, “Effective Resistance Reducing Flows, Spectrally Thin Trees and Asymmetric TSP”

Abstract: We show that the integrality gap of the natural LP relaxation of the Asymmetric Traveling Salesman Problem (ATSP) is at most polyloglog(n). In other words, there is a polynomial time algorithm that approximates the value of the optimum tour within a factor of polyloglog(n).
This is the first significant improvement over the classical Õ(log n) approximation factor for ATSP. Our proof builds on recent developments in operator theory, in particular the recent resolution of the Kadison-Singer conjecture by Marcus, Spielman and Srivastava. In this talk, I will highlight the main ideas of our proof.
This is a joint work with Nima Anari.

(A follow-up of the Seminar talk on Tuesday, February 10, 2015.)

Dec 17, 2014, 15:00h (INF 211)

   David Steurer (Cornell University), “Lower bounds on the size of semidefinite programming relaxations”


Abstract: We introduce a method for proving lower bounds on the efficacy of semidefinite programming (SDP) relaxations for combinatorial problems. In particular, we show that the cut, TSP, and stable set polytopes on n-vertex graphs are not the linear image of the feasible region of any SDP (i.e., any spectrahedron) of dimension less than $2^{n^\delta}$, for some constant $\delta>0$. This result yields the first super-polynomial lower bounds on the semidefinite extension complexity of any explicit family of polytopes.

Our results follow from a general technique for proving lower bounds on the positive semidefinite rank of a matrix. To this end, we establish a close connection between arbitrary SDPs and those arising from the sum-of-squares SDP hierarchy. For approximating maximum constraint satisfaction problems, we prove that polynomial-size SDPs are equivalent in power to those arising from degree-O(1) sum-of-squares relaxations. This result implies, for instance, that no family of polynomial-size SDP relaxations can achieve better than a 7/8-approximation for MAX 3-SAT.


Joint work with James R. Lee, Prasad Raghavendra.


December 12, 2014, 15:40h INF 211

Moran Feldman, “Aspects of Submodular Maximization Subject to a Matroid Constraint”

Abstract: Submodular functions form a large natural class of set functions with applications in many fields including social networks, machine learning and game theory. Optimization of submodular functions subject to various constraints attracted much attention in recent years, both from theoretical and practical points of view. This talk considers the problem of maximizing a submodular function subject to a matroid constraint, which is a central problem demonstrating many of the modern approaches and techniques used in the field of submodular maximization. Many aspects of this problem have been studied, including its polynomial time approximability, fast (almost linear time) algorithms and online models. This talk surveys some of these aspects and explores a few of the main results obtained recently.
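As a reference point for the survey above, here is a hedged Python sketch of the classic greedy algorithm for maximizing a monotone submodular function subject to a matroid constraint. This is the textbook 1/2-approximation baseline (1-1/e for a cardinality constraint), not the modern techniques the talk covers; the coverage instance and oracle names are illustrative:

```python
def greedy_submodular(ground, f, independent):
    """Greedy for monotone submodular f under a matroid constraint:
    repeatedly add the feasible element with the largest marginal gain.
    `independent` is a membership oracle for the matroid's independent
    sets; f is accessed only through value queries."""
    S = set()
    while True:
        best, gain = None, 0.0
        for e in ground - S:
            if independent(S | {e}):
                g = f(S | {e}) - f(S)     # marginal gain of adding e
                if g > gain:
                    best, gain = e, g
        if best is None:                  # no feasible element helps
            return S
        S.add(best)

# Coverage functions are monotone submodular; constrain to 2 sets
# (a uniform matroid of rank 2).
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1}}
f = lambda S: len(set().union(*(sets[e] for e in S)) if S else set())
chosen = greedy_submodular(set(sets), f, lambda S: len(S) <= 2)
print(sorted(chosen))  # ['a', 'c'] covers all 6 elements
```

The continuous greedy and local search methods surveyed in the talk improve on this baseline's 1/2 guarantee for general matroids.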

December 12, 2014, 15:00h INF 211

Jakub Tarnawski, “Fast Generation of Random Spanning Trees and the Effective Resistance Metric”

Abstract: We present a new algorithm for generating a uniformly random spanning tree in an undirected graph. Our algorithm samples such a tree in expected $O(m^{\frac{4}{3}+o(1)})$ time. This improves over the best previously known bound of $\min(\tilde{O}(m\sqrt{n}), O(n^{\omega}))$ — that follows from the work of Kelner and Mądry [FOCS’09] and of Colbourn et al. [J. Algorithms’96] — whenever the input graph is sufficiently sparse.

At a high level, our result stems from carefully exploiting the interplay of random spanning trees, random walks, and the notion of effective resistance, as well as from devising a way to algorithmically relate these concepts to the combinatorial structure of the graph. This involves, in particular, establishing a new connection between the effective resistance metric and the cut structure of the underlying graph.

This is a joint work with Aleksander Mądry and Damian Straszak.
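For context, a hedged Python sketch of the classical sampler the result improves on: Wilson's algorithm, which builds the tree from loop-erased random walks. This is the standard reference algorithm, not the O(m^{4/3+o(1)}) algorithm of the talk; the 4-cycle instance is illustrative:

```python
import random

def wilson_spanning_tree(adj, seed=0):
    """Sample a uniformly random spanning tree of a connected undirected
    graph via Wilson's algorithm: from each vertex not yet in the tree,
    run a random walk until it hits the tree, keeping only the
    loop-erased path (recorded as the last exit taken from each vertex)."""
    rng = random.Random(seed)
    nodes = list(adj)
    root = nodes[0]
    in_tree = {root}
    parent = {}
    for u in nodes[1:]:
        if u in in_tree:
            continue
        v, nxt = u, {}
        while v not in in_tree:           # walk until we hit the tree;
            nxt[v] = rng.choice(adj[v])   # overwriting nxt erases loops
            v = nxt[v]
        v = u
        while v not in in_tree:           # retrace the loop-erased path
            parent[v] = nxt[v]
            in_tree.add(v)
            v = nxt[v]
    return [(v, p) for v, p in parent.items()]

cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = wilson_spanning_tree(cycle)
print(len(tree))  # 3: a spanning tree of a 4-vertex graph has 3 edges
```

Wilson's algorithm runs in expected time proportional to the mean commute time of the walk, which is exactly the kind of effective-resistance quantity the talk's faster algorithm controls directly.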

December 5, 2014, 15:45h INF 211

Chidambaram Annamalai, “Combinatorial Algorithm for Restricted Max-Min Fair Allocation”

Abstract: We study the basic allocation problem of assigning resources to players so as to maximize fairness. This is one of the few natural problems that enjoys the intriguing status of having a better estimation algorithm than approximation algorithm. Indeed, a certain configuration-LP can be used to estimate the value of the optimal allocation to within a factor of 4+ϵ. In contrast, however, the best known approximation algorithm for the problem has an unspecified large constant guarantee.

In this paper we significantly narrow this gap by giving a 13-approximation algorithm for the problem. Our approach develops a local search technique introduced by Haxell [Hax95] for hypergraph matchings, and later used in this context by Asadpour, Feige, and Saberi [AFS12]. For our local search procedure to terminate in polynomial time, we introduce several new ideas such as lazy updates and greedy players. Besides the improved approximation guarantee, the highlight of our approach is that it is purely combinatorial and uses the configuration-LP only in the analysis.

This is joint work with Christos Kalaitzis and Ola Svensson.

December 5, 2014, 15:15h INF 211

Ashkan Norouzi, “Dynamic Facility Location Problem via Exponential Clocks”

Abstract: The dynamic facility location problem is a generalization of the classic facility location problem, proposed by Eisenstat, Mathieu, and Schabanel to model the dynamics of evolving social/infrastructure networks. The generalization lies in the fact that the distance metric between clients and facilities changes over time. This leads to a trade-off between optimizing the classic objective function and the “stability” of the solution: a switching cost is charged every time a client changes the facility to which it is connected. While the standard linear program (LP) relaxation for the classic problem naturally extends to this problem, traditional LP-rounding techniques do not, as they are often sensitive to small changes in the metric, resulting in frequent switches.

We present a new LP-rounding algorithm for facility location problems, which yields the first constant approximation algorithm for the dynamic facility location problem. Our algorithm installs competing exponential clocks on the clients and facilities, and connects every client by the path that repeatedly follows the smallest clock in the neighborhood. The use of exponential clocks gives rise to several properties that distinguish our approach from previous LP-roundings for facility location problems. In particular, we use no clustering, and we allow clients to connect through paths of arbitrary lengths. In fact, the clustering-free nature of our algorithm is crucial for applying our LP-rounding approach to the dynamic problem.

September 26, 2014, 15:15h INJ 114

Hang Nguyen Viet, “Matroidal Degree-Bounded Minimum Spanning Trees”

Abstract: We consider the minimum spanning tree (MST) problem under the restriction that for every vertex v, the edges of the tree that are adjacent to v satisfy a given family of constraints. A famous example thereof is the classical degree-constrained MST problem, where for every vertex v, a simple upper bound on the degree is imposed. Iterative rounding/relaxation algorithms became the tool of choice for degree-bounded network design problems. A cornerstone for this development was the work of Singh and Lau, who showed for the degree-bounded MST problem how to find a spanning tree violating each degree bound by at most one unit and with cost at most the cost of an optimal solution that respects the degree bounds.

However, current iterative rounding approaches face several limits when dealing with more general degree constraints. In particular, when several constraints are imposed on the edges adjacent to a vertex v, as for example when a partition of the edges adjacent to v is given and only a fixed number of elements can be chosen out of each set of the partition, current approaches might violate each of the constraints by a constant, instead of violating all constraints together by at most a constant number of edges. Furthermore, it is also not clear how previous iterative rounding approaches can be used for degree constraints where some edges are in a super-constant number of constraints.


We extend iterative rounding/relaxation approaches both on a conceptual level as well as aspects involving their analysis to address these limitations. This leads to an efficient algorithm for the degree-constrained MST problem where for every vertex v, the edges adjacent to v have to be independent in a given matroid. The algorithm returns a spanning tree T of cost at most OPT, such that for every vertex v, it suffices to remove at most 8 edges from T to satisfy the matroidal degree constraint at v.

Based on the paper “Matroidal Degree-Bounded Minimum Spanning Trees” by Rico Zenklusen.

September 19, 2014, 15:15h INJ 114

Ashkan Norouzi, “LP-Based Algorithm for Capacitated Facility Location”

Abstract: I will present a linear programming relaxation with constant integrality gap for the capacitated facility location problem.

Based on the paper “LP-Based Algorithms for Capacitated Facility Location” by Hyung-Chan An, Mohit Singh, and Ola Svensson.

September 2, 2014, 15:30h INF 211

Metin Balaban, “Bidirected Cut Relaxation for the Metric Steiner Tree Problem”

Abstract: Proving that the integrality gap of the bidirected cut relaxation (BCR) for the Steiner tree problem is less than 2 is a long-standing open problem. This talk discusses methods for determining the integrality gap of BCR for certain classes of graphs, and its relation to other relaxations of the problem.

September 2, 2014, 15:00h INF 211

Amirbehshad Shahrasbi, “LP-based Bounds for k-Capacitated Facility Location Problem”

Abstract: In this talk I am going to define the Facility Location Problem (FLP) and specifically talk about some results regarding the special variant of the FLP on which I was concentrating during my summer internship.

In fact, the well-known LP relaxation of the Capacitated Facility Location Problem — the variant of the FLP in which each facility can be opened at most once — has an unbounded integrality gap. We tried to modify the problem slightly by adding the option of opening each facility twice (or even a constant number of times), and we ask whether there exists any LP-based approximation algorithm for these variants. I will talk about some results relevant to our approach that may contribute to our problem.

August 21, 2014, 15:15h MA B1 524

Chidambaram Annamalai, “The Lovász Local Lemma as a Random Walk”

Abstract: I will present the following very nice result of Achlioptas and Iliopoulos, which recasts the LLL as a random walk. Their abstract follows.

We give an algorithmic local lemma by establishing a sufficient condition for the uniform random walk on a directed graph to reach a sink quickly. Our work is inspired by Moser’s entropic method proof of the Lovász Local Lemma for satisfiability and completely bypasses the Probabilistic Method formulation of the LLL. In particular our method works when the set of underlying objects is entirely unstructured. Similarly to Moser’s argument, the key point is that algorithmic progress is measured in terms of entropy rather than energy (number of violated constraints) so that termination can be established even under the proliferation of states in which every step of the algorithm (random walk) increases the total number of violated constraints.

Paper link:
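The entropic random-walk view traces back to Moser's resampling algorithm for satisfiability, which can be sketched in a few lines of Python. This is the classical Moser/Moser-Tardos procedure, shown as background rather than the algorithm of the paper above; the 3-variable formula and the step cap are illustrative assumptions:

```python
import random

def moser_resample(n_vars, clauses, seed=0, max_steps=10**5):
    """Moser-style resampling for SAT: start from a random assignment
    and, while some clause is violated, pick a violated clause and
    resample all its variables uniformly at random. Under LLL-type
    sparsity conditions this random walk terminates quickly."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        # A clause is a list of literals: +i means x_i, -i means not x_i
        # (variables are 1-indexed so the sign carries meaning).
        return not any(assign[abs(l) - 1] == (l > 0) for l in clause)

    for steps in range(max_steps):
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign, steps          # all clauses satisfied
        for l in rng.choice(bad):         # resample one violated clause
            assign[abs(l) - 1] = rng.random() < 0.5
    raise RuntimeError("resampling did not converge")

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
assign, steps = moser_resample(3, clauses)
print(all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses))
```

Note that progress here is not measured by the number of violated clauses, which can temporarily grow after a resampling step; that is exactly the entropy-versus-energy point made in the abstract.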

August 8, 2014, 15:15h INJ 114

Slobodan Mitrović, “2-Edge Connectivity in Directed Graphs”

Abstract: Given a directed graph G = (V, A), we say that two of its vertices v and w are 2-edge-connected if there are two edge-disjoint paths from v to w and two edge-disjoint paths from w to v. This relation partitions V into blocks such that all vertices in the same block are 2-edge-connected. Unlike in the undirected case, these blocks do not correspond to the 2-edge-connected components of the graph.

We will show how to compute the 2-edge-connected-blocks relation of G in time O(|V| + |A|).

I will present the result from the paper “2-Edge Connectivity in Directed Graphs” by Loukas Georgiadis, Giuseppe F. Italiano, Luigi Laura, and Nikos Parotsidis.
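To make the definition concrete, here is a hedged Python sketch that tests a single pair for 2-edge-connectivity by checking for two edge-disjoint paths in each direction via unit-capacity flow augmentation. This naive per-pair check is far slower than the O(|V| + |A|) whole-graph algorithm of the paper; it only illustrates what the relation means:

```python
from collections import deque

def two_edge_disjoint(adjlist, s, t):
    """Return True iff the directed graph has two edge-disjoint s->t
    paths, via two rounds of BFS augmentation in a unit-capacity
    flow network (a special case of Ford-Fulkerson)."""
    cap = {}                               # residual capacities
    for u, nbrs in adjlist.items():
        for v in nbrs:
            cap[(u, v)] = cap.get((u, v), 0) + 1
            cap.setdefault((v, u), 0)
    flow = 0
    for _ in range(2):                     # we only need flow value 2
        parent, q = {s: None}, deque([s])
        while q and t not in parent:       # BFS in the residual graph
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if t not in parent:
            break
        v = t
        while parent[v] is not None:       # augment along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
    return flow >= 2

def two_edge_connected(adjlist, v, w):
    return two_edge_disjoint(adjlist, v, w) and two_edge_disjoint(adjlist, w, v)

# A directed 3-cycle plus its reverse: every pair is 2-edge-connected.
g = {0: [1, 2], 1: [2, 0], 2: [0, 1]}
print(two_edge_connected(g, 0, 1))  # True
```

By contrast, a single directed 3-cycle has only one path between any pair, so no pair is 2-edge-connected; computing the whole block partition in linear time is the paper's contribution.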

August 4, 2014, 15:15h INJ 114

Lorenzo Orecchia, MIT, “A Faster and Simpler Width-Independent Algorithm for Solving Positive Linear Programs in Parallel”

Abstract: Multiplicative Weight Updates (MWUs) are a powerful tool for solving a number of convex optimization problems. However, even for the simple case of linear programs, they generally do not yield polynomial-time algorithms, as their convergence depends polynomially on the “width” of the problem, i.e. the magnitude of the largest entry in the constraint matrix.

Surprisingly, this dependence can be eliminated for a variety of packing and covering problems. This fact has been exploited to yield polynomial-time algorithms for many combinatorial problems, including directed and undirected maximum flow. Such algorithms mostly rely on combinatorial insights that share a primal-dual flavor. In this talk, I will use optimization tools to describe a simple width-independent solver for packing LPs. Besides improving on the best known parallel running time, our solver is particularly simple and highlights how gradient descent and MWUs can be combined to obtain faster primal-dual algorithms.
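To illustrate the width dependence being discussed, here is a hedged Python sketch of a basic *width-dependent* MWU scheme for packing feasibility over the simplex (does some x in the simplex satisfy Ax <= b?). This is the textbook scheme whose round count grows with the width, not the width-independent solver of the talk; the instance and parameters are illustrative:

```python
def mwu_packing(A, b, rounds=200, eta=0.1):
    """Basic MWU for packing feasibility: keep weights on constraints,
    let an oracle pick the simplex vertex minimizing the weighted
    violation, and multiplicatively boost the weight of violated rows.
    The number of rounds needed scales with the width
    rho = max_{i,j} |A[i][j] - b[i]| (eta must be small relative to it),
    which is exactly the dependence width-independent solvers remove."""
    m, n = len(A), len(A[0])
    w = [1.0] * m
    counts = [0] * n
    for _ in range(rounds):
        # Oracle: vertex e_j minimizing sum_i w_i * (A e_j - b)_i.
        score = lambda j: sum(w[i] * (A[i][j] - b[i]) for i in range(m))
        j = min(range(n), key=score)
        if score(j) > 0:
            return None                    # no point of the simplex works
        counts[j] += 1
        for i in range(m):                 # boost violated constraints
            w[i] *= 1.0 + eta * (A[i][j] - b[i])
    return [c / rounds for c in counts]    # average of the oracle's picks

A = [[2.0, 0.0], [0.0, 2.0]]
b = [1.0, 1.0]
x = mwu_packing(A, b)
print(x)  # close to [0.5, 0.5]; max_i (Ax - b)_i is near 0
```

The averaged iterate approximately satisfies all constraints after O(rho^2 log m / eps^2) rounds in the standard analysis; the talk's point is that for packing and covering LPs this rho^2 factor can be avoided entirely.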

This talk is based on the following paper:

July 17, 2014, 15:15h INJ 114

Christos Kalaitzis, “Some Results on the Hardness of Submodular Optimization”

Abstract: Concerning submodular function optimization, there are two main approaches for proving hardness of approximation: proving that in the value oracle model we need a huge number of queries to beat a certain approximation ratio, or proving that, when we are given a succinct representation of a submodular function as input, the existence of a polynomial-time approximation algorithm with a certain guarantee would disprove some major complexity-theoretic assumption. In this talk, I will describe a (1/2+\eps)-hardness result (under the assumption NP ≠ RP) for symmetric nonnegative submodular maximization, and I will try to sketch how the involved techniques can be extended to more general families of submodular functions.

Based on the paper “From Query Complexity to Computational Complexity” by S. Dobzinski and J. Vondrák.

July 15, 2014, 15:00h INF 211

Abbas Bazzi, “Hardness of Approximation and Iterative Rounding Algorithms”

Abstract: For many optimization problems, there is no known polynomial-time algorithm that computes the exact optimal value. One of the most common approaches in the literature used to tackle these problems is to design a polynomial-time algorithm that computes a solution that is “good enough”, i.e. within an additive/multiplicative factor $\alpha$ of the optimal solution. However, a natural question that arises in this context is how close we can get to the optimal value assuming P $\neq$ NP. Such results are known as inapproximability results, and they are well studied in the literature.
In this talk, we investigate the tools used to prove inapproximability results for two well-known NP-hard problems, namely the Hypergraph Vertex Cover problem and a variant of the Constraint Satisfaction Problem known as MAX-E3LIN2. We also present an additive-one approximation algorithm for the Minimum Bounded-Degree Spanning Tree (MBDST) problem.

This presentation is a dry-run of my candidacy exam.

July 9, 2014, 15:00h INF 211

Ashkan Norouzi, “Hardness of Approximation and Randomized Rounding”

Abstract: Approximation algorithms are one of the best ways to deal with NP-hard problems. Over the past decades, many different techniques and tools have been used to design approximation algorithms with good guarantees. Dependent randomized rounding is one of these techniques, motivated by several applications such as constrained submodular function maximization.

We present the randomized swap rounding algorithm and show how useful it is in solving problems. Also, by using negatively correlated rounding, we describe an approximation algorithm for the Uncapacitated Facility Location problem, one of the major problems in combinatorial optimization. The algorithm works by rounding an optimal fractional solution to a linear programming relaxation. It also uses linear programming duality and gives a significant improvement on the previously known LP-based approximation algorithms.

At the end, we discuss techniques used for proving hardness of approximation. We present hardness results for the MAX-3SAT, vertex cover, and clique problems. To do so, we introduce the PCP Theorem and gap-introducing reductions as the main techniques behind these results.

This presentation is a dry-run of my candidacy exam.


  For meetings before June 30, 2014 please see here.