
Seminars 2014/15

Term 1
Tue 7 Oct 2014
15:00 in E245
Ioannis Ivrissimtzis (Durham)
Information hiding in 3D models

Information hiding refers to the embedding of messages in digital covers such as images, video, or 3D models. In the first part of the talk, we discuss two information hiding algorithms for 3D models. The first is based on Least Significant Bit substitution and offers control over the distortion of the normals of the model. The second algorithm encodes the message on the Laplacian coordinates of triangle meshes and is resistant to mesh editing operations. In the second part of the talk, we discuss two steganalytic algorithms for detecting the existence of embedded messages in 3D models. The first is a universal steganalytic algorithm based on machine learning techniques developed for image steganalysis. The second algorithm detects in some natural statistics of the model a form of bimodality introduced by certain embedding algorithms. Joint work with Ying Yang, Norbert Peyerimhoff, Ruggero Pintus and Holly Rushmeier.
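As a rough sketch of the generic LSB idea (not the normal-distortion-controlled algorithm discussed in the talk, whose details are not given here), one can quantize each vertex coordinate to an integer and overwrite its lowest bit with a message bit. The function names and the scale parameter are illustrative assumptions:

```python
def embed_lsb(vertices, bits, scale=2**16):
    """Embed one message bit per coordinate by LSB substitution
    on quantized vertex coordinates."""
    out = []
    it = iter(bits)
    for x, y, z in vertices:
        coords = []
        for c in (x, y, z):
            q = int(round(c * scale))        # quantize coordinate
            b = next(it, None)
            if b is not None:
                q = (q & ~1) | b             # overwrite least significant bit
            coords.append(q / scale)         # dequantize
        out.append(tuple(coords))
    return out

def extract_lsb(vertices, nbits, scale=2**16):
    """Read the message back out of the low-order bits, in the same order."""
    bits = []
    for v in vertices:
        for c in v:
            if len(bits) == nbits:
                return bits
            bits.append(int(round(c * scale)) & 1)
    return bits
```

Note the trade-off the talk addresses: plain LSB substitution like this perturbs every coordinate, and hence the face normals, with no control over the distortion.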

Tue 14 Oct 2014
15:00 in E245
Marcin Pilipczuk (Warwick)
Subexponential Parameterized Complexity of Completion Problems

Let P be a fixed hereditary graph class. In the P-Completion problem, given a graph G and an integer k, we ask whether it is possible to add at most k edges to G to obtain a member of P. In recent years, completion problems have received significant attention from the perspective of parameterized complexity, with the standard parameterization by k. In my talk I will first survey the history of the study of the parameterized complexity of completion problems, including the breakthrough paper of Villanger et al. that settles the fixed-parameter tractability of Interval Completion, as well as recent advances on polynomial kernelization.

Then, we will move to the main topic of the talk, namely subexponential parameterized algorithms. The first fixed-parameter algorithms for completion problems focused mostly on the ‘forbidden induced subgraphs’ definition of the graph class P in question. In 2012 Fomin and Villanger came up with a novel idea to instead focus on some structural definition of the class P, trying to build the modified output graph by dynamic programming. Slightly simplifying, we may say that the main technical contribution of Fomin and Villanger is a bound of at most exp(sqrt(k)log(k)) reasonable ‘partial chordal graphs’ for an input instance (G, k) of Chordal Completion. Consequently, Chordal Completion can be solved in exp(sqrt(k)log(k))+poly(n) time. Following the approach of Fomin and Villanger, in the past two years subexponential parameterized algorithms were shown for the classes of chain, split, threshold, trivially perfect and pseudosplit graphs and, very recently, proper interval and interval graphs. Moreover, a few lower bounds for related graph classes were found.

In my talk I will sketch the approach of Fomin and Villanger and survey the main ideas needed in the algorithms.

Tue 21 Oct 2014
15:00 in E245
Matthew Johnson (Durham)
Finding Paths between Graph Colourings

The reconfiguration graph of the k-colourings of a graph G contains as its vertex set the proper vertex k-colourings of G, and two colourings are joined by an edge in the reconfiguration graph if they differ in colour on just one vertex of G. We discuss research into the structure of reconfiguration graphs and the computational complexity of recognizing whether or not a pair of colourings belong to the same component of the reconfiguration graph (that is, the complexity of deciding if it is possible to transform one colouring into another with a sequence of “recolouring” steps that each change the colour on just one vertex without ever breaking the constraint that neighbours are not coloured alike). We will focus particularly on recent work showing that the problem of determining whether there is a path of length l between two given k-colourings is fixed-parameter tractable for all k (when parameterized by l), polynomial-time solvable for k=3, and has no polynomial kernel for all k≥4 (unless the polynomial hierarchy collapses).
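The objects in this talk can be made concrete with a small brute-force sketch (the function names are my own, and this is of course exponential, not the fixed-parameter algorithms of the talk): enumerate the proper k-colourings, join those differing on one vertex, and test reachability.

```python
from itertools import product

def proper_colourings(n, edges, k):
    """All proper k-colourings of a graph on vertices 0..n-1."""
    return [c for c in product(range(k), repeat=n)
            if all(c[u] != c[v] for u, v in edges)]

def reconfiguration_graph(colourings):
    """Join two colourings iff they differ on exactly one vertex."""
    adj = {c: [] for c in colourings}
    for i, a in enumerate(colourings):
        for b in colourings[i + 1:]:
            if sum(x != y for x, y in zip(a, b)) == 1:
                adj[a].append(b)
                adj[b].append(a)
    return adj

def same_component(adj, s, t):
    """DFS: can s be transformed into t by single-vertex recolourings?"""
    seen, stack = {s}, [s]
    while stack:
        c = stack.pop()
        if c == t:
            return True
        for d in adj[c]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return False
```

For example, in the triangle with k=3 every proper colouring is frozen (no vertex can be recoloured), so all six colourings lie in distinct components.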

(This is joint work with Dieter Kratsch, Stefan Kratsch, Viresh Patel and Daniël Paulusma.)

Tue 28 Oct 2014
15:00 in E245
Rahul Savani (Liverpool)
The Complexity of the Simplex Method

The simplex method is a well-studied and widely-used pivoting method for solving linear programs. When Dantzig originally formulated the simplex method, he gave a natural pivot rule that pivots into the basis a variable with the most violated reduced cost. In their seminal work, Klee and Minty showed that this pivot rule takes exponential time in the worst case. We prove two main results on the simplex method. Firstly, we show that it is PSPACE-complete to find the solution that is computed by the simplex method using Dantzig’s pivot rule. Secondly, we prove that deciding whether Dantzig’s rule ever chooses a specific variable to enter the basis is PSPACE-complete. We use the known connection between Markov decision processes (MDPs) and linear programming, and an equivalence between Dantzig’s pivot rule and a natural variant of policy iteration for average-reward MDPs. We construct MDPs and show PSPACE-completeness results for single-switch policy iteration, which in turn imply our main results for the simplex method.
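For readers who have not seen it in a while, here is a minimal tableau simplex with Dantzig's pivot rule (enter the variable with the most negative reduced cost) for maximisation in the standard form Ax <= b, x >= 0, b >= 0. This is a bare sketch with no anti-cycling safeguards, and it is not the MDP construction of the talk:

```python
def simplex_dantzig(A, b, c):
    """Maximise c^T x subject to Ax <= b, x >= 0 (all b[i] >= 0),
    entering variable chosen by Dantzig's rule."""
    m, n = len(A), len(A[0])
    # tableau: original columns, slack columns, RHS; last row is the objective
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))            # slacks start in the basis
    while True:
        obj = T[-1]
        piv_col = min(range(n + m), key=lambda j: obj[j])   # Dantzig's rule
        if obj[piv_col] >= -1e-12:
            break                             # no negative reduced cost: optimal
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-12]
        if not ratios:
            raise ValueError("unbounded LP")
        _, piv_row = min(ratios)              # minimum-ratio test
        basis[piv_row] = piv_col
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return T[-1][-1], x
```

The PSPACE-completeness results concern predicting the behaviour of exactly this kind of deterministic pivot sequence, not the optimal value itself.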

Joint work with John Fearnley.

Tue 4 Nov 2014
15:00 in E245
Kitty Meeks (Glasgow)
Parameterised Subgraph Counting Problems

Parameterised Counting Complexity is a relatively new field (first formalised in 2004 by Flum and Grohe), which combines ideas from the much better established areas of parameterised complexity and counting complexity. Most results in this area so far are concerned with subgraph counting problems: given a graph G on n vertices, we wish to determine the number of k-vertex subgraphs of G that have some particular property. Such problems can clearly be answered by exhaustive search in time O(n^k), so the goal is to determine whether, for certain types of property, the problem is fixed-parameter tractable with respect to the parameter k; for those problems believed to be intractable from this point of view, it is natural to ask whether they can nevertheless be approximated efficiently.
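The exhaustive O(n^k) baseline mentioned above is a few lines of code: test every k-vertex subset against the property (the function names here are illustrative, and this is the trivial algorithm, not the FPT or approximation techniques of the talk).

```python
from itertools import combinations

def count_k_subgraphs(n, edges, k, has_property):
    """Exhaustive O(n^k) counting: check every k-vertex subset of the graph."""
    E = set(map(frozenset, edges))
    count = 0
    for S in combinations(range(n), k):
        induced = [e for e in combinations(S, 2) if frozenset(e) in E]
        if has_property(S, induced):
            count += 1
    return count

# example property: the subset induces a clique
is_clique = lambda S, induced: len(induced) == len(S) * (len(S) - 1) // 2
```

For instance, counting with k = 3 and the clique property counts triangles.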

I will begin by introducing parameterised subgraph counting problems in more detail, and give a survey of the state of the art in this area, before illustrating some of the techniques used by describing some recent joint work with Mark Jerrum (QMUL). Only a very basic knowledge of the relevant complexity theory will be assumed!

Tue 11 Nov 2014
15:00 in E245
Alina Ene (Warwick)
Approximation Algorithms for Multiway Partitioning Problems and a Simplex Coloring Conjecture

We consider several problems where the goal is to partition a ground set into several pieces while minimizing a “cut-type” objective function; examples include Multiway Cut, Node-weighted Multiway Cut, Metric Labeling and Hypergraph Labeling. A natural LP relaxation gives an optimal approximation for these problems, assuming the Unique Games Conjecture (the UGC assumption can be removed for certain submodular generalizations of these problems). However, we do not know how to round this LP in general and the focus has been on understanding this LP for specific problems. In this talk, we describe several rounding strategies and an integrality gap construction that leads to a simplex coloring conjecture reminiscent of Sperner’s Lemma.

This talk is based on joint work with Chandra Chekuri (UIUC), Huy Nguyen (Simons Institute Berkeley), Jan Vondrak (IBM Research Almaden), and Yi Wu (Google).

Tue 18 Nov 2014
15:00 in E245
Hannes Uppman (Linköping, Sweden)
On the Complexity of Some Valued Constraint Satisfaction Problems

Many combinatorial optimization and feasibility problems can be expressed as a valued constraint satisfaction problem (VCSP). The VCSPs include for example every constraint satisfaction problem (e.g. 3-SAT) and every maximum constraint satisfaction problem (e.g. Max-Cut). Recently a complete characterization of all finite-valued VCSPs that can be solved in polynomial time (assuming P != NP) has been obtained [Thapper-Zivny FOCS’12, Kolmogorov ICALP’13, Thapper-Zivny STOC’13]. However, VCSPs that are not finite-valued are less understood. In this talk I will present work on the complexity of a particular type of VCSPs known as extended minimum cost homomorphism problems. These are VCSPs that in some sense may be thought of as “far from finite-valued”.

We will focus especially on a subclass of the extended minimum cost homomorphism problems that for example contains every minimum solution problem. For any problem (on any finite domain) in this class we will see that one can either prove NP-hardness or the existence of quite a lot of “useful structure” in the instances of the problem. We apply this to prove a dichotomy on small domains.

Tue 25 Nov 2014
15:00 in E245
George Mertzios (Durham)
Determining majority in networks with local interactions and very small local memory

We study here the problem of determining the majority type in an arbitrary connected network, each vertex of which has initially two possible types (states). The vertices may have a few additional possible states and can interact in pairs only if they share an edge. Any (population) protocol is required to stabilize in the initial majority, i.e. its output function must interpret the local state of each vertex so that each vertex outputs the initial majority type.

We first provide a protocol with 4 states per vertex that always computes the initial majority value, under any fair scheduler. Under the uniform probabilistic scheduler of pairwise interactions, we prove that our protocol stabilizes in expected polynomial time for any network and is quite fast on the clique. As we prove, this protocol is optimal, in the sense that there does not exist any population protocol that always computes majority with fewer than 4 states per vertex.

However this does not rule out the existence of a protocol with 3 states per vertex that is correct with high probability (whp). To this end, we examine an elegant and very natural majority protocol with 3 states per vertex, introduced in [Angluin et al., Distr. Comp., 2008] where its performance has been analyzed for the clique graph. In particular, it determines the correct initial majority type in the clique very fast and whp under the uniform probabilistic scheduler. We study the performance of this protocol in arbitrary networks. We prove that, when the two initial states are put uniformly at random on the vertices, the protocol of Angluin et al. converges to the initial majority with probability higher than the probability of converging to the initial minority.

In contrast, we present an infinite family of graphs, on which the protocol of Angluin et al. can fail, i.e. it can converge to the initial minority type whp, even when the difference between the initial majority and the initial minority is n-Θ(ln(n)). We also present another infinite family of graphs in which the protocol of Angluin et al. takes an expected exponential time to converge. These two negative results build upon a very positive result concerning the robustness of the protocol of Angluin et al. on the clique, namely that if the initial minority is at most n/7, the protocol fails with exponentially small probability. Surprisingly, the resistance of the clique to failure causes the failure in general graphs. Our techniques use new domination and coupling arguments for suitably defined processes whose dynamics capture the antagonism between the states involved.
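As a rough illustration, the 3-state protocol can be simulated on the clique under the uniform random pairwise scheduler. The transition rules below follow the standard description of the Angluin et al. protocol (two conflicting opinions blank the responder; an opinionated initiator converts a blank responder); the function names and simulation harness are my own:

```python
import random

def step(pop, rng):
    """One interaction: a uniformly random ordered pair (initiator, responder)."""
    i, j = rng.sample(range(len(pop)), 2)
    a, b = pop[i], pop[j]
    if a != 'b' and b != 'b' and a != b:
        pop[j] = 'b'          # conflicting opinions: responder goes blank
    elif a != 'b' and b == 'b':
        pop[j] = a            # opinionated initiator converts a blank responder

def run_majority(n_x, n_y, seed=0, max_steps=10**6):
    """Simulate the 3-state protocol on a clique of n_x + n_y agents;
    return the opinion the population stabilizes to (or None on timeout)."""
    rng = random.Random(seed)
    pop = ['x'] * n_x + ['y'] * n_y
    for _ in range(max_steps):
        if all(s == pop[0] for s in pop):
            return pop[0]
        step(pop, rng)
    return None
```

On the clique this converges to the initial majority with high probability when the margin is large; the point of the negative results above is that on some graph families the same rules can converge to the minority.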

Tue 2 Dec 2014
15:00 in E245
Eleni Akrida (Liverpool)
Ephemeral Networks with Random Availability of Links: Diameter and Connectivity

In this work we consider temporal networks, the links of which are available only at discrete moments in time randomly selected from a set of natural numbers. Our networks are ephemeral, meaning their links appear sporadically, only at certain times, within a given maximum time. The labels of an edge indicate the moments in time at which the edge is available. In such networks, information has to follow temporal paths, i.e., paths, the edges of which are assigned a strictly increasing sequence of labels. We first examine a very hostile network: the clique of n vertices, each edge of which is known to be available at only one random moment in the time period {1,2,…,n}. We show that information dissemination is very fast with high probability, even in this network, which is hostile with regard to availability.
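Deciding whether a temporal path exists is itself easy: sort the edge availabilities by label and sweep once, recording for each vertex the smallest label of the last edge on some temporal path reaching it. This is a generic sketch of that standard earliest-arrival idea, not an algorithm from the talk; the function name is mine.

```python
def temporal_path_exists(n, labelled_edges, s, t):
    """Is there a temporal path from s to t, i.e. a path whose consecutive
    edges carry strictly increasing labels?
    labelled_edges: (u, v, lab) triples, one per availability of edge {u, v}."""
    INF = float('inf')
    arrival = [INF] * n       # label of the last edge used to reach each vertex
    arrival[s] = 0            # labels are natural numbers >= 1
    for u, v, lab in sorted(labelled_edges, key=lambda e: e[2]):
        au, av = arrival[u], arrival[v]
        if au < lab:                         # can cross u -> v at time lab
            arrival[v] = min(arrival[v], lab)
        if av < lab:                         # can cross v -> u at time lab
            arrival[u] = min(arrival[u], lab)
    return arrival[t] < INF
```

Processing availabilities in increasing label order makes a single pass sufficient, since a move can only build on moves with strictly smaller labels.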

We then consider the least number, r, of random moments in time at which an edge is available that guarantees at least one temporal path between any pair of vertices of the network. We show that r is Ω(log n) even for some networks of diameter 2. Finally, we compare this cost to an optimal deterministic allocation of availability labels that guarantees a temporal path between any pair of vertices. To this end, we introduce the notion of the Price of Randomness and we show an upper bound for general networks.

Finally, given an instance of the above hostile clique, we examine the problem of maintaining temporal paths from every vertex to every other vertex using fewer (random) time labels and show that we can reduce the total number of labels to 2n+O(log n).

Tue 9 Dec 2014
15:00 in E245
Laszlo Vegh (LSE)
A strongly polynomial algorithm for generalised flow maximisation

Generalised flows are a classical extension of network flows, where the flow gets multiplied by a certain gain or loss factor while traversing an arc. This is a widely applicable model, as the factors can be used to model physical changes such as leakage or theft; or alternatively, they can represent conversions between different types of entities, e.g. different currencies. The talk presents the first strongly polynomial algorithm for the generalised flow maximisation problem. This used to be the simplest class of linear programmes with no strongly polynomial algorithm known, and finding one has been a longstanding open question. The algorithm is based on a new variant of the classical scaling technique, called continuous scaling.

Term 2
Tue 20 Jan 2015
15:00 in E245
Maximilien Gadouleau (Durham)
Universal sequential boolean automata networks

A boolean automata network (BAN) is a set of n boolean variables (automata) which evolve over time according to a deterministic function: the value of those variables is called the state of the network. The BAN is sequential if at each time step only one automaton updates its state. At each time step, the state of the network is a function of its initial state; we then say that the network computes that function. We are then interested in finding universal sequential BANs, which can compute any possible boolean function of n variables. We determine almost tight bounds on the minimum time and the minimum size of such a universal network.
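The model is easy to state in code: one global boolean state, one update function per automaton, and a schedule that names which single automaton updates at each step. This generic sketch (names mine) just runs the dynamics; the talk's question is how large and how long a network must be to compute every boolean function this way.

```python
def run_sequential_ban(state, updates, schedule):
    """Run a sequential boolean automata network.
    state: tuple of booleans (one per automaton);
    updates: one function per automaton, mapping the global state to a boolean;
    schedule: sequence of automaton indices, one update per time step."""
    s = list(state)
    for i in schedule:
        s[i] = updates[i](tuple(s))   # only automaton i changes this step
    return tuple(s)
```

For example, a 3-automaton network where automaton 0's rule is the AND of the other two computes that AND in a single scheduled step.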

Tue 27 Jan 2015
15:00 in E245
Martin Milanič (Primorska, Slovenia)
On the structure of 1-perfectly orientable graphs

We study the class of 1-perfectly orientable (1-p.o.) graphs, that is, graphs having an orientation in which every out-neighborhood induces a tournament. 1-p.o. graphs form a common generalization of chordal graphs and circular arc graphs. Even though 1-p.o. graphs can be recognized in polynomial time, little is known about their structure. We prove several structural results about 1-p.o. graphs and characterizations of 1-p.o. graphs in special graph classes. This includes: (i) a characterization of 1-p.o. graphs in terms of edge clique covers, (ii) identification of several graph transformations preserving the class of 1-p.o. graphs, (iii) a complete characterization of 1-p.o. cographs and of 1-p.o. complements of forests, and (iv) an infinite family of minimal forbidden induced minors for the class of 1-p.o. graphs. Joint work with Tatiana Romina Hartinger.
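The definition can be checked by brute force on small graphs (the talk notes that polynomial-time recognition is known, but the following exponential sketch, with names of my choosing, makes the definition concrete): try every orientation and test whether each out-neighbourhood is pairwise adjacent.

```python
from itertools import product, combinations

def is_1po(n, edges):
    """Brute force over all 2^|E| orientations: accept iff some orientation
    makes every out-neighbourhood induce a tournament."""
    E = set(map(frozenset, edges))
    edges = list(edges)
    for bits in product((0, 1), repeat=len(edges)):
        out = {v: [] for v in range(n)}
        for (u, v), b in zip(edges, bits):
            (out[u] if b else out[v]).append(v if b else u)
        if all(frozenset(p) in E                      # out-neighbours pairwise adjacent
               for v in range(n)
               for p in combinations(out[v], 2)):
            return True
    return False
```

A 4-cycle is 1-p.o. (orient it cyclically, so every out-neighbourhood is a single vertex), while K_{2,3} is not: every out-neighbourhood there consists of pairwise non-adjacent vertices, forcing all out-degrees to be at most 1, which cannot account for all six edges.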

Tue 3 Feb 2015
15:00 in E245
Andrew Treglown (Birmingham)
Perfect packings in graphs and directed graphs

We say that a graph G has a perfect H-packing if there exists a set of vertex-disjoint copies of H which cover all the vertices in G. For those H not corresponding to a matching, the decision problem whether a graph G contains a perfect H-packing is NP-complete. Thus, there has been significant interest in obtaining sufficient conditions that force a perfect H-packing. The seminal Hajnal-Szemerédi theorem states that every graph G whose order n is divisible by r and whose minimum degree is at least (1-1/r)n contains a perfect K_r-packing. It is easy to see that the minimum degree condition here is best possible.
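For the smallest interesting case r = 3, a perfect K_r-packing is a partition of the vertices into triangles, and the NP-complete decision problem can at least be solved by backtracking on small graphs. This is an illustrative brute-force sketch (names mine), not a method from the talk:

```python
from itertools import combinations

def has_perfect_triangle_packing(n, edges):
    """Backtracking check for a perfect K3-packing: can the vertices be
    partitioned into vertex-disjoint triangles? (Requires 3 | n.)"""
    E = set(map(frozenset, edges))
    def solve(remaining):
        if not remaining:
            return True
        v = min(remaining)                    # the lowest vertex must be covered
        rest = remaining - {v}
        for a, b in combinations(sorted(rest), 2):
            if {frozenset((v, a)), frozenset((v, b)), frozenset((a, b))} <= E:
                if solve(rest - {a, b}):
                    return True
        return False
    return solve(frozenset(range(n)))
```

K6 has such a packing (its minimum degree 5 comfortably exceeds the Hajnal-Szemerédi bound (1-1/3)·6 = 4), while the 6-cycle, being triangle-free, does not.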

In this talk I will discuss a number of recent results in the area including a degree sequence analogue of the Hajnal-Szemerédi theorem and generalisations of the Hajnal-Szemerédi theorem to the directed graph setting.

Tue 10 Feb 2015
15:00 in E245
Andrei Krokhin (Durham)
On constant-factor approximable Min CSPs

We study the approximability of Minimum Constraint Satisfaction Problems (Min CSPs), where one is given a collection of constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to minimise the number of violated constraints. Constraints are usually given by relations, and we study how the approximability of Min CSP depends on the relations that are allowed to appear in constraints. In this talk, we will focus on characterising Min CSPs that admit a constant-factor approximation algorithm.

This is joint work with Victor Dalmau (UPF Barcelona) and Rajsekar Manokaran (IIT Madras).

Tue 17 Feb 2015
15:00 in E245
Artur Czumaj (Warwick)
Testing Cluster Structure of Graphs

Cluster analysis is a fundamental task in data analysis that aims at partitioning a set of objects into a disjoint collection of objects with similar characteristics. In this talk, we will use the concept of conductance to measure the quality of cluster structure and will focus on a question of approximately recognizing cluster structure of a graph in sublinear time in the framework of property testing in the bounded degree model. We show how to test in O*(sqrt(n)) time whether a graph with n nodes can be partitioned into no more than k parts (clusters) such that the outer-conductance of each cluster is at most phi_o and the inner-conductance of the induced subgraph on each cluster is at least phi_i, for a large spectrum of parameters k, phi_o, phi_i. By the lower bound of Ω(sqrt(n)) for testing graph expansion, which corresponds to the case when k=1 in our problem, our algorithm is asymptotically optimal up to polylogarithmic factors. This is joint work with Pan Peng and Christian Sohler.
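For reference, the conductance of a vertex set S in an unweighted graph is the number of edges crossing the cut divided by the smaller of the two sides' volumes (sums of degrees). A direct computation, with an illustrative function name:

```python
def conductance(adj, S):
    """Conductance of the cut (S, V \\ S) in an unweighted graph given as an
    adjacency dict: crossing edges over the smaller side's volume."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)          # crossing edges
    vol_S = sum(len(adj[u]) for u in S)                            # degree sum in S
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    return cut / min(vol_S, vol_rest)
```

Two triangles joined by a single bridge form a natural two-cluster graph: each triangle has outer-conductance 1/7, while the graph as a whole is a poor expander, which is exactly the structure the tester is meant to detect.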

Tue 24 Feb 2015
No seminar
Tue 3 Mar 2015
15:00 in E245
Elias Koutsoupias (Oxford)
Duality and optimality of multidimensional auctions

We develop a general duality-theory framework for revenue maximization in additive Bayesian auctions. The framework extends linear programming duality and complementarity to constraints with partial derivatives. The dual system reveals the geometric nature of the problem and highlights its connection with the theory of bipartite graph matchings. We demonstrate the power of the framework by applying it to a multiple-good monopoly setting where the buyer has uniformly distributed valuations for the items, which is the canonical long-standing open problem in the area. We also give optimal auctions for a large class of distributions for two items. The duality framework is used not only for proving optimality, but perhaps more importantly, for deriving the optimal mechanism itself.

Tue 10 Mar 2015
15:00 in E245
Paul Spirakis (Liverpool)
Distributed Stable Network Construction

We study protocols by which populations of finite-state agents can construct networks. All such agents begin from the same initial state and all execute the same protocol. Agents can only interact in pairs (such pair-wise meetings are scheduled by a fair scheduler). When two agents interact, the protocol takes as input the states of the two agents and the state of the connection between them (on/off) and updates all of them. Initially all connections are off. The goal of such protocols is to eventually let the system stabilize to a desired graph (network). We give protocols and lower bounds for several basic network construction problems. For some of them we analyse the expected time to convergence under the assumption of a uniform random fair scheduler. Finally, we show some universality results.

This is joint work with Othon Michail and has appeared in ACM PODC 2014.

Term 3
Tue 21 Apr 2015
15:00 in E245
Toby Cubitt (Cambridge)
Complexity classification of local Hamiltonian problems

The “k-local Hamiltonian problem” is the natural quantum analogue of classical constraint satisfaction problems — specifically, it is the quantum generalisation of MAXSAT. This problem has become one of the most important and widely studied problems in quantum complexity theory, partly because it plays a similar foundational role for quantum complexity theory as SAT does for classical complexity theory, but also because it formalises the problem of calculating ground-state energies of physical systems, one of the key challenges in condensed matter physics.

One way of making the problem more physically meaningful is to restrict the Hamiltonian in question by picking its terms from a fixed set S. (The quantum generalisation of picking boolean constraints from some fixed set.) The Heisenberg and Ising models from condensed-matter physics are important examples of such special cases.

We prove a complete characterisation of the complexity of this problem for all 2-local qubit Hamiltonians. Depending on the subset S, the problem falls into one of the following categories:

  • in P
  • NP-complete
  • polynomial-time equivalent to the Ising model (now known to be equivalent to an important quantum complexity class known as stoqMA)
  • QMA-complete (i.e. complete for the quantum analogue of NP)

This result is a quantum analogue of Schaefer’s dichotomy theorem for boolean constraint satisfaction problems.
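To make the physics side concrete, the classical Ising model (a diagonal, and in particular stoquastic, special case of a 2-local Hamiltonian) already has a well-defined ground-state energy problem: minimise over all spin assignments. A brute-force sketch (names mine; the talk concerns the far harder general quantum case):

```python
from itertools import product

def ising_ground_energy(n, couplings, fields):
    """Brute-force ground-state energy of a classical Ising Hamiltonian
    H = sum_{ij} J_ij s_i s_j + sum_i h_i s_i, minimised over s_i in {-1, +1}."""
    best = float('inf')
    for s in product((-1, 1), repeat=n):
        e = sum(J * s[i] * s[j] for (i, j), J in couplings.items())
        e += sum(h * s[i] for i, h in enumerate(fields))
        best = min(best, e)
    return best
```

Even this toy exhibits frustration: on an antiferromagnetic triangle (J = 1 on all three pairs, no fields) not all three constraints can be satisfied simultaneously, and the ground energy is -1 rather than -3.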

I will introduce quantum complexity theory and the k-local Hamiltonian problem, explain our result, and sketch the proof techniques. (Assuming little or no prior knowledge of quantum anything!)

Based on joint work with Ashley Montanaro, in FOCS 2014.


Tue 28 Apr 2015
15:00 in E245
Alejandro Erickson (Durham)
Graphs are at the heart of the cloud

The explosive growth of online services powered by data centres (web search, cloud computing, etc.) has motivated intense research into data centre network (DCN) design over the past decade. Computational demands are testing the limits of current engineering capabilities, and a new field in theoretical research has emerged with its roots in the area of traditional interconnection networks; adaptations of well-understood topologies such as generalized hypercubes, fat-trees, random regular graphs, WK-recursive networks, butterfly networks, as well as topologies geared explicitly towards DCNs, have all been proposed as DCN topologies in the last 7 years. Along with these custom-built (graph) topologies comes the need for theoretical analysis in scenarios that can be radically different from those ordinarily expected in traditional networks. I will give an overview of this emerging field.

I will then present a generic method of adapting a suitably chosen base graph G to build a server-centric DCN topology G*, in such a way that any good networking properties of G can be preserved. In particular, routing algorithms on G can be used in G*, and when G is regular, the bisection width of G* (a well-known throughput metric) can be obtained from the solution to an edge-isoperimetric problem on G.

Tue 5 May 2015
15:00 in E245
Graham Cormode (Warwick)
Trusting the Cloud with Practical Interactive Proofs

When handling large quantities of data, it is often necessary to outsource the computational effort to a third party: this captures current efforts in cloud computing, but also scenarios within trusted computing systems and inter-organizational data sharing. When the third party is not fully trusted, it is important to give assurance that the computation has been performed correctly. This talk presents some recent results in designing new protocols for verifying computations which are streaming in nature: the verifier (data owner) needs only a single pass over the input, storing a sublinear amount of information, and follows a simple protocol with a prover (service provider) that takes a small number of rounds. A dishonest prover fools the verifier with only vanishingly small probability, while an honest prover’s answer is always accepted. Starting from basic aggregations, interactive proof techniques allow a quite general class of computations to be verified, leading to practical implementations.