

Seminars from previous years: 2015/16 2014/15 2013/14 2012/13 2011/12 2010/11

See also NODES activities.

ACiD Seminars 2016/17
Term 1
Thu 20 Oct 2016
12:00 in E245
Vadim Lozin (Warwick)
Breaking the boundaries: from structure to algorithms

Finding a maximum independent set in a graph is an NP-hard problem. However, restricted to the class of line graphs this problem becomes polynomial-time solvable due to the celebrated matching algorithm of Jack Edmonds. What makes the problem easy in the class of line graphs and what other restrictions can lead to an efficient solution? To answer these questions, we employ the notion of boundary classes of graphs. In this talk, we shed some light on the structure of the boundary separating difficult instances of the problem from polynomially solvable ones and analyze tools to break it. We also discuss similar questions with respect to some other algorithmic problems.
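The equivalence the abstract alludes to can be made concrete with a toy check (an illustration only, not from the talk): a maximum independent set in the line graph of G corresponds exactly to a maximum matching in G. The brute-force Python sketch below verifies this on a 5-cycle.

```python
from itertools import combinations

def line_graph_edges(edges):
    """Vertices of the line graph are the edges of G; two are adjacent
    when the original edges share an endpoint."""
    return [(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)]

def max_independent_set_size(vertices, adj_pairs):
    """Brute-force maximum independent set (exponential; illustration only)."""
    adj = {frozenset(p) for p in adj_pairs}
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            if all(frozenset((u, v)) not in adj
                   for u, v in combinations(subset, 2)):
                return r
    return 0

def max_matching_size(edges):
    """Brute-force maximum matching in G (again exponential)."""
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            endpoints = [x for e in subset for x in e]
            if len(endpoints) == len(set(endpoints)):  # pairwise disjoint edges
                return r
    return 0

# A 5-cycle: its line graph is again a 5-cycle.
g_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
lg = line_graph_edges(g_edges)
assert max_independent_set_size(g_edges, lg) == max_matching_size(g_edges)  # both 2
```

Edmonds' matching algorithm makes the right-hand side polynomial, which is exactly why independent set is easy on line graphs.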

Mon 24 Oct 2016
16:00 in E245
Barny Martin (Durham)
The Complexity of Quantified Constraints

We elaborate the complexity of the Quantified Constraint Satisfaction Problem, QCSP(A), where A is a finite idempotent algebra. Such a problem is either in NP or is co-NP-hard, and the borderline is given precisely according to whether A enjoys the polynomially-generated powers (PGP) property. This reduces the complexity classification problem for QCSPs to that of CSPs, modulo that co-NP-hard cases might have complexity rising up to Pspace-complete. Our result requires infinite languages, but in this realm represents the proof of a slightly weaker form of a conjecture for QCSP complexity made by Hubie Chen in 2012. The result relies heavily on the algebraic dichotomy between PGP and exponentially-generated powers (EGP), proved by Dmitriy Zhuk in 2015, married carefully to previous work of Chen.

Mon 31 Oct 2016
16:00 in E245
Andre Nichterlein (Durham)
On parameterized algorithms for polynomial-time solvable problems

Parameterized complexity analysis is a flourishing field dealing with the exact solvability of “intractable” problems. Appropriately parameterizing polynomial-time solvable problems helps to reduce unattractive polynomial running times. In particular, this “FPT in P” approach sheds new light on what makes a problem far from being solvable in linear time, in the same way as classical FPT algorithms help in illuminating what makes an NP-hard problem far from being solvable in polynomial time. Surprisingly, this very interesting research direction has been little explored so far; the known results are rather scattered and do not systematically refer to or exploit the toolbox of parameterized algorithm design.
In this talk I outline known results, explain some of the corresponding techniques, and highlight similarities and differences to the classical design of parameterized algorithms for NP-hard problems.

Mon 7 Nov 2016
16:00 in E245
Argyris Deligkas (Liverpool)
The complexity of Greedy Matching

Motivated by the fact that in several cases a matching in a graph is stable if and only if it is produced by a greedy algorithm, we study the problem of computing a maximum weight greedy matching on weighted graphs, termed GreedyMatching. In sharp contrast to the maximum weight matching problem, for which many efficient algorithms are known, we prove that GreedyMatching is strongly NP-hard and APX-complete, and thus it does not admit a PTAS unless P=NP, even on graphs with maximum degree at most 3 and with at most three different integer edge weights. Furthermore, we prove that GreedyMatching remains strongly NP-hard if the input graph is in addition bipartite. Moreover, we consider two natural parameters of the problem, for which we establish a sharp threshold behavior between NP-hardness and tractability. On the positive side, we present a randomized approximation algorithm (Rgma) for GreedyMatching on a special class of weighted graphs, called bush graphs. We highlight an unexpected connection between Rgma and the approximation of maximum cardinality matching in unweighted graphs via randomized greedy algorithms. We show that, if the approximation ratio of Rgma is ρ, then for every ε > 0 the randomized Mrg algorithm gives a (ρ − ε)-approximation for the maximum cardinality matching. We conjecture that a tight bound for ρ is 2/3; we prove our conjecture true for two subclasses of bush graphs. Proving a tight bound for the approximation ratio of Mrg on unweighted graphs (and thus also proving a tight value for ρ) is a long-standing open problem. This unexpected relation of our Rgma algorithm with the Mrg algorithm may provide new insights for solving this problem. This is joint work with George Mertzios and Paul Spirakis.
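The greedy rule under discussion can be sketched as a generic maximum-weight greedy matching (an illustration only, not the authors' Rgma algorithm). The small path example also shows how a greedy run can fall short of the maximum weight matching.

```python
def greedy_matching(edges):
    """Repeatedly take a heaviest remaining edge and discard all edges
    touching its endpoints.  `edges` is a list of (u, v, weight) triples.
    Ties are broken arbitrarily (here: by list order); maximising the
    result over all greedy runs is what GreedyMatching asks for."""
    matching, used = [], set()
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in used and v not in used:
            matching.append((u, v, w))
            used.update((u, v))
    return matching

# A path a-b-c-d with weights 2, 3, 2: every greedy run picks the middle
# edge (weight 3) and is then stuck, while the maximum weight matching
# takes the two outer edges (total weight 4).
path = [("a", "b", 2), ("b", "c", 3), ("c", "d", 2)]
assert sum(w for _, _, w in greedy_matching(path)) == 3
```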

Mon 14 Nov 2016
16:00 in E245
Wolfram Bentz (Hull)
Synchronisation for Automata, Semigroups, and Groups

Synchronisation is a property of automata and can be understood as a method of error recovery. An automaton is synchronising if there is an input sequence which always brings the automaton into a known state, irrespective of its original state. Such a sequence is called a reset (or synchronising) word.
We can translate this question into the realm of semigroup theory by asking if the transition semigroup associated to the automaton contains a constant map. Similarly, we say a permutation group is synchronising if it generates a synchronising semigroup together with any non-invertible transformation.
Primitivity of a group is a strong property that is required for synchronisation and “usually” also forces synchronisation. Our work examines how usual this situation actually is.
Our results use an auxiliary graph construction that enables us to approach the problem by combinatorial methods.
This is joint work with Joao Araujo (CEMAT, University of Lisbon), Peter J. Cameron (Mathematical Institute, University of St Andrews), Gordon Royle (Centre for the Mathematics of Symmetry and Computation, University of Western Australia), and Artur Schaefer (Mathematical Institute, University of St Andrews).
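The reset-word notion can be made concrete with a standard textbook construction (not a method from the talk): a breadth-first search over subsets of states finds a shortest reset word whenever one exists. The example automaton is Černý's 4-state automaton, which attains the (n − 1)² bound.

```python
from collections import deque

def reset_word(transitions):
    """BFS over subsets of states for a shortest reset word, if any.
    `transitions[a]` maps each state to its successor under letter `a`.
    Exponential in the number of states; fine for small automata."""
    states = frozenset(next(iter(transitions.values())))
    seen, queue = {states}, deque([(states, "")])
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:           # collapsed to a single known state
            return word
        for a, delta in transitions.items():
            nxt = frozenset(delta[q] for q in subset)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None                        # automaton is not synchronising

# Černý's 4-state automaton: 'b' rotates the states, 'a' merges state 0 into 1.
delta = {
    "a": {0: 1, 1: 1, 2: 2, 3: 3},
    "b": {0: 1, 1: 2, 2: 3, 3: 0},
}
w = reset_word(delta)
assert len(w) == 9  # the Černý bound (n-1)^2 = 9 is attained
```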

Mon 21 Nov 2016
16:00 in E245
Nic Georgiou (Durham Maths)
Winkler’s hat game on graphs

Hat games are a popular topic in combinatorics with connections to coding theory, network coding and auctions. In a typical hat game, a team of $n$ players play against a single adversary. The adversary places a coloured hat (from a choice of $q$ different colours) on each player and each player tries to guess the colour of the hat they are wearing by looking at the colours of the hats worn by some of the other players.
Many variations on this basic theme exist; we consider the game where all players must guess simultaneously, and the team wins if they have a deterministic strategy where at least one person guesses correctly for each one of the $q^n$ possible configurations of hats. Each player only sees a subset of the hats, as determined by a graph: each vertex in the graph represents a player and each player sees only the hats of his/her neighbours in the graph. If there is a winning strategy for the graph $G$ when the game is played with $q$ different colours, we say that $G$ is $q$-solvable.
I will present some constructions for general $q$ which yield two classes of $q$-solvable graphs:
i) bipartite, but large, graphs (i.e., number of vertices exponential in $q$);
ii) graphs with largest clique of size $\epsilon q$, and number of vertices linear in $q$, for any $\epsilon > 0$;
and discuss some related open problems.
This is joint work with Maximilien Gadouleau.
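For intuition, a classical construction (background, not one of the talk's results) shows that the complete graph $K_q$ is $q$-solvable: player $i$ guesses the colour that would make the total hat sum congruent to $i$ modulo $q$, and since the true sum is congruent to exactly one such $i$, exactly one player is always right.

```python
from itertools import product

def complete_graph_strategy(q):
    """Player i's guess on K_q: the colour making the total sum = i (mod q),
    given the sum of the hats that player i can see."""
    def guess(i, others_sum):
        return (i - others_sum) % q
    return guess

def wins_everywhere(q):
    """Check the strategy against all q**q hat configurations."""
    guess = complete_graph_strategy(q)
    for hats in product(range(q), repeat=q):
        total = sum(hats)
        seen = [total - hats[i] for i in range(q)]  # player i sees every hat but their own
        if not any(guess(i, seen[i]) == hats[i] for i in range(q)):
            return False
    return True

assert wins_everywhere(3)  # K_3 is 3-solvable
```

The constructions in the talk are harder precisely because the visibility graph is much sparser than a complete graph.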

Mon 28 Nov 2016
16:00 in E245
Victor Dalmau (University Pompeu Fabra, Barcelona)
Approximation of MIN CSP

An instance of the constraint satisfaction problem (CSP) is given by a family of constraints on overlapping sets of variables, and the goal is to assign values from a fixed domain to the variables so that all constraints are satisfied. In the optimization version, the goal is to maximize the number of satisfied constraints (MAX CSP) or, alternatively, to minimize the number of unsatisfied constraints (MIN CSP). This problem is usually parameterized by the set, Gamma, of relations allowed in the constraints, usually called the constraint language. It turns out that MAX CSP/MIN CSP is computationally hard for most constraint languages, which motivates the study of approximation algorithms. In this talk we will focus on the approximation of MIN CSPs. We shall start by addressing the following question: which constraint languages give rise to a MIN CSP that is constant-factor approximable? We shall also study some other weaker approximation notions such as polynomial loss and robust approximation.

Mon 5 Dec 2016
16:00 in E245


Mon 12 Dec 2016
16:00 in E245
Stefano Giani (Durham Engineering)
Why mesh adaptivity?

Mesh adaptivity is becoming a very useful tool to improve the accuracy of finite element simulations. The general idea behind mesh adaptivity is to enrich the finite element space where this is necessary to reduce the error, while introducing the smallest possible number of extra degrees of freedom. In many cases the most advanced adaptive techniques can achieve exponential convergence to the correct solution. In the first part of this talk we give an overview of current mesh adaptivity techniques. In the second part we present a different kind of adaptivity, based on aggregation, that can be used to solve problems on complicated domains efficiently. In the third and final part we explore how finite element aggregation can be used to construct preconditioners.

Term 2
Mon 16 Jan 2017
16:00 in E245
Cedric Gerot (Grenoble)
When can Box-spline subdivision be factorised into elementary steps?

Box-splines are local, possibly anisotropic, piecewise polynomial functions whose integer translates form a partition of unity. They also satisfy a refinement equation, yielding a subdivision scheme which, if it can be factorised into elementary steps, can be implemented in-place and can serve as the prediction operator of a biorthogonal multiresolution framework with Box-splines as scaling functions. However, not every Box-spline subdivision can be factorised into elementary steps. We give a necessary and sufficient condition, and a sketch of the proof, which involves linear and commutative algebra.

Mon 23 Jan 2017
16:00 in E245
Isolde Adler (Leeds)
Testing properties of sparse relational structures

Property testing (for a property P) asks, for a given input, whether it has property P, or is “far” from having that property. A “testing algorithm” is a probabilistic algorithm that answers this question correctly with high probability, by only looking at a constant number of small parts of the input.
Testing algorithms are thought of as “extremely efficient”, making them relevant in the context of big data.
Recently, Newman and Sohler proved that every property of “hyperfinite” graphs is testable in the bounded degree model. In this work we introduce property testing for relational structures of bounded degree, and we study a variant of the bounded degree model that additionally requires the testing algorithm to be uniform in the size of the input.
We show that on relational structures of bounded degree and bounded tree-width, each property that can be defined in monadic second order logic is uniformly testable with polylogarithmic running time.
This is joint work with Frederik Harwath.

Wed 25 Jan 2017
13:00 in E360
Kitty Meeks (Glasgow)
The complexity of finding and counting sum-free subsets

A set A of natural numbers is said to be sum-free if it does not contain distinct x, y and z such that x + y = z. Sum-free sets have been studied extensively in additive combinatorics (Paul Erdős was particularly interested in these sets) but algorithmic questions relating to sum-free sets have thus far received very little attention. We consider the problem, given a set A, of determining whether A contains a sum-free subset of size at least k. We show that this problem is NP-complete in general, but is tractable with respect to certain parameterizations; in the cases where the decision problem is tractable, we also consider the complexity of counting all sum-free subsets of size exactly k. This is joint work (in progress) with Andrew Treglown (University of Birmingham).
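To make the definitions concrete, here is a small brute-force Python sketch (the example set is illustrative; exponential running time is exactly what the hardness result says we cannot expect to avoid in general). Note the role of distinctness: 3 + 3 = 6 does not violate sum-freeness because x and y must be distinct.

```python
from itertools import combinations

def is_sum_free(s):
    """True if s contains no *distinct* x, y, z with x + y = z.
    For positive integers, z = x + y is automatically distinct from x and y."""
    s = set(s)
    return not any(x + y in s for x, y in combinations(s, 2))

def largest_sum_free_subset(a):
    """Brute force over subsets, largest first (exponential; illustration only)."""
    a = list(a)
    for r in range(len(a), 0, -1):
        for sub in combinations(a, r):
            if is_sum_free(sub):
                return set(sub)
    return set()

# In {1..6}, the subset {3, 4, 5, 6} is sum-free (3 + 3 = 6 does not count,
# since x and y must be distinct), and no 5-element subset is.
assert len(largest_sum_free_subset(range(1, 7))) == 4
```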

Mon 30 Jan 2017
16:00 in E245
Viktor Zamaraev (Warwick)
On forbidden induced subgraphs for unit disk graphs

A unit disk graph is the intersection graph of disks of equal radii in the plane. The class of unit disk graphs is hereditary, and therefore can be characterized in terms of minimal forbidden induced subgraphs. In spite of quite active study of unit disk graphs, very little is known about minimal forbidden induced subgraphs for this class. We found only finitely many minimal non-unit-disk graphs in the literature. In this work, we study in a systematic way forbidden induced subgraphs for the class of unit disk graphs. We develop several structural and geometrical tools, and use them to reveal infinitely many new minimal non-unit-disk graphs. Further, we use these results to investigate the structure of co-bipartite unit disk graphs. In particular, we give a structural characterization of those co-bipartite unit disk graphs whose edges between parts form a C_4-free bipartite graph, and show that bipartite complements of these graphs are also unit disk graphs. Our results lead us to propose a conjecture that the class of co-bipartite unit disk graphs is closed under bipartite complementation.
Based on joint work with Aistis Atminas.

Wed 8 Feb 2017
14:00 in E245
Maciej Koutny (Newcastle)
Synthesis of Petri Nets with Whole-place Operations and Localities

Synthesising systems from behavioural specifications is an attractive way of constructing implementations which are correct-by-design and thus require no costly validation efforts. In this talk, systems are modelled by Petri nets and the behavioural specifications are provided in the form of step transition systems, where arcs are labelled by multisets of executed actions. We focus on the problem of synthesising Petri nets with whole-place operations and localities, which are a class of Petri nets powerful enough to express a wide range of system behaviours, including inhibition of actions, resetting of local states, and locally maximal executions.

Mon 13 Feb 2017
16:00 in E245
Iddo Tzameret (Royal Holloway)
Algebraic Proof Complexity

This talk is dedicated to emerging connections between proof-size lower bounds on strong propositional proof systems such as Frege and lower bounds on the size of algebraic circuits.
First, I will show that lower bounds on Frege proofs follow from certain size lower bounds on a fairly weak model of computation, namely, non-commutative algebraic formulas. For this weak model of computation, many strong size lower bounds are already known since the early 90s. The argument is a characterization of propositional proofs as non-commutative formulas.
Then, I will discuss how lower bounds techniques from algebraic circuit complexity yield almost directly lower bounds on the proof-size of fairly strong systems such as Nullstellensatz and the Ideal Proof System, when refutations in both systems are written as algebraic circuits taken from some restricted circuit classes.
I will conclude with some open questions related to advancing the frontiers of algebraic proof complexity.
Based on joint works with Li-Wang and Forbes-Shpilka-Wigderson.

Mon 20 Feb 2017
16:00 in E245
Edouard Bonnet (Middlesex)
Fine-grained complexity of coloring geometric intersection graphs

We are interested in the following problem: given a collection of n objects in the plane and an integer k, is there a proper k-coloring of its intersection graph (i.e., one in which two overlapping objects get different colors)? Here we think of k as a function of n; it can be constant, linear, or anything in between. From an application of a known separator theorem, if the input objects are fat (i.e., have bounded diameter/width ratio), then this problem can be solved in time 2^{O(\sqrt{n k} \log n)}. We will show that this running time is essentially optimal under the Exponential Time Hypothesis (ETH). We then move to intersection graphs of segments (which are quintessentially non-fat) and show that the situation is quite different from the fat-object case. We end on a surprising note: 3-coloring and 4+-coloring of segments have significantly different behaviors. The guideline and motivation of the talk is to go beyond the NP-hardness of coloring these geometric graphs, and to make precise what is the best (hopefully subexponential) running time that we can get. This is based on joint work with Csaba Biró, Dániel Marx, Tillmann Miltzow, and Paweł Rzążewski, and an ongoing project with Stéphan Thomassé and a subset of the aforementioned authors.
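The problem setup can be sketched with a naive brute force (a hypothetical toy, with running time k^n, nowhere near the 2^{O(\sqrt{nk} \log n)} algorithm the talk discusses): build the intersection graph of unit disks and test all colorings.

```python
from itertools import product
from math import hypot

def unit_disk_intersection_graph(centres):
    """Edges of the intersection graph of unit-radius disks: two disks
    overlap exactly when their centres are at distance at most 2."""
    n = len(centres)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if hypot(centres[i][0] - centres[j][0],
                     centres[i][1] - centres[j][1]) <= 2]

def is_k_colorable(n, edges, k):
    """Brute-force proper k-coloring check over all k**n assignments."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

# Three mutually overlapping unit disks (a triangle) plus one far away.
centres = [(0, 0), (1, 0), (0.5, 0.8), (10, 10)]
edges = unit_disk_intersection_graph(centres)
assert not is_k_colorable(len(centres), edges, 2)  # the triangle needs 3 colors
assert is_k_colorable(len(centres), edges, 3)
```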

Mon 27 Feb 2017
16:00 in E245
O-joung Kwon (TU Berlin, Germany)
Algorithmic perspective of rank-width

Rank-width is a width parameter introduced by Oum and Seymour in 2004. Graphs of bounded rank-width are exactly those that can be recursively decomposed along vertex partitions where the number of neighbourhood types from one set to the other is small. Recently, algorithmic applications extending results for tree-width have been studied more intensively. In this talk, we survey some interesting topics related to rank-width.

Mon 6 Mar 2017
16:00 in E245
Mark Fricker (Oxford)
Can graph theory help to understand biological transport networks?

Physical transport networks occur at many different scales in biological systems, ranging from the endoplasmic reticulum at a sub-cellular level, through vascular networks in leaves, plants and animals, to complete organisms spanning hectares in the case of woodland fungi. Recently, image analysis tools have been developed to extract these networks as weighted graphs, typically with a few thousand nodes and edges, with each edge associated with a length and cross-sectional area. In the case of fungi and slime molds, these graphs change over time as the organism grows and responds to variation in the environment. In some instances, the network can be modelled as a hydraulically coupled system that follows biophysical scaling laws, but in many cases researchers are looking for graph-theoretic measures that may capture useful biological features, or draw on an understanding of graph behaviour in other systems to predict behaviour in the living system. Nevertheless, as these biological graphs are embedded in low spatial dimensions (2D or 3D), much of the theoretical work based on the topology of unweighted graphs and graph-generating models does not seem helpful. Likewise, there seems to be a paucity of generative models for weighted graphs in 2D and 3D that can be systematically explored or provide useful null models. Hence the question of which principles from graph theory are appropriate in this context.

Mon 13 Mar 2017
16:00 in E245
Russell Martin (Liverpool)
Perpetually Dominating Large Grids

In the Eternal Domination game, a team of guard tokens initially occupies a dominating set on a graph G. A rioter then picks a node without a guard on it and attacks it. The guards defend against the attack: one of them has to move to the attacked node, while each remaining one can choose to move to one of his neighboring nodes. The new guards’ placement must again be dominating. This attack-defend procedure continues perpetually. The guards win if they can eternally maintain a dominating set against any sequence of attacks, otherwise the rioter wins. We study rectangular grids and provide the first known general upper bound for these graphs. Our novel strategy implements a square rotation principle and eternally dominates m x n grids by using approximately mn/5 guards, which is asymptotically optimal even for ordinary domination. Joint work with Ioannis Lamprou and Sven Schewe.