12 February 2010
A quick tour of constraints-related papers, this time from STOC 2010.
9 March 2009
STACS 2009 was held 26 to 28 February 2009 in Freiburg, Germany. Several papers are relevant to constraints; here are rather biased synopses of two of them.
Let ECSP(S,T) be the problem instance that requires generating all homomorphisms from S to T. For classes A and B of relational structures, all with the same finite signature, let ECSP(A,B) be the problem consisting of all instances ECSP(S,T) where S is in A and T is in B. (In particular, the fixed finite signature means that only constraint satisfaction problems of bounded arity are dealt with here.) In general an ECSP instance has exponentially many solutions, so of interest are classes A and B that lead to a procedure generating all solutions in time polynomial in the total size of the input and the output. The main focus of this paper is on conditions that guarantee that solutions can be generated with polynomial delay (WPD), which in turn guarantees polynomial total time.
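To make the objects concrete, here is a minimal brute-force enumerator of homomorphisms between finite relational structures. The representation (a structure as a pair of vertices and named relations) and the function name are my own; this exponential-time sketch is exactly what the paper's polynomial-delay procedures improve upon for restricted classes.

```python
from itertools import product

def homomorphisms(S, T):
    """Enumerate all homomorphisms from structure S to structure T.

    A structure is a pair (vertices, relations), where relations maps
    each relation name to a set of tuples over the vertices; S and T
    share the same signature.  This brute-force search is exponential
    in |S| -- the paper asks when one can instead enumerate with
    polynomial delay.
    """
    sv, sr = list(S[0]), S[1]
    tv, tr = T
    for image in product(tv, repeat=len(sv)):
        h = dict(zip(sv, image))
        if all(tuple(h[x] for x in tup) in tr[name]
               for name, tups in sr.items() for tup in tups):
            yield h

# S is a single edge a->b; T has the edge 0->1 and the loop 1->1,
# so exactly two homomorphisms exist: a,b -> 0,1 and a,b -> 1,1.
S = ({'a', 'b'}, {'E': {('a', 'b')}})
T = ({0, 1}, {'E': {(0, 1), (1, 1)}})
assert len(list(homomorphisms(S, T))) == 2
```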
Lemma 3.3 is interesting: for a fixed core T, and letting D be the class of disjoint unions of T with structures S such that there is a homomorphism from S to T, the decision problem CSP(_,T) is in PTIME iff the problem of finding a solution SCSP(D,_) is in PTIME. Lemma 3.4 similarly relates SCSP and ECSP: for a class A of relational structures, let A’ be the class of structures constructed from A by adding |A| new independent vertices to each structure; then SCSP(A,_) is in PTIME iff ECSP(A’,_) is in PTIME. Theorem 4.10 states that if A is a class of structures such that the k-core of each structure in A has tree width at most k, then ECSP(A,_) is solvable WPD. Let CQE(A,B) be the problem of enumerating all extendible partial mappings from a given subset of the vertices of a given source structure S from A to a given target structure T from B, where a partial mapping is extendible if it can be extended to a homomorphism from S to T. For a particular class A whose 2-cores have bounded width, Example 6.2 shows that if CQE(A,_) is solvable WPD, then W[1] = FPT.
This paper deals with unbounded-arity problems whose relations are represented as truth tables. Most of the work on the complexity of constraint satisfaction has assumed that each relational structure in the instance is represented by listing all of its tuples. A truth table representation is a generalized adjacency matrix: instead of listing each tuple, its presence is indicated by a 1. For sparse problems truth tables are therefore very large, but for dense problems they are a reasonable representation. The main result is Theorem 1.5: assuming a conjecture of the author, CSP(A,_) is tractable iff the class of hypergraphs of structures in A has bounded adaptive width. Adaptive width is a new measure based on fractional independent sets that (Proposition 5.1) is at most as large as fractional hypertree width; previously, fractional hypertree width was the most general width measure known to guarantee tractability.
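The size trade-off between the two representations is easy to see in a small sketch (the function name is mine, not the paper's notation): a tuple listing grows with the number of tuples present, while the truth table always has one entry per possible tuple.

```python
from itertools import product

def truth_table(relation, domain, arity):
    """Represent a relation as a truth table over all |domain|**arity
    possible tuples: True marks a tuple as present.  The table has
    fixed size, wasteful for sparse relations but reasonable for
    dense ones, unlike a listing that scales with the tuples present.
    """
    return {t: t in relation for t in product(domain, repeat=arity)}

domain = (0, 1, 2)
R = {(0, 1), (1, 2), (2, 0)}            # sparse: 3 of the 9 tuples
tt = truth_table(R, domain, 2)
assert len(tt) == len(domain) ** 2      # fixed size, independent of R
assert sum(tt.values()) == len(R)       # same information as the list
```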
1 August 2008
Many of the talks at the main conference were interesting; just a few are outlined here.
Flener, Pearson: Solving Necklace Constraint Problems
Pierre Flener showed how standard constraint satisfaction techniques can be applied to model an enumeration problem in discrete mathematics, without needing any complicated special-purpose algorithms. Moreover, the necklace constraints arising in such problems have nice applications in scheduling, and fast propagators can be built for them.
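For concreteness, a necklace is a string considered up to rotation. A minimal brute-force enumerator (my own sketch, not the authors' constraint model) picks the lexicographically least rotation of each string as its canonical representative:

```python
from itertools import product

def necklaces(n, alphabet=(0, 1)):
    """Enumerate length-n necklaces (strings up to rotation), each
    represented by its lexicographically least rotation."""
    canonical = {min(s[i:] + s[:i] for i in range(n))
                 for s in product(alphabet, repeat=n)}
    return sorted(canonical)

# The six binary necklaces of length 4:
assert necklaces(4) == [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1),
                        (0, 1, 0, 1), (0, 1, 1, 1), (1, 1, 1, 1)]
```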
Toby Walsh discussed how SLIDE can be used as a building block for many other global constraints (including REGULAR and CARDPATH, which themselves generalize many others). The claim is that SLIDE can be as efficient as specialized propagators for sequencing problems, even though propagation for such a general constraint is NP-complete. When SLIDE is used to encode tractable constraints in the right way, there is a lot of regularity in the encoding, which then yields tractable propagation. Using an intractable propagator in a tractable way is an interesting idea that I feel should be possible to formalize. Experimental results were presented in support, but the theory in Section 6 seems to need further development to constitute a rigorous argument. Of course, 5 pages aren’t really enough to do the topic justice, and I look forward to the journal version. (By the way, coding SLIDE in Tailor’s implementation of Essence’ is proving quite tricky — the lack of macros makes it rather difficult… The unbounded arity of some of these constraints also means that the Global Constraint Catalog doesn’t list them either!)
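The semantics of SLIDE are simple to state even though propagating it is hard: a k-ary constraint must hold on every window of k consecutive variables. A checker sketch (names mine; the paper's contribution is propagation, not checking):

```python
def slide(constraint, xs, k):
    """SLIDE(C, xs): the k-ary constraint C must hold on every window
    of k consecutive variables.  This only checks a full assignment;
    propagation over domains is the hard part."""
    return all(constraint(*xs[i:i + k]) for i in range(len(xs) - k + 1))

# Example window rule: at most one 1 in every window of 3.
at_most_one = lambda a, b, c: a + b + c <= 1
assert slide(at_most_one, [1, 0, 0, 1, 0, 0, 1], 3)
assert not slide(at_most_one, [1, 1, 0, 0], 3)
```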
Samadi, Schaeffer, Torabi Asr, Samar, Azimifar: Using Abstraction in Two-Player Games
Mehdi Samadi demonstrated that it is possible to abstract from game positions, for instance by removing some pawns from a chess endgame. Dropping pieces can yield a position close enough to those in endgame databases that the gain in evaluation accuracy outweighs the inaccuracy introduced by the abstraction. Quite convincing experimental evidence was presented that pure abstraction outperformed Crafty.
Hoche, Flach, Hardcastle: A Fast Method for Property Prediction in Graph-Structured Data from Positive and Unlabelled Examples
Peter Flach spoke about using an iterative neighbourhood majority voting scheme to classify vertices in a graph. This was apparently faster than, and no less accurate than, a Markov chain approach. I think this could be used to weakly tag new objects by pre-populating the suggested tags, which would potentially encourage tagging, instead of presenting an empty tag cloud to the first tagger.
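A minimal sketch of the general idea, assuming the simplest variant (repeatedly give each unlabelled vertex the majority label among its already-labelled neighbours); the names and details are mine, not the paper's exact scheme:

```python
from collections import Counter

def majority_vote(adj, labels, rounds=10):
    """Iterative neighbourhood majority voting: unlabelled vertices
    repeatedly adopt the most common label among their labelled
    neighbours, until no vertex changes or the round limit is hit."""
    labels = dict(labels)
    for _ in range(rounds):
        changed = False
        for v, nbrs in adj.items():
            if v in labels:
                continue
            votes = Counter(labels[u] for u in nbrs if u in labels)
            if votes:
                labels[v] = votes.most_common(1)[0][0]
                changed = True
        if not changed:
            break
    return labels

# A small graph where labels on 1 and 2 spread to 3 and then 4.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
out = majority_vote(adj, {1: 'pos', 2: 'pos'})
assert out[3] == 'pos' and out[4] == 'pos'
```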
There were several talks about applications of BDDs to various problems. Donald Knuth’s statement in a talk last year (that BDDs were at the same stage as linked lists were before everyone started using them as a basic data structure in the 1960s) seems more and more on the mark. There is something in the air…
I missed Adrian Petcu’s talk on his prize-winning dissertation, but a brief outline conveyed on the bus back to the hotel one evening is that 1) communication overhead can swamp the benefit of distributing search, but 2) there are ways of grouping messages to achieve good speedup.
A tiring week, but fun and stimulating.
I attended ECAI 2008 in Patras, Greece last week. The conference was rather large (perhaps even unwieldy), with over 30 workshops, seven concurrent tracks in the main conference, and more than 600 people attending. Due to the excellent turnout from the constraints community, most of my time was spent at talks about constraints and search, with a few talks from other fields to provide some perspective. It was quite easy to speak to people, perhaps because our submission (Cooper, Jeavons, Salamon: Hybrid tractable CSPs which generalize tree structure) unexpectedly received the best paper award. There was a large turnout at my talk, which was a bit daunting!
Before the main conference, several interesting papers were presented at the Workshop on Modeling and Solving Problems with Constraints (proceedings are available online).
Battiti, Campigotto: Penalties may have collateral effects. A MAX-SAT analysis
Paolo Campigotto argued that fiddling with clause weights to smooth out a MAX-SAT fitness surface doesn’t work except to nudge solutions away from a local minimum; there are simply too many side effects to use this technique globally.
Freuder, Wallace, Nordlander: Debugging Constraint Models with Metamodels and Metaknowledge
Richard Wallace discussed a system named Ananke, after the Goddess of Bonds (and therefore an appropriate deity for constraint satisfaction). Given specific solutions that the user of the system expects, Ananke suggests ways to alter an overconstrained specification of a constraint problem so that those solutions become possible. The idea is that this would help find constraints which unnecessarily disallow desired solutions.
Gent, Miguel, Rendl: Common Subexpression Elimination in Automated Constraint Modelling
Andrea Rendl talked about the Tailor system to translate high-level Essence’ specifications of constraint problems into models that can be run using existing constraint programming systems such as Minion. In the translation, it is possible to detect common subexpressions, as a byproduct of parsing and with minimal overhead, resulting in more compact constraint programs and sometimes also much faster search. Even where eliminating subexpressions doesn’t yield any improvement, there is essentially no cost to including it as part of the translation phase.
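The detection trick can be illustrated by hash-consing: while building the expression tree, structurally identical subtrees are looked up in a cache and shared, so detection costs little beyond the parse itself. This is a generic sketch of the idea under my own names, not Tailor's implementation:

```python
def intern_expr(expr, cache):
    """Common subexpression elimination by hash-consing: every
    structurally identical subexpression is represented by a single
    shared node.  Leaves (variables, constants) pass through; interior
    nodes are (operator, child, ...) tuples looked up in the cache."""
    if not isinstance(expr, tuple):        # leaf: variable or constant
        return expr
    node = (expr[0],) + tuple(intern_expr(e, cache) for e in expr[1:])
    return cache.setdefault(node, node)

cache = {}
a = intern_expr(('+', ('*', 'x', 'y'), ('*', 'x', 'y')), cache)
# Both operands of '+' are now the very same shared node, so a solver
# need only introduce one auxiliary variable for x*y.
assert a[1] is a[2]
```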
Makeeva, Szymanek: Revisiting the Generalized Among Constraint
The main point in this tightly argued talk by Polina Makeeva was that it can be faster to use AMONG (if carefully implemented) than to decompose into individual global constraints, since this allows sharing and reuse of information. This kind of result would seem to argue against the traditional, loosely coupled view of constraints interacting purely through a set of domains.
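For reference, the basic AMONG semantics are easy to state (the talk concerns a generalized version and its propagation; this checker, with names of my choosing, only pins down the meaning):

```python
def among(xs, values, lo, hi):
    """AMONG(lo, hi, xs, values): between lo and hi of the variables
    in xs take a value from the given set.  A checker of the basic
    semantics only; propagation is where the engineering lies."""
    count = sum(x in values for x in xs)
    return lo <= count <= hi

assert among([1, 3, 3, 2], {1, 2}, 1, 2)       # two hits, in [1, 2]
assert not among([1, 1, 2], {1, 2}, 0, 2)      # three hits, above 2
```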
Maher, Narodytska, Quimper, Walsh: Flow-Based Propagators for the SEQUENCE and Related Global Constraints
Nina Narodytska argued that for some global constraints one can use an Integer Linear Programming model when a graph model is not known, and that some kinds of ILP models can be transformed into flow problems. When both techniques apply, one can sometimes find fast flow-based propagators. An example is the SEQUENCE constraint, for which an algorithm was presented that enforces domain consistency down a branch of the search tree asymptotically faster than the best previous algorithm.
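SEQUENCE can be seen as AMONG applied to every window of k consecutive variables. A checker sketch of that semantics (my own formulation; the talk is about propagating it, via flows, over domains rather than checking assignments):

```python
def sequence(xs, values, lo, hi, k):
    """SEQUENCE(lo, hi, k, xs, values): every window of k consecutive
    variables contains between lo and hi values from the given set,
    i.e. AMONG(lo, hi) slid along the variables."""
    return all(lo <= sum(x in values for x in xs[i:i + k]) <= hi
               for i in range(len(xs) - k + 1))

# Between 1 and 2 occurrences of value 1 in every window of 3:
assert sequence([1, 0, 1, 0, 1, 0], {1}, 1, 2, 3)
assert not sequence([1, 0, 0, 0, 1], {1}, 1, 2, 3)   # a window with none
```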