## STOC 2010

A quick tour of constraints-related papers, this time from STOC 2010. Some of these have been floating around online for a few months, and STOC does not necessarily publish the most important papers. But an interesting sample, nonetheless.

Dániel Marx introduced a hypergraph width measure called submodular width. The exposition uses the number of variables as the problem parameter, where one is trying to answer a fixed conjunctive query against a large input database. The main result is then that if H is a class of hypergraphs of bounded submodular width, then the class of CSP instances whose queries have their structure in H is fixed-parameter tractable, while if H does not have bounded submodular width then the Exponential Time Hypothesis implies that this class is not fixed-parameter tractable. However, a key result is that bounded submodular width is the same as bounded adaptive width; adaptive width is a measure Marx introduced last year. So while the technical achievement is impressive, the results are not all that surprising (after reading the paper, of course, so this is said firmly tongue in cheek).
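Roughly (my notation, paraphrasing the definition from memory, so the precise side conditions may differ): the submodular width of a hypergraph H is

```latex
\mathrm{subw}(H) \;=\; \sup_{b}\; \min_{(T,\,\{B_t\})}\; \max_{t \in T}\; b(B_t),
```

where (T, {B_t}) ranges over tree decompositions of H, and b ranges over monotone submodular functions on subsets of the vertices with b(∅) = 0 that are edge-dominated, i.e. b(e) ≤ 1 for every hyperedge e. Fractional hypertree width replaces the supremum over all such b by the single fractional edge-cover function, so bounded fractional hypertree width implies bounded submodular width.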

Holger Dell and Dieter van Melkebeek showed that a non-trivial sparsification of SAT would imply a collapse of the polynomial hierarchy. This shows that some of the known kernelizations for fixed-parameter tractable problems cannot be improved unless the polynomial hierarchy collapses.
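Concretely, my reading of their main statement (hedging on the exact formulation) is:

```latex
\forall k \ge 3,\ \varepsilon > 0:\quad
\text{$k$-SAT on $n$ variables has no polynomial-time compression to } O(n^{k-\varepsilon}) \text{ bits,}\\
\text{unless } \mathrm{coNP} \subseteq \mathrm{NP/poly}
\quad (\text{which collapses PH to its third level}).
```

If I recall correctly, one consequence they draw is that vertex cover, parameterised by solution size k, has no kernel with O(k^{2−ε}) edges under the same assumption.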

Ken-ichi Kawarabayashi and Paul Wollan provide a shorter proof for part of the body of results culminating in the graph minor recognition algorithm (I am guessing at this last part, about this being about recognition, as the paper is not online yet).

Ola Svensson shows that a variant of the unique games conjecture would imply that minimizing the makespan for precedence-constrained scheduling (with identical machines) cannot be approximated within a factor of 2 − ε for any ε > 0. This closes the gap between the factor 4/3 below which the problem is known to be NP-hard and Ron Graham's classic (2 − 1/m)-approximation algorithm.
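Graham's list scheduling algorithm, whose makespan is classically at most (2 − 1/m) times optimal on m machines, is simple enough to sketch; the input encoding and names here are mine:

```python
import heapq

def list_schedule(times, preds, m):
    """Graham's list scheduling for precedence-constrained makespan
    minimization: whenever a machine is idle and some job has all of its
    predecessors finished, start it.  Assumes the precedence relation is a
    DAG.  times[j] is job j's processing time, preds[j] its predecessors.
    Returns the makespan of the greedy schedule."""
    n = len(times)
    indeg = [len(preds[j]) for j in range(n)]
    succs = [[] for _ in range(n)]
    for j in range(n):
        for p in preds[j]:
            succs[p].append(j)
    ready = [j for j in range(n) if indeg[j] == 0]  # all predecessors done
    running = []  # min-heap of (finish_time, job)
    now, done, free = 0, 0, m
    while done < n:
        # start ready jobs while idle machines remain
        while ready and free > 0:
            j = ready.pop()
            heapq.heappush(running, (now + times[j], j))
            free -= 1
        # advance time to the next job completion
        now, j = heapq.heappop(running)
        free += 1
        done += 1
        for s in succs[j]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return now
```

Svensson's result says that, under the stated conjecture, the leading constant 2 in Graham's guarantee is essentially the best possible.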

Martin Dyer and David Richerby wrote about dichotomy results for #CSP, the problem of counting the number of solutions of a CSP. Haven’t seen the paper, but I speculate that they may have made more concrete Andrei Bulatov’s dichotomy result from ICALP 2008, which was expressed via algebraic properties.
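To make the counting problem concrete, here is a brute-force #CSP counter (illustrative only, and the encoding is mine; the interesting question is of course for which constraint languages this count is computable in polynomial time):

```python
from itertools import product

def count_csp(domain, n_vars, constraints):
    """Count assignments of domain values to n_vars variables that satisfy
    every constraint.  A constraint is a pair (scope, relation): scope is a
    tuple of variable indices, relation a set of allowed value tuples."""
    return sum(
        1
        for assign in product(domain, repeat=n_vars)
        if all(tuple(assign[i] for i in scope) in rel
               for scope, rel in constraints)
    )
```

For example, with a disequality relation on the edges of a three-vertex path, this counts the proper 2-colourings of that path.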

Paul Beame, Trinh Huynh, and Toniann Pitassi show how to use formulas that require long proofs of unsatisfiability in tree resolution to generate formulas that are hard for much stronger proof systems. This “hardness amplification” method generalises several previous ad hoc constructions.

Fabian Kuhn, Nancy Lynch, and Rotem Oshman introduced a model of dynamic graphs with the T-interval connectivity property. Such a graph maintains a stable connected spanning subgraph across every window of T consecutive rounds, but there is no overall stability condition. The main result is that any computable function can then be computed in O(n + n²/T) rounds with not-too-large messages. I think this model captures the essential behaviour of the Internet rather elegantly, and look forward to more results.
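The stability condition is easy to state operationally; here is a small checker for it, a sketch under my own encoding (rounds is a list of per-round undirected edge sets, each edge a sorted vertex pair, and T is the window length):

```python
def is_connected(nodes, edges):
    """Depth-first connectivity check on an undirected graph."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return seen == set(nodes)

def t_interval_connected(nodes, rounds, T):
    """True iff in every window of T consecutive rounds, the edges present
    throughout the whole window form a connected spanning subgraph."""
    for i in range(len(rounds) - T + 1):
        stable = set(rounds[i])
        for edges in rounds[i + 1 : i + T]:
            stable &= set(edges)
        if not is_connected(nodes, stable):
            return False
    return True
```

With T = 1 this only asks that each round's graph be connected; larger T demands a spanning connected subgraph that survives unchanged across each window.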

Finally, not really constraints-related, but Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay, and John Watrous showed that QIP = PSPACE (= IP). So adding quantum capabilities to interactive proof systems does not make them more powerful.

For more, Shiva Kintali is posting links to the online versions of the STOC 2010 accepted papers.

(Edit 20100312: add link and clarify contribution of Dell/van Melkebeek.)
