The FOCS 2010 accepted papers were announced a few days ago. This is a quick summary of a few papers that caught my eye.

Shiva Kintali has kindly compiled a list of the FOCS 2010 accepted papers, with links to the papers where available.

Last year I mentioned Benjamin Rossman’s monotone complexity result for k-CLIQUE.

In May, Richard Lipton discussed the Arora/Barak/Steurer subexponential algorithm for Unique Games.

Overall, there seems to be a lot of use of probabilistic techniques, and also a strong algorithmic focus.

Ankur Moitra and Greg Valiant extended their STOC 2010 paper with Adam Tauman Kalai on efficiently learning mixtures of two Gaussian distributions to mixtures of more than two. They show that there is a polynomial-time algorithm to estimate the parameters of the mixture. Via a simple construction, they also provide the noteworthy lower bound that the complexity depends exponentially on the number of distributions in the mixture, even for univariate distributions. The open question is how to turn these ideas into a practical algorithm, or at least one with asymptotically optimal runtime. A second paper, by Mikhail Belkin and Kaushik Sinha, deals with the same subject. They additionally show that the parameters of a univariate distribution from the so-called polynomial family (which includes many of the most basic distributions, such as the exponential, Poisson, binomial, and Gamma distributions) can all be learned in polynomial time, but they don’t deal with lower bounds.

(At this point I can’t resist some necromancy from the STOC 2010 list. Ryan Williams neatly tied lower and upper bounds together, by showing that small improvements over brute-force search would yield lower bounds such as separating LOGSPACE from NP. Theorem 5.1 is noteworthy, showing that if P = NP, then there is a language that can be recognized with few bits of nondeterminism in little time, but which cannot be solved simultaneously in small time and polylogarithmic space. See also the paper with Mihai Pǎtraşcu at SODA 2010, which I’m pretty sure wasn’t coauthored by Florham Park, whatever Google Scholar’s author recognition algorithm thinks.)

Zdeněk Dvořák, Daniel Král, and Robin Thomas showed that first-order properties of sparse relational structures can be decided in linear time in classes with bounded expansion. The tool is a data structure which can be initialized in linear time, and which allows testing for a first-order property in constant time. The key results were obtained independently (but without the nifty data structure) by Anuj Dawar and Stephan Kreutzer for nowhere dense classes of graphs, which include those of bounded expansion.
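A toy example conveys why sparsity helps with first-order model checking. Containing a triangle is a first-order property (∃x,y,z: E(x,y) ∧ E(y,z) ∧ E(x,z)), and naively checking all triples costs cubic time; but on a graph of maximum degree d, inspecting each vertex's pairs of neighbors decides it in O(n·d²) time, which is linear for constant d. This is only a warm-up for one fixed formula on one very simple sparse class, nothing like the general machinery of the paper; the function name is my own.

```python
def has_triangle_bounded_degree(adj):
    """Decide the first-order property "G contains a triangle" in
    O(n * d^2) time on a graph of maximum degree d, i.e. linear time
    when d is a constant.  adj maps each vertex to its set of neighbors.
    """
    for u, nbrs in adj.items():
        nbrs = list(nbrs)
        # Check every pair of neighbors of u for an edge between them.
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                if nbrs[j] in adj[nbrs[i]]:
                    return True
    return False
```

The Dvořák–Král–Thomas result does vastly more: one linear-time preprocessing pass, after which *any* fixed first-order property can be tested in constant time on any bounded-expansion class.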

Alexandr Andoni, Robert Krauthgamer, and Krzysztof Onak introduce a new asymmetric query model and apply it to the distance threshold estimation problem, a form of approximate edit distance. The model only charges for queries to one of the input strings. One consequence is that repetition in the inputs can lead to an exponential increase in complexity.
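The asymmetric flavor of the model can be illustrated with a much simpler problem than edit distance. Suppose one string is fully known and the other is hidden behind a position oracle, and we only pay for oracle calls: sampling random positions estimates the Hamming mismatch rate with a query budget independent of the string length. This is my own toy sketch of the "charge only one side" idea, not the paper's algorithm, and Hamming distance stands in for the far harder edit distance.

```python
import random

def estimate_mismatch_rate(known, oracle, n, samples=500):
    """Estimate the fraction of mismatched positions between a fully
    known string and a hidden string of the same length n, accessible
    only via position queries.  Only the `samples` oracle calls to the
    hidden string are charged."""
    hits = 0
    for _ in range(samples):
        i = random.randrange(n)   # one (charged) query to the hidden string
        if oracle(i) != known[i]:
            hits += 1
    return hits / samples
```

For edit distance no such naive sampling works, since a single insertion shifts every later position; that gap is what makes the distance threshold estimation results interesting.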

Lots of nice papers to digest! A few other papers would have been interesting to highlight, but they are not online. Oh well.

