Constraints

15 August 2010

A simple reduction

Filed under: Commentary — András Salamon @ 3:01

Recent online discussion about computational complexity included the notion that the structure of solutions of SAT might be useful in proving separation results. I want to point out a further obstacle to this idea.

Previously I discussed k-ISOLATED SAT, the decision problem which attempts to ramp up the hardness of SAT by requiring not just a solution, but one that is k-isolated (so that any other solution differs in at least k+1 variables). k-ISOLATED SAT is tangentially related to the question that Valiant and Vazirani discussed in their 1986 paper "NP is as easy as detecting unique solutions": if we require our Turing machine to provide a correct answer only when the input instance has precisely one solution, how does the difficulty of the problem change? This modifies SAT into a promise problem.
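To make the definition concrete, here is a brute-force checker for the isolation of a formula's solution set, in Python. This is only a sketch (the helper names solutions and isolation are my own, and the enumeration is exponential in n, so it is usable only on tiny instances); clauses are encoded as lists of signed 1-based variable indices, in the usual DIMACS style.

    from itertools import product

    def solutions(clauses, n):
        # Enumerate all satisfying assignments of a CNF formula over
        # variables 1..n.  A literal l > 0 asserts that variable l is
        # true, l < 0 that it is false.
        sols = []
        for bits in product([False, True], repeat=n):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses):
                sols.append(bits)
        return sols

    def isolation(clauses, n):
        # Largest k such that every solution is k-isolated, i.e. any two
        # distinct solutions differ in at least k+1 variables.  With at
        # most one solution, the solutions are vacuously n-isolated.
        sols = solutions(clauses, n)
        if len(sols) <= 1:
            return n
        min_dist = min(sum(a != b for a, b in zip(s, t))
                       for i, s in enumerate(sols) for t in sols[i + 1:])
        return min_dist - 1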

Ryan Williams discusses the randomized reduction from SAT to the promise problem, one of Valiant and Vazirani's results, as one part of a critique of Deolalikar's manuscript. Williams indicates that a derandomization hypothesis such as P = RP is generally believed to hold. Personally I don't have a strong feeling either way, as some reasonable things could be true in a world where these classes do not coincide. However, P = RP would mean that the Valiant-Vazirani reduction could be made deterministic.

Here I want to point out an even weaker reduction. It seems obvious, but might still be interesting. This reduction shows that any separation of the solution space of k-SAT must be quite strong, stronger than Deolalikar currently seems to be considering. The separation argument in his draft (in the original, through to the most recent third version) seems to hinge on a gap between a lower bound of \Omega(n)-isolation and an upper bound of O(polylog(n))-isolation of solutions. This is not completely clear, because he uses an undefined notion of probabilistic "parameters", and he also writes O(n) in several places to indicate the lower bound, so perhaps he actually meant something else.

I now show that one can reduce SAT to n^{1-\epsilon}-ISOLATED SAT, for any fixed \epsilon > 0. This rather narrows the wiggle room in any proof strategy that relies on showing a gap exists.

Simply apply the following deterministic reduction, which relies on no derandomization assumptions. Let k = \lceil 1/\epsilon - 1 \rceil, and create n^k variables for each of the n original variables in the instance. Then add extra clauses to enforce that the same value is assigned to all n^k variables associated with each original variable. This creates a new instance with N = n^{k+1} variables and polynomially many clauses, in which any two distinct solutions differ in at least n^k variables; since k+1 \geq 1/\epsilon, we have n^k = N^{k/(k+1)} \geq N^{1-\epsilon}. Adjusting for the increased number of variables, this is then a reduction from SAT to n^{1-\epsilon}-ISOLATED SAT.
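For concreteness, here is a sketch of this padding reduction in Python, matching the clause encoding of the checker above (the function name pad_for_isolation is my own, and the construction is only practical for tiny instances, since the padded formula has n^{k+1} variables).

    import math

    def pad_for_isolation(clauses, n, eps):
        # Deterministic padding: replace each original variable by a block
        # of n**k copies, k = ceil(1/eps - 1), tied together by equality
        # clauses.  Any two distinct solutions of the padded instance then
        # differ in at least n**k of its N = n**(k+1) variables.
        k = math.ceil(1 / eps - 1)
        block = n ** k  # copies standing in for each original variable

        def rep(i):
            # First (representative) variable of the block for variable i.
            return (i - 1) * block + 1

        # Rewrite the original clauses over the block representatives.
        new_clauses = [[rep(abs(l)) * (1 if l > 0 else -1) for l in c]
                       for c in clauses]
        # Equality clauses force all copies in a block to agree.
        for i in range(1, n + 1):
            for j in range(rep(i) + 1, rep(i) + block):
                new_clauses.append([-rep(i), j])  # rep true  => copy true
                new_clauses.append([rep(i), -j])  # copy true => rep true
        return new_clauses, n * block  # padded clauses, new variable count

For instance, pad_for_isolation([[1, 2]], 2, 0.5) pads the single clause (x_1 \vee x_2) into an instance on N = 4 variables whose three solutions are pairwise at Hamming distance at least 2 = N^{1-\epsilon}, as the isolation checker above confirms.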

In other words, any argument that considers the structure of the solution space in terms of how far apart solutions are must be able to work within the gap that remains between n^{1-\epsilon}-ISOLATED SAT and n-ISOLATED SAT.

But wait, there’s more! The reduction has further consequences which are worth examining on their own merits.

Also related to k-ISOLATED SAT is the complexity class UP, which consists of those decision problems in NP where every instance has at most one solution. This class lies between P and NP in the complexity hierarchy, but it is not known whether either containment is strict. Note that n-ISOLATED SAT is precisely the UP version of SAT: if there are n variables, then n-isolation ensures there can be at most one solution, and if there is at most one solution, then the solutions are trivially n-isolated.
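As a tiny sanity check of this equivalence, the brute-force checker sketched earlier behaves as expected (again reusing the hypothetical isolation helper):

    # Both variables forced true: a unique solution, hence n-isolated.
    print(isolation([[1], [2]], 2))  # 2, i.e. n

    # x1 or x2: three solutions, some at Hamming distance 1, so 0-isolated.
    print(isolation([[1, 2]], 2))    # 0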

The gap is intriguing even ignoring the current debate. We can reduce SAT to SAT with n^{1-\epsilon}-isolated solutions. It is not obvious how to make a deterministic reduction going the other way, so it seems a reasonable guess that the difficulty increases monotonically up to n^{1-\epsilon}-isolation. On the other hand, SAT with n-isolated solutions is in UP, so it seems to be easier than the original problem. It thus looks like increasing the isolation starts making the problem easier, from some point onward. The question is then:

At what value of the parameter k does increasing the degree of isolation make k-ISOLATED SAT become easier? Further, does the difficulty decrease monotonically from then on, or does the complexity have a more interesting behaviour?

(Note added just before "going to press": I have now seen vloodin's post from some hours ago, which appears to have a similar idea in mind, and Terence Tao's comments about how he interprets "polylog parameterizability". I will leave these comments to stand as they are; perhaps someone will find them useful or point out any problems.)

(Edit 20100815: NP -> P, thanks to Ryan Williams for pointing out the error.)


2 Comments »

  1. Hi, note that NP=RP is NOT believed to hold. But it is believed that the reduction from SAT to Unique SAT can be derandomized.

    Comment by Ryan Williams — 15 August 2010 @ 3:57

