Constraints

10 February 2010

P=NP consequences

Filed under: Question — András Salamon @ 1:30

The consequences of P = NP would be “potentially stunning”, according to Stephen Cook, in his Millennium Prize overview of the P versus NP problem.

Is this really true?

Richard Lipton has argued eloquently that we should consider the possibility that P may equal NP. In this spirit, I want to briefly discuss what P = NP would imply.

As is usually noted in discussions of P = NP consequences, one cannot simply say “if P = NP then wondrous things happen”. Before we can conclude anything “potentially stunning”, we need an additional hypothesis along the lines of “there is a practical algorithm for solving an NP-complete problem”, by which Cook appears to mean “we can solve some NP-complete problem in time $O(n^k)$ for smallish $k$”. For instance, an algorithm that somehow exploited the monster group to solve an NP-complete problem in $O(n^{2^{180}})$ time would probably have no practical applications. So the additional condition is important.

Yet even a low-degree polynomial bound for an algorithm for an NP-complete problem would not carry any special implications for most problems in P. If an algorithm existed to decide TSP in time $O(n^9)$, then it might be possible to improve the way courier companies allocate parcels to drivers. At the same time, factoring might require $\Omega(n^{1024})$ time, and an algorithm achieving this bound would mean little for cryptography.

Many popular accounts of P versus NP seem to be based on a belief that a fast algorithm for some NP-complete problem would let us solve any problem in NP quickly. But this is not true.

Denote by $Q_r$ some problem in P which is in $DTIME(n^{r+1})$ but not in $DTIME(n^r)$. The Time Hierarchy Theorem guarantees that such a $Q_r$ exists for each positive $r$, regardless of the status of P versus NP.
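For reference, the version of the theorem used here can be stated as follows (a standard formulation; the exact constructibility conditions and $\log$ factor vary between textbooks):

```latex
% Time Hierarchy Theorem (deterministic, standard form):
% if f and g are time-constructible and f(n) log f(n) = o(g(n)), then
\[
  \mathrm{DTIME}(f(n)) \subsetneq \mathrm{DTIME}(g(n)).
\]
% Taking f(n) = n^r and g(n) = n^{r+1} yields, for each r >= 1,
% a language
\[
  Q_r \in \mathrm{DTIME}(n^{r+1}) \setminus \mathrm{DTIME}(n^r).
\]
```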

So there are problems in P that are not “easy”, even if SAT could be solved in linear time with a small constant. If a problem is not in $DTIME(n^r)$ then for any deterministic Turing machine deciding the problem, and any constant $C,$ infinitely many instances of the problem require the machine to take more than $Cn^r$ steps. I think it is reasonable to say that in this case, for large $r$, the problem is not “easy”.

So far the consequences of P = NP do not seem all that impressive, but there is one more thing worth noting.

Many of the known reductions from SAT to NP-complete problems are quite efficient. This means that if one of these NP-complete problems could be solved by an algorithm that requires at most $O(n^k)$ time, then most NP-complete problems would also have time complexity “close” to $O(n^k)$. So if P = NP then the asymptotically hardest problems in P (and NP) would be the $Q_r$ problems. These certainly would not look like most of the usual NP-complete problems.
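To make “close” concrete, here is the composition, under purely hypothetical bounds chosen for illustration: suppose the reduction from SAT to the solved problem maps an instance of size $m$ to an instance of size $O(m^c)$ for some small $c$, and runs within that time.

```latex
% Hypothetical bounds, for illustration only:
%   SAT instance of size m  --reduction-->  instance of size O(m^c)
%   solved NP-complete problem decidable in time O(n^k)
% Composed running time for SAT:
\[
  O(m^c) + O\big((m^c)^k\big) = O\big(m^{ck}\big),
\]
% so for small c and k, SAT (and any problem that reduces
% efficiently to SAT) inherits a low-degree polynomial bound.
```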

So we need to add another condition to the list: if P = NP, and there is a fast algorithm for some NP-complete problem, and the hard problems in P are of no practical interest, then some potentially stunning things would be true.

I think this third hypothesis is quite interesting.

Reductions to NP-complete problems are all via SAT. The universal reduction in the Cook-Levin Theorem turns an instance of a problem $Q$ in NP into an instance of SAT. If the instance of $Q$ is of size $n$, and the polynomial upper bound on the time required for a multi-tape non-deterministic Turing machine to decide $Q$ is $p(n)$, then the size of the SAT instance is $O(p(n) \log p(n))$ (see Stephen Cook’s proof of this).
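Combining this bound with a hypothetical $O(m^k)$ algorithm for SAT, a problem $Q$ decidable by an NDTM in time $p(n)$ would be decidable deterministically in time roughly:

```latex
\[
  O\big( \left( p(n)\,\log p(n) \right)^{k} \big).
\]
% The degree of the resulting polynomial for Q is thus governed
% by p(n), the nondeterministic time needed to decide Q.
```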

This means that the key issue (in a P = NP world) is how efficiently a language can be recognized by an NDTM. Most of the commonly studied problems in NP have certificates that can be verified in linear time. So these problems are close to SAT, and are not going to be the $Q_r$ languages.
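As an illustration of linear-time verification, here is a sketch of a SAT certificate checker (the clause representation is a hypothetical DIMACS-style encoding chosen for the example): checking a candidate assignment against a CNF formula takes time linear in the total number of literals.

```python
# Sketch: linear-time certificate verification for SAT.
# A formula is a list of clauses; each clause is a list of nonzero ints
# (DIMACS-style: literal v means variable v is true, -v means false).
# The certificate is a dict mapping each variable to a bool.

def verify_sat(clauses, assignment):
    """Check the assignment against every clause; O(total literals)."""
    for clause in clauses:
        if not any(
            assignment[abs(lit)] == (lit > 0)  # is this literal satisfied?
            for lit in clause
        ):
            return False  # this clause is falsified by the assignment
    return True

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: True, 3: False}))   # True
print(verify_sat(formula, {1: False, 2: True, 3: False}))  # False
```

Each literal is examined at most once, so the verifier runs in time linear in the size of the formula, which is exactly the sense in which such certificates are “easy” to check.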

The important question is then:

Question: In NP, which are the languages with hard-to-verify certificates?

If P is not equal to NP, then there are problems in NP that are asymptotically harder than any problem in P. So the efficiency of reductions to SAT is not as important in this case. But even independently of P versus NP, it would be worth looking for natural candidates for problems $Q_r$ for each $r$, as well as problems $Q'_r$ that might require $\Omega(n^{r+\epsilon})$ time to solve on an NDTM. Proving non-trivial lower bounds is hard, but at least we could begin with some candidate problems!

(Edit 20100223: added link to Stross story, added clarification “for large r”. Edit 20100309: removed spurious “not NP-complete” phrase.)

1. […] on from the previous post, the question is then: are there some extremely unlikely consequences of P = […]

Pingback by Is P=NP a reasonable hypothesis? « Constraints — 10 February 2010 @ 13:00

2. It is not hard to construct examples of problems from NP for which we do not know how to verify a certificate *efficiently* (say in linear time), but I don’t think you can easily prove there are indeed verifiable efficiently.

Comment by Standa — 11 February 2010 @ 18:46

• …there are indeed NOT verifiable efficiently.

Comment by Standa — 11 February 2010 @ 18:47

• Yes, agreed. But are there any natural ones — problems that correspond to something we would be interested in solving?

Comment by András Salamon — 11 February 2010 @ 21:45

3. […] to discuss these consequences in a little more detail. This is the counterpart to the question of what would happen if we knew that P = NP. Neither question has anything to do with whether a particular attempt at a proof works or not. […]

Pingback by P not equal to NP (P ≠ NP) consequences « Constraints — 13 August 2010 @ 20:43

4. […] A while back I asked whether there are any natural problems that are in but not in . […]

Pingback by k-ISOLATED SAT « Constraints — 15 August 2010 @ 14:21
