The consequences of P = NP would be “potentially stunning”, according to Stephen Cook, in his Millennium Prize overview of the P versus NP problem.
Is this really true?
Richard Lipton has argued eloquently that we should consider the possibility that P may equal NP. In this spirit, I want to briefly discuss what P = NP would imply.
As is usually noted in a discussion of P = NP consequences, one cannot simply say “if P = NP then wondrous things happen”. What is needed before we can conclude anything “potentially stunning” is an additional hypothesis along the lines of “there is a practical algorithm for solving an NP-complete problem”, by which Cook appears to mean “we can solve some NP-complete problem in time $O(n^k)$ for smallish $k$”. For instance, an algorithm that somehow exploited the monster group to solve an NP-complete problem, in time $O(n^k)$ with $k$ comparable to the order of that group (about $8 \times 10^{53}$), would probably not have practical applications. So the additional condition is important.
Yet even a low-degree polynomial bound for an algorithm for an NP-complete problem would not be enough for any special implications for most problems in P. If an algorithm existed to decide TSP in time $O(n^2)$, say, then it might be possible to improve the way courier companies allocate parcels to drivers. At the same time, factoring might require time $n^{10}$, say, but an algorithm achieving this bound would mean little for cryptography.
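To make the cryptography point concrete, here is a back-of-envelope calculation; the exponent and hardware figures are my own illustrative choices. Even a degree-10 polynomial-time factoring algorithm would be useless against a 2048-bit modulus.

```python
# Illustrative arithmetic: a hypothetical factoring algorithm running in
# n**10 steps on n-bit inputs, attacked with generous petascale hardware.
n = 2048                      # bits in a typical RSA modulus
steps = n ** 10               # elementary steps: about 1.3e33
ops_per_second = 10 ** 15     # optimistic: a full petascale machine
seconds_per_year = 3.15 * 10 ** 7

years = steps / (ops_per_second * seconds_per_year)
print(f"{steps:.1e} steps, about {years:.1e} years")
```

So a polynomial running time, on its own, guarantees nothing about practicality.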
Many popular accounts of P versus NP seem to be based on a belief that a fast algorithm for some NP-complete problem would let us solve any problem in NP quickly. But this is not true.
Denote by $Q_r$ some problem in P which is in $\mathrm{DTIME}(n^r)$ but not in $\mathrm{DTIME}(n^{r-1})$. The Time Hierarchy Theorem guarantees the existence of $Q_r$ for each positive integer $r$. This is true regardless of the status of P versus NP.
So there are problems in P that are not “easy”, even if SAT could be solved in linear time with a small constant. If a problem is not in $\mathrm{DTIME}(n^{r-1})$, then for any deterministic Turing machine deciding the problem, and any constant $c$, infinitely many instances of size $n$ require the machine to take more than $c \, n^{r-1}$ steps. I think it is reasonable to say that in this case, for large $r$, the problem is not “easy”.
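A rough comparison (the numbers here are mine and purely illustrative) shows why such a problem stays hard even in a world with very fast SAT:

```python
# Suppose SAT were decidable in 100*n steps (a "linear time, small
# constant" scenario). A problem outside DTIME(n**4) still needs more
# than c*n**4 steps infinitely often, which is hopeless at this size.
n = 10 ** 6
sat_steps = 100 * n    # 1e8 steps: entirely feasible
hard_steps = n ** 4    # 1e24 steps: far beyond current hardware
print(sat_steps, hard_steps, hard_steps // sat_steps)
```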
So far the consequences of P = NP do not seem all that impressive, but there is one more thing worth noting.
Many of the known reductions from SAT to NP-complete problems are quite efficient. This means that if one of these NP-complete problems could be solved by an algorithm that requires at most $O(n^k)$ time, then most NP-complete problems would also have time complexity “close” to $n^k$. So if P = NP, then the asymptotically hardest problems in P (and NP) would be the problems $Q_r$ for large $r$. These certainly would not look like most of the usual NP-complete problems.
So we need to add another condition to the list: if P = NP, and there is a fast algorithm for some NP-complete problem, and the hard problems in P are of no practical interest, then some potentially stunning things would be true.
I think this third hypothesis is quite interesting.
Reductions to NP-complete problems are all via SAT. The universal reduction in the Cook–Levin Theorem turns an instance of a problem $L$ in NP into an instance of SAT. If the instance of $L$ is of size $n$, and the time required for a multi-tape non-deterministic Turing machine to decide $L$ is bounded by a polynomial $T(n)$, then the size of the SAT instance is $O(T(n) \log T(n))$ (see Stephen Cook’s proof of this).
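The blow-up matters because it composes with whatever SAT algorithm we assume. A small sketch of that composition (the exponents are my own illustrative assumptions, and constant factors are taken as 1):

```python
import math

def sat_size(T):
    """Size of the SAT instance produced from an NTM computation of
    length T, using the T log T bound with constant factor 1."""
    return T * math.log2(T)

def total_time(n, ntm_exp, k):
    """Cost of solving a problem decided by an NTM in time n**ntm_exp,
    via reduction to SAT plus a hypothetical m**k SAT algorithm."""
    T = n ** ntm_exp       # NTM time bound T(n)
    m = sat_size(T)        # SAT instance size, roughly T log T
    return m ** k          # assumed SAT solver cost on a size-m input

# Linear NTM time with a quadratic SAT solver keeps the cost tame...
print(f"{total_time(10**6, 1, 2):.1e}")
# ...but a cubic NTM time bound already inflates it sharply.
print(f"{total_time(10**6, 3, 2):.1e}")
```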
This means that the key issue (in a P = NP world) is how efficiently a language can be recognized by an NDTM. Most of the commonly studied problems in NP have certificates that can be verified in linear time. So these problems are close to SAT, and are not going to be the hard languages $Q_r$.
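As an example of such a linear-time check (my own illustration, not from the post), here is a certificate verifier for Hamiltonian cycle: given the graph and a proposed vertex ordering, verification takes time linear in the size of the input plus certificate.

```python
def verify_hamiltonian_cycle(n, edges, cycle):
    """Return True iff `cycle` lists the vertices 0..n-1 in an order
    that forms a cycle in the graph with the given undirected edges."""
    if len(cycle) != n or set(cycle) != set(range(n)):
        return False                      # must visit each vertex exactly once
    edge_set = set(edges) | {(v, u) for (u, v) in edges}
    return all((cycle[i], cycle[(i + 1) % n]) in edge_set
               for i in range(n))

# A 4-cycle 0-1-2-3-0: the natural order is a valid certificate,
# while a shuffled order is not.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_hamiltonian_cycle(4, edges, [0, 1, 2, 3]))   # True
print(verify_hamiltonian_cycle(4, edges, [0, 2, 1, 3]))   # False
```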
The important question is then:
Question: In NP, which are the languages with hard-to-verify certificates?
If P is not equal to NP, then there are problems in NP that are asymptotically harder than any problem in P. So the efficiency of reductions to SAT is not as important in this case. But even independently of P versus NP, it would be worth looking for natural candidate $Q_r$ problems for each $r$, as well as problems that might require time $n^r$ to decide on an NDTM. Proving non-trivial lower bounds is hard, but at least we could begin with some candidate problems!
(Edit 20100223: added link to Stross story, added clarification “for large r”. Edit 20100309: removed spurious “not NP-complete” phrase.)