The 50-year-old P-NP problem that eludes theoretical computer science (technologyreview.com)
100 points by 8bitsrule on Oct 28, 2021 | 94 comments



If there already is a correct but not-yet-recognized proof of P vs. NP, it can probably be found on this page: https://www.win.tue.nl/~gwoegi/P-versus-NP.htm

(If you can verify any proof using Isabelle or Coq, PM me.)


> (If you can verify any proof using Isabelle or Coq, PM me.)

Rofl. Yep, if I write a machine-checkable proof of the most famous open problem in all of computer science, that's definitely the first thing I'll do.


If it indeed points to an n^100 algorithm (or P != NP), I’d do it. It’d greatly expedite collecting the Clay prize money to have someone recognizable validate the proof ;)

If it were n^10, well, that’s likely a different story, involving three-letter agencies, I’m afraid.


>I’d do it. It’d greatly expedite collecting the Clay prize money to have someone recognizable validate the proof

Yeah, but in my case (and presumably the previous commenter's case) I don't know who Zaik on HN is or whether they are recognizable in the field. That they asked increases my feeling that they likely are, but surely anyone who solves P != NP isn't going to just follow something as irrational as a feeling.


I mean, it's still a million dollars, and money makes people do crazy things. I'd, I don't know, at least figure out some way to cryptographically timestamp a hash of the proof first.


If you proved P=NP, timestamping a hash might not be too secure :)


Finally, the killer app for the blockchain! Before I release my proof I will of course have to develop my Ethereum app to verify the origins of mathematical proofs.


Not really a new application for Bitcoin. Trusted timestamping has been considered a secondary use case for Bitcoin for a long time. There are even services like https://en.m.wikipedia.org/wiki/OpenTimestamps


Or, like... tweet the SHA-256 of the proof text.
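
E.g., a couple of lines of Python with the standard library's hashlib does it (the filename is of course made up):

    import hashlib

    # Hash the proof text; publishing this digest (e.g. in a tweet) timestamps
    # the content without revealing it. "proof.txt" is a hypothetical filename.
    with open("proof.txt", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    print(digest)  # post this string publicly, keep proof.txt private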


Whenever I end up in this list I always wonder whether all these proofs have been refuted. I assume they must have been, considering the problem is still open. But it would be nice if the author had linked to the refutation of at least a few of them.

Edit: I now noticed that a few of them have:

3.[Equal]:... Zhu Daming, Luan Junfeng and M. A. Shaohan (all affiliated with Shandong University, China) refute these claims in their paper "Hardness and methods to solve CLIQUE" (Journal of Computer Science and Technology 16, 2001, pp 388-391).


> I always wonder whether all these proofs have been refuted

Sadly, no. I have talked to the author of one of those papers and apparently there is not a lot of motivation to check new proof attempts from the relevant people anymore.


It is probably because the proof is either relativizing, natural, or algebrizing. It has been proven that those types of proofs are not going to work. The people who submit these kinds of proofs often do not understand that they can never work. Why would you, as a serious mathematician who understands this, spend your energy refuting a proof whose whole approach is already known to fail? It is the same reason designs for perpetuum mobile devices are not refuted.


Presumably such a formalization is precluded by the lack of easy formalizations of executions of Turing machines; computer scientists already describe Turing machines and their executions in extremely informal ways in most proofs because of their tediousness. It would already be a miracle if we had clear formalizations of proofs of major complexity results regarding Turing machines.


You might be interested in this: https://github.com/uds-psl/coq-library-complexity


TIL!


Sadly, this page hasn't been updated since 2016. I'd love to see some of the more recent attempts at proofs collected somewhere!


I have low hopes for P=NP. I really don’t see how you can solve, for example, the TSP exactly to guaranteed optimality without utilizing all of the useful available information. Approximation approaches show that by discarding information you give up optimality guarantees.

There must be something in information theory to help prove that to construct a solution to an NP-hard problem you need at least that much information.


I hope nobody proves that "P!=NP" because that would just be too boring, too expected.

I think the coolest possible outcomes would be one of the following:

(a) some bizarre algorithm which solves an NP-complete problem in polynomial time but with utterly intractable exponents or constants, for example worse than O(n^(10^100)) – so, yes, P=NP, in a completely useless way, and still nobody would know if a useful P algorithm for an NP-complete problem exists

(b) a non-constructive proof that P=NP

(c) an independence proof, that the question is independent of ZF(C)

(d) a proof that you can't prove "P!=NP" from ZF(C), but which left open the question of whether "P=NP" was provable from ZF(C) or not

(e) proof of something like "P!=NP implies not-CH" or "P!=NP implies not-AC"

I realise the odds are against any of the above. But, here's to hoping for the bizarre unexpected outcome, rather than merely "we just hadn't yet developed the technique to prove what most people thought was true all along" – which would be less profound and less mysterious.

On a somewhat different topic, if my hope is for naught, and there indeed exists a proof that "P!=NP" out there waiting to be discovered, even actually discovered in a few years or decades or centuries time, then I've just expressed a counterpossible hope, a hope that an impossible world were actual – which, I think, is a philosophically rather interesting hope to express.


Regarding (b), a non-constructive proof of P=NP cannot really exist, because any such proof is immediately constructive.

You can create an enumeration over all Turing machines and run each of them on the problem 'diagonally': we perform one step of computation on machine 1, then 1, 2, then 1, 2, 3, then 1, 2, 3, 4, etc. This eventually runs each Turing machine for an arbitrary number of steps. As soon as one of the Turing machines accepts, we also accept.

Suppose the nth Turing machine is the poly-time machine we have proven to exist (although we don't know n). It takes n steps of our meta algorithm to run this machine 1 step, then n+1 metasteps for the next step, and so forth. In total it takes n + (n+1) + ... + (n+k) = O(k^2 + kn) metasteps to run the nth Turing machine for k steps.

But here's the crux... n does not depend on the input size, only on our problem! Thus it is a constant. In other words, this meta algorithm is quadratically worse (due to k^2 steps) than the optimal algorithm, but it is poly time.

Note that the above algorithm doesn't work for co-NP, in which we require that the "no" instances get solved in poly time (in NP we only require the "yes" instances to get solved in poly time).
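
A rough sketch of that dovetailing in Python, where nth_machine(i) is a hypothetical constructor for the i-th machine in the enumeration, exposing a step() method that returns True when that machine accepts (not a real TM simulator, just the scheduling):

    from itertools import count

    def dovetail(nth_machine):
        """In round r, start machine r and advance every machine started so far
        by one step.  If the accepting machine is number n and needs k steps,
        the total number of single steps simulated is O((n + k)^2); since n is
        a constant of the problem (not of the input), that is only polynomially
        worse than running machine n on its own."""
        started = []
        for r in count():
            started.append(nth_machine(r))   # bring one new machine into play
            for m in started:                # one step each for all started machines
                if m.step():                 # hypothetical: True means "accepted"
                    return m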


In case anyone wonders, I think the same argument is made in a somewhat easier-to-understand way in one section of the P vs. NP Wikipedia page [2].

But I think that would turn (b) into a special case of (a) i.e. a bizarre algorithm that is completely useless. And it would also be an algorithm of mostly unknown complexity (with P=NP, we'd know it to be ∈ P, but we won't necessarily know anything else).

[2] https://en.wikipedia.org/wiki/P_versus_NP_problem#Polynomial...


> You can create an enumeration over all Turing machines

Do you mean "all Turing machines that only accept the 'problem'"?

Because if you literally enumerate over all Turing machines, including trivial ones like those that ignore the input and just accept/halt no matter what, then it doesn't help you solve the problem at all...

And then the question becomes whether you can enumerate over all Turing machines that accept only the language of the 'problem'... (or whatever it's called in the standard terminology). I have a feeling this one might not be "computable" regardless of the "input size".

PS: I'm not quite thinking straight now so apologies if I missed something obvious..


> Because if you literally enumerate over all Turing machines, including trivial ones like those that ignore the input and just accept/halt no matter what, then it doesn't help you solve the problem at all...

I think there is a step which was not explicitly stated in the GP comment. When a given Turing machine accepts, we check whether its output is a valid proof of the answer to our problem. By the definition of NP, we can check that proof in polynomial time. If the check passes, we accept. If the check fails, we ignore that accept and move on.

So we are executing all possible Turing machines in parallel, not just the ones that accept the problem. And when any of them accept, we have to check whether the accept should be ignored or not, because most of the Turing machines are doing something completely unrelated to our problem.
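
For a concrete NP problem like SAT, that extra check is just the usual polynomial-time certificate verifier, something like this (the clause encoding below is an assumption, not anything from the comments above):

    def verify_sat(formula, assignment):
        """Polynomial-time check that `assignment` (dict: variable -> bool)
        satisfies `formula`, given as a list of clauses of signed ints,
        e.g. [[1, -2], [2, 3]] means (x1 or not x2) and (x2 or x3)."""
        return all(
            any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause)
            for clause in formula
        )

In the dovetailing loop, an accept only counts if a check like this passes on whatever the machine output; a failed check just means that machine was doing something unrelated.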


Yeah, after I commented I read the Wikipedia link in a sibling comment -- which basically says what you said. Having programs/Turing machines generate 'proofs' and checking them in P does seem legit.


> You can create an enumeration over all Turing machines

Wait, can you?


Yes. Every possible Turing machine can be defined using a finite number of symbols drawn from a finite alphabet. Each string in a finite alphabet corresponds to an integer, so for every possible Turing machine there is a corresponding integer. There are many conventions for assigning an integer to each Turing machine, but the choice of convention does not matter (so long as it is computable) - the point is, since we can enumerate the integers, and any computable subset thereof, we can enumerate all Turing machines.

Note this only applies to standard Turing machines (whether single-tape or multi-tape, deterministic or non-deterministic). It does not apply to special types of Turing machines such as oracle Turing machines.
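
Concretely, the enumeration can be as dumb as listing all strings over the encoding alphabet in shortlex order; string number n either parses as a Turing machine description or gets mapped to some fixed trivial machine. A sketch (the alphabet is a made-up stand-in for whatever encoding you pick):

    from itertools import count, product

    ALPHABET = "01,;"   # hypothetical symbols for some TM encoding

    def all_strings():
        """Yield every finite string over ALPHABET in shortlex order:
        all strings of length 1, then length 2, and so on.  This puts the
        strings (and hence the machine descriptions) in bijection with the
        positive integers."""
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)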


This is really a quibble turning on how one defines “constructive”. Let me propose two scenarios, which I will call NC and C. I believe that both scenarios are epistemically possible (given current generally available knowledge), but we do not know if either is logically possible.

In scenario NC, a mathematician devises a highly novel, unexpected and correct proof that P=NP, by finding a way to translate that statement into a statement in some seemingly unrelated area of mathematics, and then using some very deep and profound and innovative techniques to prove that statement in that seemingly unrelated mathematical area. Now, suppose those techniques are fundamentally non-constructive, in that they rely on methods of a kind of which no consistent mathematical constructivist could approve. This kind of proof would not directly turn on finding some novel algorithm for some NP-complete problem and directly proving that the algorithm executes in polynomial time. The only algorithms invoked may be some relatively straightforward algorithms needed to translate P=NP into that very different mathematical area, and then the real genius of the proof may be done in that other area and not explicitly rely on any algorithms at all. This would be a fundamentally non-constructive proof.

By contrast, in scenario C, we have a very different proof that P=NP, which might have as its centrepiece some very novel and obscure and maybe even bizarre algorithm, along with a proof that the algorithm correctly and exactly solves some NP-complete problem, and also a proof that the algorithm executes in polynomial time, and those two proofs may depend on quite subtle details of that specific algorithm. Let us suppose that the profound mathematical genius who devised this algorithm and the associated proofs has great sympathy for mathematical constructivism, and hence does not use in the proof the kinds of techniques of which a mathematical constructivist would disapprove (or, at least, not directly). Maybe even, those proofs are relatively straightforward once you have the algorithm, and coming up with the algorithm was the real genius of the overall proof

Now, you’ve pointed to an example of an already well-known algorithm which solves an NP-complete problem in polynomial time, but only if P=NP. And you are claiming that turns any non-constructive proof into a constructive one. But in scenario NC, our proof was non-constructive in two ways (a) unlike the proof in scenario C, this proof does not have as its centrepiece the construction of a specific example of a novel polynomial time algorithm for solving an NP-complete problem; (b) and unlike the proof in C, it relies on mathematical techniques of which a mathematical constructivist would disapprove. When we combine it with your proposal, it does not alter either of those two ways in which the proof in scenario NC is non-constructive, and so does not actually convert a non-constructive proof into a constructive one as you claim it does.

The term “constructive” has multiple meanings, and even if your argument succeeds in converting a “non-constructive” proof into a “constructive” proof in one sense of “constructive”, it fails to do so in other important senses.


I'm with you here. The bizarre, unexpected, possibly practically unusable result would be far more interesting than the very much expected P!=NP.

Maybe we're spoiled kids of modern TV shows and it shows in our expectations of a good cliffhanger or plot twist in modern science :D


Exhibiting an algorithm for any real problem with O(n^k) complexity for k >> 1 would be interesting. In the real world, for problems people actually care about, k is invariably small, like << 100, and no one knows why. This is one of the many, many vexing aspects of this problem.


Intuitively I feel like it makes sense though. The k in O(n^k) roughly corresponds to how many times you loop through the data. It's really hard to imagine a useful algorithm where you need to loop through the data exactly 153 times but then stop there. Either you're going to need to grow the number of loops with input size (be exponential), or surely you got all the data you needed the first 14 times you went through it.

I guess that's not really very rigorous, it just feels intuitively right. I just can't imagine a system where you always gain new information regardless of input size on loop iteration 561, but never on iteration 562.

It's somewhat similar to how in pure math problems it seems kind of suspicious if the answer is something like exactly 8.3781 and the problem statement didn't have any related constants in it. Integers are expected, irrational numbers are expected, decimals with exactly 4 digits sound suspicious.


What about page 4 of [0]:

> We remark that, while polynomial, the running time of the algorithm is somewhat abysmal; loose estimates places it somewhere around O(n^(10^100)); the running time of the algorithm of [RT12] is similar.

I don’t know if that counts as a “real problem”, and I suppose it is only a “loose estimate” too.

[0] https://arxiv.org/pdf/1205.0458v2.pdf


No, I think that definitely counts as a counterexample. Wow. TIL.


What would be cool, too, would be a proof that yeah, P != NP, sure, but there is an algorithm that can solve any NP-complete problem with a complexity of O((1+Ɛ)^n), with Ɛ very small, so still exponential but solvable in practice in most cases, and we don't know what that algorithm is, we just know it exists.


I mean, the simplex algorithm practically scales linearly for most of our cases, to the point that today we consider linear programming a solved problem. We can solve problems with billions of variables and constraints whose feasible polytope has an astronomical number of vertices. And we can do all this with out-of-the-box solvers.

Of course the devil is still there, you can find very small problems that completely break the simplex algorithm.


Would be funny if it came out that NP problems are easy on all instances except for a vanishingly small subset of cases (which of course is somehow not negligible in practice), and that distinguishing them is uncomputable :)


> (b) a non-constructive proof that P=NP

Ouch, that would be so, so frustrating :D


See my other comment, it would also immediately be constructive: https://news.ycombinator.com/item?id=29022963


I would prefer option (d).


How is n^10^100 polynomial?


O(n^k) is polynomial for any constant k, and that remains true when k = 10^100.


n^2 is polynomial; 2^n is exponential (the variable is in the exponent).


Duh. I feel stupid. Thanks for the kind and factual explanation.


I don’t know the history of it, but I can imagine people thinking the same thing about comparison sorts early on; how could you possibly sort a list without comparing every element to every other element?

I mean, I agree with the conclusion that P probably doesn’t equal NP, if nothing else because we’ve had an awful lot of very smart people looking at it without any inkling to the contrary. But I don’t generally trust “I don’t see how” intuitions with this sort of thing.


"I don't see how" can't be trusted coming from someone who hasn't even tried. if you're sorting a pile of stones by weight you quickly learn to leverage the transitive property


> I can imagine people thinking the same thing about comparison sorts early on; how could you possibly sort a list without comparing every element to every other element

Maybe people who never tried (or thought about) reshelving library books, sorting/delivering mail, etc.


Indeed, the very word "sorting" in its general sense means arranging items into categories (i.e. equivalence classes), which doesn't usually involve pairwise comparisons at all.
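
Counting sort is the cleanest example: it never compares two items with each other, it just drops each one into its bucket and reads the buckets back in order. A sketch for small non-negative integer keys:

    def counting_sort(items, num_keys):
        """Sort items that are integers in range(num_keys) without a single
        pairwise comparison: tally each bucket, then concatenate the buckets in
        key order.  O(n + num_keys) time, versus the Omega(n log n) lower bound
        that applies only to comparison sorts."""
        buckets = [[] for _ in range(num_keys)]
        for item in items:
            buckets[item].append(item)
        return [x for bucket in buckets for x in bucket]

For example, counting_sort([3, 1, 3, 0, 2], num_keys=4) gives [0, 1, 2, 3, 3].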


> I have low hopes for P=NP. I really don’t see how you can solve for example the TSP exactly to guaranteed optimality without utilizing all of the useful available information.

Not sure what you're getting at here. The available useful information is a set of cities each with distances to the other cities. About N^2 pieces of information. A computer can "utilize" all these data in polynomial time.


The information needed to solve the TSP is not the distance matrix. It's the set of Hamiltonian paths in your graph. And that scales factorially with the number of cities.

It is the subtour elimination (removing cycles) that makes the TSP difficult, not the connectivity.


All the information about the Hamiltonian paths is encoded in your N vertices and N^2 edges.

Your argument suggests the best algorithm for TSP runs in Θ(n!).

However, https://en.m.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm can solve TSP in Θ(2^n n^2).
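
For reference, a bare-bones sketch of Held-Karp in Python (dist is a full distance matrix; this is the textbook bitmask DP, not tuned for speed):

    def held_karp(dist):
        """Exact TSP tour cost by dynamic programming over subsets.
        dist[i][j] = distance from city i to city j; the tour starts and ends
        at city 0.  Theta(2^n * n^2) time instead of the naive Theta(n!)."""
        n = len(dist)
        # dp[(mask, last)] = cheapest way to start at city 0, visit exactly the
        # cities in `mask`, and end at `last` (city 0 is always in mask).
        dp = {(1, 0): 0}
        for mask in range(1, 1 << n):
            if not mask & 1:
                continue                      # every subset must contain city 0
            for last in range(n):
                if (mask, last) not in dp:
                    continue
                for nxt in range(n):
                    if mask & (1 << nxt):
                        continue              # already visited
                    key = (mask | (1 << nxt), nxt)
                    cost = dp[(mask, last)] + dist[last][nxt]
                    if cost < dp.get(key, float("inf")):
                        dp[key] = cost
        full = (1 << n) - 1
        return min(dp[(full, last)] + dist[last][0] for last in range(1, n))

E.g. held_karp([[0, 2, 9], [2, 0, 6], [9, 6, 0]]) returns 17, and the table it fills has on the order of 2^n * n entries rather than n! tours.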


Which means that many of the Hamiltonian paths are strictly dominated by others, so you don't have to evaluate them at all. Same as in sorting, where you don't have to evaluate all of the possible pairs.

I did not imply that all information is useful. I proposed that there is probably a lower bound on the information that you definitely need to find the optimal solution to the TSP.


Correct me if I am wrong, but I don’t think the decision problem “does an optimal path in TSP have length K” is in NP (as it likely cannot be checked in polynomial time).

Otherwise, we could solve optimal path TSP in polynomial time by binary searching over K.

https://blog.computationalcomplexity.org/2014/01/is-travelin...


That isn’t quite how the decision problem is usually phrased. It’s usually “Is there a path of at most length L?”, which is pretty easily in NP.

Minor difference in wording but a big difference in complexity. Asking for the optimal solution of most of the classic NP problems will make them not NP.


I was aware of that distinction (of the decision problem), but I didn’t make the connection that solving the canonical decision problem in polynomial time meant that you could also solve the optimization problem in polynomial time with binary search, which is true.

But the linked post also claims checking whether a path is optimal is not in NP. Even then, P=NP would still imply TSP could be solved in polynomial time. So I was kinda wrong, for subtle reasons…
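
That reduction is tiny once written down. Assuming integer edge weights and a hypothetical polynomial-time decision oracle exists_tour_of_length_at_most(graph, L), a sketch:

    def shortest_tour_length(graph, exists_tour_of_length_at_most, upper_bound):
        """Turn a yes/no decision oracle into the optimal tour length by binary
        search.  With integer weights, O(log(upper_bound)) oracle calls suffice,
        so a polynomial-time oracle yields a polynomial-time optimizer
        (upper_bound can be, say, the sum of all edge weights)."""
        lo, hi = 0, upper_bound
        while lo < hi:
            mid = (lo + hi) // 2
            if exists_tour_of_length_at_most(graph, mid):
                hi = mid                      # some tour of length <= mid exists
            else:
                lo = mid + 1                  # every tour is longer than mid
        return lo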


Asking whether a given path is optimal is in co-NP. You can prove that the path is non-optimal by exhibiting a shorter path.


Given a solution to the decision version of TSP, you can verify whether that solution is valid in polynomial time - just check that it visits all the nodes and that its length is within the claimed bound. A normal Turing machine could perform this check in poly time, and a nondeterministic one could check all solutions (exponentially many) in poly time, hence NP.
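
That check really is only a few lines; a sketch where dist is a distance matrix and tour a claimed visiting order:

    def verify_tour(dist, tour, max_length):
        """Polynomial-time check of a claimed TSP certificate: the tour must be
        a permutation of all cities and its total length, including the edge
        back to the start, must not exceed max_length."""
        n = len(dist)
        if sorted(tour) != list(range(n)):
            return False                      # doesn't visit every city exactly once
        total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        return total <= max_length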


Information and computation ("utilization of information") are not equivalent things. The solution is there, staring us right in the face; it's just that our primitive methods of sequential operations on a silicon abacus are incapable of rendering a solution immediately. Infinitely parallelize operations, such as with a "quantum computer", and suddenly what is infeasible in one era becomes quite feasible in another. I don't think it's right to equate information theory and computing, because one is per-se data and its solubles/solutions while the other is step-wise sequential logic to infer some step-wise sequential fact.


P?=NP isn't the question of, "If given an infinite number of monkeys can one compute an exponential problem really quickly?"

It's the question of, "Does a mapping exist from that fast infinite monkey machine to a finite monkey machine that runs in polynomial time?"

I think parent's question is "Can the problem be encoded such that one can prove that no translation can prune the number of monkeys required at an exponential rate?" I have wondered that myself, but never found any particularly useful answer.


Yes, but is it right to approach this question from sequence just because computation implies sequence? Mathematics has an equals sign and suggests not the computational sequence, just the logical sequence of our axioms mapped to self-evident fact. Mathematics to Information Theory to Computation naturally goes from clear postulates and language-able axioms to water-wheelable traits. I suggest there is some room for exploration in between.


A quantum computer is not "infinitely parallel" and, though it's still an open problem, it's considered unlikely that quantum computers can solve NP-complete problems in polynomial time.


>A quantum computer is not "infinitely parallel"

Explain


My take on it: you get "infinitely parallel" computation, generating "infinitely parallel" results that are in superposition. Problem is, at the end you can only access your "infinitely parallel" result via a classical measurement, involving a wave function collapse [1].

For some problems, people have found ways to extract useful information via classical measurements, e.g. Shor's algorithm (in theory breaking RSA/DSA/ECDSA/DH/ECDH-style public-key algorithms [2]). However, in the general case this does not work (so AES and hash algorithms are safe for now).

[1] https://en.wikipedia.org/wiki/Wave_function_collapse

[2] https://en.wikipedia.org/wiki/Shor%27s_algorithm#Quantum_par...


I mean, that's pretty much it. You can't use a quantum computer like a classical computer with a huge (infinite) number of compute cores. It's not a GPU. That's not how it works.

If you use the metaphor that superposition is like computing many things in parallel, the problem comes in when you measure. The superposition collapses to a single answer at random (with probability related to the amplitude of each possibility), which will usually not be the answer you're interested in.


I think it's time someone had the talk with you: https://www.smbc-comics.com/comic/the-talk-3

Quantum computing doesn't do what you think it does.


Also, even if it did, that wouldn't change the status of P=NP, which is a statement about problems computed by classical Turing machines.


>a statement about problems computed by classical Turing machines

Are we out of tape already?


"A new ontological category"... more like multiple golf balls competing for the same hole. "Choreographing interference patterns" would not actually further computation, but simply suggest a new set of tools that could be, by accident or happenstance, calibrated to produce results where other devices could not. Is that your point?


He's just stating that there could be an equivalence class that could get us closer to P=NP and it doesn't have to be computable. The complexity class he mentions is `P^(NP[k])`, 'P with k NP queries (for constant k)'.


I have no idea how you could have possibly got that from anything Sova wrote. Nothing they wrote seems to remotely resemble that.


I'm just trying to be helpful. :(


I apologize. I could have (and should have) written that comment in a way that was less confrontational.


Rock n roll dude. Suggest that Tiger Woods can get a hole in one every time and people attack you as if you had questioned their fundamental religious beliefs. Ramanujan created an approximating function for pi in 1904 and nobody knows how it works. Packed with factorials to the gills. It's not like the digits of pi are a secret to be computed every time and to try and snag a spare computation cycle here and there -- they are what they are. And finding a pattern comes down not to the self-evident information, but to the method.


I'm pretty sure you're being downvoted because you're being incoherent, not because of anything you've suggested.


And even if it did, P=NP is based on a model of computation. So even if there is a proof of P=NP with quantum computers, that could still leave the question open on classical computers.


> So even if there is a proof of P=NP with quantum computers

That doesn't really even make sense as a sentence. A quantum computer is neither a Turing machine nor a non-deterministic Turing machine.

I guess what you're trying to say is that even if someone proved/disproved BQP=NP it would leave P=NP open.


Now that's a notion I do not often hear. Maybe a question for a 3D-abacus made of stones and flowing water.


Not so coincidentally, Scott is the first person quoted in OP's article, as well as the co-author of that comic.


“The Blog of Scott Aaronson

If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel.”


Only a madman would try all possible "solutions."


> P is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think a cubic time algorithm is very slow, but that's just peanuts to P.

I hate that P is often understood as "easily solvable". There are problems inside P that require O(n^A)-time algorithms, where A is some scarily large Ackermann number. And that just barely scratches the surface of P. There are other, unimaginably harder, problems in P.

It is very arrogant for us to say that P<NP, when we still know so little about P.

I'm rooting for P=NP, even if this only serves to explore harder problems inside P, with terrifyingly slow algorithms.


And don’t get me started on PSPACE.

There is so much we don’t know about how space and time complexity relate.

The weird thing is that GPT-3 Codex can perhaps be used to solve some (space-constrained) subsets of problems that are combinatorially weird. Maybe by pouring all this training data into it, it is approximating some unimaginable algorithm in P that we have no knowledge about.

It would be wild if there were some (time-space) correspondence; NPSPACE = PSPACE implying NP = P would be interesting.


Easily solvable in the context of all other problems being harder ;)


I forget my CS in old age. Am I correct in saying, roughly speaking, that everybody knows that P!=NP but we just can’t prove it?

Sort of like Fermat’s Last Theorem: the proof is out there.


A mathematician can write an equality without anybody raising an eyebrow, but as soon as someone suggests that we can choose the optimal answer on the first "guess" everyone wants to rekindle the Spanish Inquisition. If we can go from "guess" to "obvious solution" then it stands to reason that P=NP. Of course, this is a debate that likely has no clear path without some stroke of insight, because the axiomatic "truth" that confirms it one way or the other is not reachable by induction alone.

There is likely an island of consistent axiomatic logical statements that we must not only wait for someone to reach by happenstance, but to decipher and know what to do with them. According to Fermat, his proof was omitted due to lack of space in the margin. I suppose it's good to let others share in the glory now and again, if on centuries-delay.


So what you're saying is:

If there exists an algorithm that always guesses correctly then we can solve NP problems?

That's just the definition of NP restated. One of the definitions of NP is all the problems that can be solved in polynomial time if at every choice we had a method of always guessing correctly.

The question of NP=P is whether such a method can actually be implemented in polynomial time on a deterministic Turing machine.

So yes, you are correct, it stands to reason that if NP=P, then NP=P.


I don't know whether the GP intends this, but one fun way of understanding the GP's comment is:

If you believe really so strongly that P!=NP without actual proof, you're actually just making a guess, and then saying you can make good guesses. But the process of doing so implies you actually believe P=NP.


What?


There are a few notable exceptions who believe that P=NP, Knuth being the most notable in my opinion. He gives a justification (though it is a bit technical) for this that I also find convincing.


I asked Scott Aaronson what he thought about Knuth's ideas, and he said something along the lines of when you're as famous as Knuth, you can get away with saying nonsense like that.


So according to this poll http://www.cs.umd.edu/~gasarch/papers/poll.pdf only about two-thirds think P != NP (unless you also count the 'independent' group as also believing we can't prove it).

So it seems like the majority opinion, but not universal.


His later polls seem to have trended towards higher percentages: https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper3.p...


What everyone agrees on, though, is that NP-complete problems are slow to solve with today's known algorithms.



The OP has:

> The million-dollar question posed by P vs. NP is this: Are these two classes of problems one and the same? Which is to say, could the problems that seem so difficult in fact be solved with an algorithm in a reasonable amount of time, if only the right, devilishly fast algorithm could be found? If so, many hard problems are suddenly solvable. And their algorithmic solutions could bring about societal changes of utopian proportions—in medicine and engineering and economics, biology and ecology, neuroscience and social science, industry, the arts, even politics and beyond.

Okay, I'd like to have a list of such problems! Yes, some of them are 'intractable', but not all.

But we are not attacking the problems that are solvable easily enough now and that would

"bring about societal changes of utopian proportions—in medicine and engineering".

We are not solving such problems because too few of them exist, that is, mostly the problems don't exist.

Here's what we can do now and have been able to do easily enough for decades:

The challenge of P versus NP is for (1) exactly optimal (for optimization problems) solutions to (2) worst case problems in (3) polynomial time.

But if we will settle for 99+% of the savings of optimal solutions for problems that are not worst case but still constitute nearly all the practical problems in reality, then usually we can do well.

E.g., 0-1 integer linear programming problems are NP-complete. The last such problem I attacked had 40,000 constraints and 600,000 variables. I found a feasible solution within 0.025% of optimality in 900 seconds on a single-processor computer using ordinary linear programming and some non-linear duality theory.
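
For anyone curious about the flavor (this toy is not my non-linear duality approach, just the standard first step): relax x in {0,1} to 0 <= x <= 1 and solve the LP. The relaxed optimum bounds the true 0-1 optimum, which is how a feasible solution gets certified as "within 0.025% of optimality" without any need for P = NP. A toy sketch with scipy, data made up:

    import numpy as np
    from scipy.optimize import linprog

    # Toy knapsack-style 0-1 problem: maximize value subject to a weight limit.
    values = np.array([6.0, 10.0, 12.0])
    weights = np.array([[1.0, 2.0, 3.0]])
    capacity = np.array([5.0])

    # linprog minimizes, so negate the objective; relax x in {0,1} to [0, 1].
    relaxed = linprog(-values, A_ub=weights, b_ub=capacity,
                      bounds=[(0, 1)] * len(values), method="highs")
    upper_bound = -relaxed.fun     # no 0-1 solution can beat this relaxed value
    print(relaxed.x, upper_bound)  # compare any rounded/heuristic 0-1 solution to it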

Blunt, practical fact: In practice, there just are not many practical NP-complete optimization problems that people want solved. For practical NP-complete optimization problems, we can usually do well, save nearly all the money to be saved with an optimal solution, for reasonable effort, but, in practice, we don't do that very often because there are not many people with such problems who want such solutions -- for the savings, they just don't care enough even to pick up a phone.

The claim that an algorithm that shows that P = NP would

"bring about societal changes of utopian proportions—in medicine and engineering ..."

is just not true -- the economy does not have many such problems. If the problems were there, then people would be successfully attacking those problems now; the "utopian proportions" would create some recruiting. But the recruiting is not there. Over decades sending many hundreds of resume copies showing such problem solving abilities yields no responses. A person good at such optimization faces deep and profound unemployment -- they just CANNOT be hired, at any price, are just NOT wanted. Quite literally, they would have a better career, e.g., make more money, with a simple, ordinary grass mowing service -- no joke or exaggeration.

For the problem with 40,000 constraints and 600,000 variables, I got that via email from two guys, a CEO and his buddy. The buddy had tried simulated annealing, run for days, and quit without knowing how close the results were to optimality. Getting the problem via email, in two weeks I did the non-linear duality derivations and wrote and ran the software, for free. Via email I sent the news of success back to the CEO. He was totally not interested: My success would have embarrassed his buddy, and that meant that my work was not wanted, even for $1. Blunt fact: Not many people have any interest at all, and of those nearly all have very little interest.

Point: The claim

"bring about societal changes of utopian proportions—in medicine and engineering ..."

is nonsense because the needed people with the problems are far too few.


Indeed, optimization problems aren't all that exciting because there are usually efficient heuristics that get you 99% of the way there.

A useful NP problem is generating code to meet some requirements. Instead of programming, you can write just some tests and get some code that passes them. Deep learning is a computationally feasible way of automatically finding a function that approximately matches some data points, but it doesn't always generalize well. Searching for the shortest program that matches all the data points should generalize better.
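
A toy version of that "shortest program" search, enumerating tiny expressions in order of length and keeping the first one that reproduces every test case (real program synthesis is of course far more structured; the token set here is made up):

    from itertools import count, product

    TOKENS = ["x", "1", "2", "+", "*", "(", ")"]   # a tiny made-up expression language

    def shortest_fitting_program(examples):
        """Return the shortest token string that, read as a Python expression in x,
        reproduces every (input, output) pair.  Brute force: shorter candidates
        are tried first, so the first hit is a minimal program that fits the data."""
        for length in count(1):
            for toks in product(TOKENS, repeat=length):
                expr = "".join(toks)
                try:
                    if all(eval(expr, {"x": x}) == y for x, y in examples):
                        return expr
                except Exception:
                    continue                   # not a valid expression, skip it

For example, shortest_fitting_program([(1, 3), (2, 5)]) returns a five-token expression equivalent to 2*x + 1 (with this token set, "x+x+1").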



