Category of NP-hard languages

Thanks for the input, Indistinguishable. Perhaps I might look at this over Christmas.

Anyway, another complexity question: ftg commented that NP-hard problems don’t have any cohesive meaning outside of being NP-hard. Is this related to the fact that they do not correspond to any time/space bound on functions computable on a particular class of machines/automata?

That is, P corresponds to all polynomial time functions computable on a deterministic Turing machine (or equivalent), but are there any “useful” (cohesive) complexity classes that do not correspond to similar restrictions? (Perhaps this doesn’t have a factual answer one way or the other!)

Expanding on the mention of bounded and light linear logic, here’s a small bibliography on “Characterizing Complexity Classes via Categorical Constructs, Lambda-Calculi, Logics, and, most especially, Types” which might be useful. I should warn that I’m not really familiar with most of this; checking out the intro to J. Otto’s “Complexity Doctrines”, though, I get the impression that Hari Seldon might well be in the know.

Heh, this is starting to look more familiar, and I should have spotted the connection after your earlier post.

I helped referee a paper submission along these lines (implicit complexity) relatively recently. There’s been quite a bit of work done on this, especially by Girard, Turner (I think), and some Italian researchers (I think one of Girard’s logics corresponds, under Curry-Howard, to a lambda calculus in which every term normalizes in a number of steps bounded by a polynomial, or something like that; it was a while ago now). I see also that Hofmann’s work is somewhat similar, from the bibliography you linked to.

Anyway, this is slightly different from what I was suggesting (your first suggestion, taking all problems as objects with arbitrary reductions as morphisms, is exactly what I was thinking of, but more generally), and it still looks like nobody has published anything on it before.

Ah, excellent, we’ve met in the middle. :slight_smile:

As for ultrafilter’s question, I think I can now give a more satisfying answer. Specifically, I shall show that, from any language not in P, we can compute another language such that neither of the two is Cook-reducible to the other. (This is directly analogous to the result in computability theory that every non-zero Turing degree is incomparable with some other one, and indeed, the proofs are essentially the same). From this it will follow that, if P != NP, we can apply this construction to an NP-complete language to get a computable language which is neither NP-hard (even in the general sense using Cook-reductions rather than Karp-reductions) nor in NP (nor coNP, nor anything else within the vicinity, being not even Cook-reducible to any language in NP). Thus, we will have very firmly established that “Every computable language outside of NP is NP-hard” holds if and only if “P = NP [so all languages are trivially NP-hard]”.

As for the proof: Let L be some language not in P (i.e., some function from inputs to Booleans). We shall compute from it a new language L’ as follows. At any moment, we have a table detailing the behavior of L’ on finitely many inputs; every possible input will end up in this table eventually, so that L’ is well-defined. The table starts out empty and is augmented by the following process: we iterate through all the polytime oracle machines (these can be assumed to come tagged with explicit polynomial bounds, and thus can be assumed to halt with a YES or NO answer on every input). When we get to machine M, we do two things.

First, we find the first input not yet in our table and run M on that input with the oracle L; whatever the answer given, we add the opposite to our table. This ensures that M cannot be a Cook-reduction from L’ to L, and will also ensure that no input avoids ending up in the table.

Next, we simulate running M on all inputs, equipped with an oracle which agrees with our finite lookup table where defined; when a call is made to the oracle on a value not in our table, some trivial default response (e.g., always NO) is added to our table and returned. As L is not in P, we know that there is some input on which this simulation will yield a response diverging from L; once we have found such an input, we can stop the simulation, confident that we’ve now augmented our table sufficiently so as to ensure that M cannot be a Cook-reduction from L to L’. At this point, we move on to the next polytime oracle machine and repeat the first step.

Since every possible candidate for a Cook-reduction between L and L’ in either direction is barred in turn from doing so, and since no value of L’ remains forever undefined, this construction does what we have asked of it. Q.E.D.
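For concreteness, here’s a toy Python sketch of the table-building bookkeeping above. All the names are mine, and the infinite enumeration of polytime oracle machines is replaced by a small hand-written list of stand-in “machines” (functions taking an input and an oracle), so this only illustrates the diagonalization’s mechanics, not the real construction:

```python
from itertools import count, islice

def inputs():
    """Enumerate all binary strings: "", "0", "1", "00", "01", ..."""
    yield ""
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, "0{}b".format(n))

def build_L_prime(L, machines):
    """Toy version of the construction.  `machines` stands in for the
    enumeration of polytime oracle machines: each is a function
    M(x, oracle) -> bool.  Returns the (finite, partial) table for L'."""
    table = {}
    for M in machines:
        # Step 1: spoil M as a Cook-reduction from L' to L by answering
        # the opposite of M on the first input not yet in the table.
        x = next(s for s in inputs() if s not in table)
        table[x] = not M(x, L)

        # Step 2: spoil M as a Cook-reduction from L to L'.  The oracle
        # answers from the table, defaulting to NO (and recording that).
        def oracle(q):
            if q not in table:
                table[q] = False
            return table[q]

        # In the real proof, L not being in P guarantees that M
        # eventually disagrees with L; in this toy we cap the search.
        for x in islice(inputs(), 1000):
            if M(x, oracle) != L(x):
                break
    return table

# Toy run: L is parity-of-1s (not actually outside P, just a stand-in),
# against two stand-in "machines": the identity reduction and always-YES.
table = build_L_prime(lambda s: s.count("1") % 2 == 1,
                      [lambda x, o: o(x), lambda x, o: True])
```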

I feel almost like there’s a game of Mornington Crescent going on that I can’t quite figure out. But then, enough of it rings bells from my distant academic past that it’s either real, or an extremely clever put-on…

That’s the fourth remark along such lines, but I’m a little surprised. This thread didn’t really strike me as particularly more obscure than any of the other math/CS threads which pop up not infrequently… I mean, there must be hundreds of threads on this board about P vs. NP.

I think in my particular case, it’s that I’ve forgotten a whole lot more theory than I remember. I vaguely remember once knowing about P and NP problems, but even the difference between “NP-complete” and “NP-hard” eludes me, never mind the concept of a language which is in one of these categories.

So, essentially just a drive-by joke, based on the fact that I stopped in on a thread which I thought might be interesting to me, only to be humbled by the fact that I’ve forgotten most of what I once knew, academically speaking.

Ah, yeah, atrophying memory’s a bitch; I sometimes look at old conversations I’ve had, and think “Whoa. I guess I knew that once…”. I think it may have even happened with some old threads here, and I’ve been here for less than two years! It scares me sometimes, that I might forget so much, but I guess that’s just what inevitably happens with time and shifting interests.

One thing which is tripping me up in feebly attempting to follow along is the terminology “NP-hard (or NP-complete, or P, or whichever) language” as opposed to an NP-hard problem. I know, vaguely, what it means for a particular problem to be in P or NP or whatever, but I don’t really know what sort of “languages” you’re talking about, and what would cause them to be similarly classified…

A language is just a set of strings; it corresponds to the problem “Given an input string, output either YES, THIS STRING IS IN THE SET or NO, THIS STRING IS NOT IN THE SET, as appropriate”.
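In Python terms, that correspondence is a one-liner (the example language here, even-number-of-1s, is mine, not from the thread):

```python
def even_ones(s):
    """Characteristic function of a toy language: the set of
    binary strings containing an even number of 1s."""
    return s.count("1") % 2 == 0

def decide(language, s):
    """The YES/NO problem corresponding to a language."""
    if language(s):
        return "YES, THIS STRING IS IN THE SET"
    return "NO, THIS STRING IS NOT IN THE SET"
```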

If you just replace “language” throughout with “YES/NO problem”, you’ll probably be fine.

Just dropping by to say that I’d like to post more about some stuff here but:

  1. No time.
  2. Fuzzy brain. Don’t want to make too many mistakes.

So look for the next episode. Same bat channel, same bat time.

Interesting comment. From what did you deduce that? Well, J. Otto certainly did study “complexity doctrines”: essentially toposes that implement various kinds of complexity (polynomial space, polynomial time, others I guess), corresponding to their initial objects. Unfortunately, it is completely unreadable, at least by me. But it really has little or nothing to do with the OP, since the objects of the topos (a very special kind of category) were not languages (and the morphisms were not reductions).

Of course, there have been many attempts to apply category theory to linguistics and there are active groups, especially in Germany and Italy, that pursue this question, but I do not know of anyone that has looked at the question of the OP. Mostly what they do is view a single language as a category (often a partially ordered set) and morphisms as reductions of the sort S –> NP VP, which is a symbolic way to say that sentences may consist of a noun phrase followed by a verb phrase (more or less the same thing as saying a sentence may consist of a subject followed by a predicate). Of course, this analysis does not capture every sentence (why I said “may”) and this kind of analysis might fail totally in another language.
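The S –> NP VP style of analysis can be sketched as a tiny string-rewriting system. This toy Python fragment (the grammar and names are mine) just unfolds productions, with no pretence of capturing the categorical viewpoint:

```python
# Productions as rewrite rules.  (Here NP means "noun phrase",
# not the complexity class!)
RULES = {
    "S":  ["NP", "VP"],   # a sentence *may* be noun phrase + verb phrase
    "NP": ["Det", "N"],   # e.g. "the cat"
    "VP": ["V", "NP"],    # e.g. "chased the dog"
}

def expand(symbol):
    """Rewrite the leftmost nonterminal until only terminals remain."""
    pending, out = [symbol], []
    while pending:
        sym = pending.pop(0)
        if sym in RULES:
            pending = RULES[sym] + pending
        else:
            out.append(sym)
    return out
```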

Right, of course; that was meant to be part of my digressing to mentioning possible applications of category theory to complexity theory more generally. But, like I hopefully indicated, I’m not very familiar with these.

(As for the “interesting comment”, I’ve sent you a private message)