Why is everything hackable?

[Quote=ftg]

Plus, the Halting issue is just a mere minimum of correctness. Verifying that the program is otherwise correct is a task of such immense size that monumental doesn’t begin to describe it.

[/QUOTE]

Large pieces of realistic software are now routinely verified against a functional specification. The seL4 microkernel and the CompCert C compiler are recent examples. The halting problem is simply irrelevant in these cases, as they are written and verified in normalising, non-Turing-complete languages like the Calculus of Constructions.

Sure, I make mistakes. What’s the Halting Problem got to do with it?

It is indisputably the case that any reasonable proof system does in fact contain proofs, for particular programs P, that P halts on all inputs (or other such nontrivial things). The assertion that no proof system can do this for any program is the only thing I entered this thread to refute. I have nothing to say about real-world security considerations.

I’m not sure why you keep bringing up the Halting Problem. The HP is a theoretical limit of algorithms in general, not specific ones. It is absolutely possible to prove the correctness of a single algorithm. Here’s a quote from wiki:

[QUOTE]
The HP is irrelevant for practical programming.
[/QUOTE]

Correctness proofs?

Good grief. When I was a referee I routinely (as in 100% of the time) ran into algorithms with formal proofs of correctness that were flat out wrong.

The inherent nature of bugs in software is unavoidable. How do you know your program checking software is correct? It’s software! It’s going to have bugs!

I keep bringing up the Halting Problem because it’s a basic condition for simple programs. If you can’t provably, 100%, rule out infinite loops in any non-trivial program, all other questions of correctness are beyond hope of proving. No amount of “I program real programs” handwaving is going to magically bypass it. The complexity of analyzing real, actual, true-life, yes-I-mean-yours programs grows at an astonishing rate, and then some.

So, in The Real World, people spend a limited time checking for bugs. At a certain point, the boss wants the product to roll, doesn’t want to spend more money, etc., and the software goes out.

Bug reports roll in. Only a fraction of them are considered worth tracking down and fixing. The rest are on the “whenever” list. And since there are far more undiscovered bugs than discovered ones, Bad People can find new ones and exploit them.

Certain strategies can make things somewhat better or worse. E.g., whatever Adobe is doing is apparently the worst possible way to prevent exploitable bugs. Which is a big relief for the Java people at Oracle who’d be #1 on the terrible programmers list.

There is no silver bullet.

Then you saw an incorrect proof. I’ve also seen incorrect proofs of the infinitude of the primes. Therefore all number theory is worthless.

ftg, it seems like the last twenty or so years of computer science research has passed you by. Operating systems and compilers are now routinely verified in proof assistants like Coq and Isabelle and the verified software is in use in safety-critical applications in industry. The CompCert C compiler, for instance, is now used for compiling avionics software at Airbus after receiving its DO-178B certification, clearly the “real world”. seL4.verified, a microkernel verified in Isabelle, is also in use in industry.

Proofs of correctness are carried out interactively. A human user guides the proof of a specific piece of software. Decision procedures and automated heuristics help in discharging “easy” proof obligations. Others are left to the user with the software checking that the user’s proof is correct.

The halting problem doesn’t apply because nobody is sitting down trying to write a decision procedure that will uniformly decide whether an arbitrary program in a Turing-complete language will halt. It’s simply not needed, and the fact that you keep yammering on about it indicates you don’t know what you are talking about. For a start, as I’ve already pointed out, the languages that the likes of CompCert are written in, the Calculus of Constructions and Martin-Löf Type Theory, are not Turing-complete. All admissible programs written in them are strongly normalising and therefore simply do not have any comparable notion of the “halting problem” at all, a fact that can be established by meta-theoretic means (search for Jean-Yves Girard and reducibility candidates for an overview of the proof).
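To give a flavour of what “strongly normalising” buys you, here is a small sketch in Lean (a close cousin of Coq; the definition and names are mine, not from any of the projects mentioned above). The language only accepts recursion it can see terminating, so merely being accepted by the checker is already a halting guarantee:

```lean
-- Structural recursion on Nat: the recursive call is on a strictly smaller
-- argument, so the termination checker accepts the definition, and sumTo
-- therefore provably halts on every input. An unguarded `while true` loop
-- simply cannot be written as a function Nat → Nat in the first place.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

#eval sumTo 10  -- 55
```

The price of the guarantee is expressiveness: general recursion has to be justified by a termination argument rather than written freely.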

Put it this way, ftg: Why do you believe that Turing’s theorem is true? Turing proved it, of course, but isn’t it possible that he made a mistake in his proof? And if Turing can present a proof that is reliable, then why can’t some guy trying to prove his code correct provide a reliable proof?

Oh I’ve seen some code that appeared to rely on the HP.

I believe it because Alan Turing was much, much smarter than me.

And it’s been a seminal work in the field for 2/3rds of a century now, and nobody has reported any errors in it.

The NSA briefings give many reasons.

The Halting Problem isn’t horribly relevant. The Halting Problem doesn’t have much to do with proving correctness; it simply states that such a thing is undecidable. Meaning, a Turing Machine cannot determine whether an arbitrary Turing Machine program will halt.
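The undecidability argument itself is short. A hedged Python sketch (the function names are mine): given any claimed universal decider `halts`, we can construct a program that does the opposite of whatever the decider predicts about it, so no decider can be right about every program.

```python
def diagonal(halts):
    """Given a claimed halting decider, build a program it must get wrong."""
    def g():
        if halts(g):
            while True:       # decider said "g halts", so loop forever
                pass
        return "halted"       # decider said "g loops", so halt at once
    return g

# A decider that claims g never halts is refuted by simply running g:
g = diagonal(lambda prog: False)
print(g())  # the allegedly non-halting program halts immediately
# A decider claiming g halts would be refuted too, but demonstrating that
# means waiting forever, so that branch stays a comment.
```

This is the whole trick: the contradiction needs one self-referencing program, not any claim about programs in general.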

Due to the Curry-Howard isomorphism, this does mean that certain formal proof systems (constructive logic, specifically) can’t decide whether an arbitrary program will halt, but it doesn’t mean it’s entirely unprovable in every proof system, merely in ones proven to be isomorphic to a Turing decision problem.

That said, even under constructivist logic, you can certainly prove properties about many well-defined subclasses of algorithms. In fact, the language of Coq is not Turing-complete: everything written in it is guaranteed to halt eventually.

Any program that is verified correct but erroneous is either a defect in the proof or a defect in the definition of your defined subclass of algorithms, not a “ha! gotcha!” example illustrating the halting problem. The Halting Problem has never been a forall statement; it’s a statement that there exists at least one algorithm whose halting status cannot be verified by a Turing Machine.

The tractability of writing a tool that either verifies that a program either halts or prints FUCK IF I KNOW is another matter entirely, and is one reason why static verification is, y’know, an open research area that many companies like Microsoft pour mad bucks into.

Oh, that’s easy. Just return “FUCK IF I KNOW” for every input. That way, the program will always return a correct output.
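A checker with that third verdict is genuinely useful in the bounded case, though. A minimal Python sketch (the names and the step-function representation of a program are my own): run the program for a fixed budget, report HALTS if it stops, and punt otherwise.

```python
def verdict(step, state, budget=10_000):
    """step(state) returns the next state, or None when the program halts.
    Gives a definite answer only if the program halts within the budget."""
    for _ in range(budget):
        state = step(state)
        if state is None:
            return "HALTS"
    return "FUCK IF I KNOW"   # budget exhausted: no claim either way

print(verdict(lambda n: None if n == 0 else n - 1, 5))  # countdown: HALTS
print(verdict(lambda n: n + 1, 0))                      # never halts: punt
```

Note the asymmetry: a halting program is eventually caught, but the checker can never upgrade “I don’t know” to “loops forever” without extra structure, which is exactly where the research effort goes.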

I should have perhaps said “useful program.” :slight_smile:

Another thing is that while many common languages are (arguably) Turing complete, no physical machine is. A Turing machine has an infinite amount of memory and a physical machine doesn’t.

This ruins the proof of the Halting problem, because it means the g(i) function is not always computable in practice; all the programs that almost fill the memory don’t have room for the additional g(i) wrapper function.

It’s also trivial to check whether a program halts or not on a finite computer; you just run the program and wait 2[sup]N[/sup]+1 steps, where N is the number of bits in the internal state. If the program has not halted at that time, then it must have repeated its internal state (by the pigeonhole principle), and therefore will loop forever.
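That pigeonhole argument translates directly into code. A sketch in Python (the representation is mine): rather than literally counting off 2[sup]N[/sup]+1 steps, remember the states seen so far; a repeated state in a deterministic machine means it is in a cycle and will loop forever.

```python
def decide_finite(step, state):
    """Exact halting decision for a deterministic machine whose state space
    is finite. step(state) -> next state, or None when the machine halts."""
    seen = set()
    while state not in seen:
        seen.add(state)
        state = step(state)
        if state is None:
            return "HALTS"
    # A repeated state: every subsequent step repeats too, by determinism.
    return "LOOPS FOREVER"

print(decide_finite(lambda n: None if n == 0 else n - 1, 5))  # HALTS
print(decide_finite(lambda n: (n + 1) % 8, 0))                # LOOPS FOREVER
```

The catch, as noted, is resources: the `seen` set can need as much memory as the machine being analyzed has states, which for a real computer is astronomically more than the computer itself.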

Not so trivial, though, if you want to check via a program on that same finite computer. :slight_smile:

I know you’re effectively kidding, but given multiprocessors and internet connectivity, the N you’re counting extends far beyond the RAM plus registers of the instant machine.

I agree with this.

I will quibble with this. I don’t think Curry-Howard or constructive logic are relevant, as such; classical logic is just as subject to the constraint being noted. Any proof system for which one can computably enumerate the statements considered proven (as is true of both classical and constructive logic) will be unable to prove “X doesn’t halt” for some programs X which do in fact not halt.

I don’t quite grasp what you’re saying here, but it sounds wrong to me. The Halting Problem is definitely a forall statement, in the sense of saying “There is no Turing machine T such that for all algorithms M, T correctly determines whether M halts or not”. There is no algorithm M whose halting status cannot be verified by some Turing machine (just consider the one tailor-made to say “Yes” or “No” on that guy, according to the reality of the situation); it’s just that no one Turing machine will do the trick universally.

I got my degree in CS 25 years ago :eek: We were introduced to formal methods and languages like Z and VDM. Not that I now remember them, of course.

Anyway, something that hasn’t been mentioned upthread is the physical hardware. Even if the code has been formally verified etc, you’re dependent upon the hardware running that code correctly. Remember the famous Intel FDIV bug some years back? And beware conversions between decimal and binary going awry, not to mention hardware rounding errors!
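The decimal/binary pitfall is easy to demonstrate. A quick Python illustration: 0.1 has no exact binary representation, so the conversions are rounded on the way in, and verified logic sitting on top of naive floating-point arithmetic can still be surprised.

```python
# 0.1 and 0.2 are each rounded to the nearest binary64 value on conversion,
# and the sum of those roundings is not the binary64 value nearest to 0.3:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# A decimal type sidesteps the base conversion (at a performance cost):
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

This is rounding working as specified, not a hardware defect like FDIV, but the lesson is the same: the proof is only as good as the model of the machine underneath it.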

(Unless by an algorithm’s halting status, you intend an algorithm to be interpreted as taking in inputs, and its halting status to be the determination of which inputs it does and doesn’t halt on. In that case, yes, there is one algorithm whose halting status is uncomputable; for example, the algorithm which takes in as input a program (with all requisite inputs for this program provided), and runs it. But that this one algorithm’s halting status is undecidable is simply the same as noting that there is no universal halting status decider…)

And then there’s the backdoor. It’s a staple of badly written spy thrillers, but they exist in real life. And have done for a very long time.

Back in WWII, the Allies were busy trying to break the German Enigma code. You already know the story - Alan Turing, Bletchley Park, bombes, Colossus, U-whatever and the captured code book etc.

All this is true and was very valuable. But so was the weather. Each U-boat was required to transmit a weather report. Every day. In exactly the same format. Once the Allies realized this, it provided a backdoor to help crack the codes.

That’s not a backdoor.
That would be something of a known-plaintext attack, or at least a known format. Which is indeed a great help in decrypting messages.