Turing's SN proof for simply typed lambda calculus
My ability to Google correctly has failed me completely this week. Thankfully there are a few logicians/CS types here who may know what I'm talking about.
I have a simply typed context-calculus and I'd like to prove that it is SN. A proof of strong normalization is normally quite lengthy, but I remember reading a short (about one paragraph) proof, due to Turing, that the simply typed lambda-calculus is SN (though I may be completely misremembering, and it may be for WN). If I remember correctly, this proof was never published, but I have found a reference to a paper by Gandy in which he outlined it (which I also cannot find online). In the past I managed to pull up an outline of the proof on a mailing list via Google. Now, however, I cannot find it for love nor money, and I have a mental block: I've completely forgotten how Turing went about proving it.
Can anybody here either outline the proof, or point me to an online reference for it? Wikipedia isn't helping (it normally has references to these things at the bottom of articles), I've searched the TYPES mailing list, everything. Help!