Actually, the BeepKillBeep official seal of approval is much appreciated! 
OK, a few things here. One is that I have the impression that you didn’t really read my post #65, particularly the last paragraph, because it addresses exactly this issue.
Second, your continued obsession with ALUs is a completely irrelevant distraction. No one at this level of discussion cares how CPUs are built. The real question is: can intelligence arise as an emergent property of computational methods, or, equivalently, can a Turing-equivalent machine exhibit intelligent behavior? The implementation of the Turing-equivalent platform is irrelevant, by definition of what “Turing equivalent” and “computation” mean.
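To make that concrete, here’s a toy sketch (my own illustration, not anyone’s official definition): a few lines of Python that simulate an arbitrary Turing machine from a transition table. Notice that nothing about ALUs, transistors, or any other hardware detail appears anywhere. Feed the same table to any Turing-equivalent substrate and you get identical behavior, which is precisely what “the implementation is irrelevant” means.

    from collections import defaultdict

    def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
        # rules maps (state, symbol) -> (new_state, symbol_to_write, move),
        # where move is -1 (left) or +1 (right). The machine halts when it
        # reaches the state "halt". Nothing here depends on the hardware.
        cells = defaultdict(lambda: blank, enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            state, cells[head], move = rules[(state, cells[head])]
            head += move
        return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

    # Example machine: binary increment. Walk right to the end of the
    # number, then carry 1s from the right until a 0 (or a blank) absorbs it.
    rules = {
        ("start", "0"): ("start", "0", +1),
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("carry", "_", -1),
        ("carry", "1"): ("carry", "0", -1),
        ("carry", "0"): ("halt",  "1", -1),
        ("carry", "_"): ("halt",  "1", -1),
    }
    print(run_turing_machine(rules, "1011"))  # prints "1100" (11 + 1 = 12)

The point of the little increment example is only that the table, not the machinery underneath it, determines what gets computed.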
Now it’s pretty clear (again, post #65) that most computer scientists and AI researchers believe the answer is “yes,” and that indeed such intelligent behavior has already been demonstrated and is rapidly advancing. It actually seems really hard to argue that when a system like IBM Watson outsmarts the smartest people in the room, it isn’t exhibiting intelligence. Some might argue that apparently intelligent behavior really isn’t, simply because a computer is doing it, but that’s just a meaningless semantic quibble.
The more interesting question is whether there are inherent limits to what computational methods can achieve, which revolves around whether a system performing syntactic operations on symbols can, at a sufficient level of complexity, eventually develop true “understanding” as an emergent property. This is the question John Searle’s Chinese Room argument tries to answer in the negative, but it’s an argument that most computer scientists consider misguided and silly. The argument and the standard replies to it are a good read if you’re interested. My own take is that the “systems reply” adequately demolishes the argument, and that Searle’s attempt to refute the systems reply is deeply flawed.
One way to look at this issue is to consider the principle that a sufficiently great quantitative change in the complexity of a system yields qualitative changes in its fundamental properties. The roundworm C. elegans has about 300 neurons in its entire nervous system, and few would call that creature intelligent or sentient. The human brain contains some 86 billion neurons, nearly 300 million times as many, along with a correspondingly vast number of synapses, and what a difference that makes! It’s not the neuron that’s magical, it’s the sheer numbers that lead to rich information content and powerful processing capabilities.
In the same way, a calculator with programmatic features that let you save a sequence of, say, 32 instructions, including test-and-branch instructions, might be able to do a few simple tasks, but one could make a convincing argument that whatever it did, it wouldn’t be exhibiting intelligence. That argument would sound much like the one you’re making, and there it would be right. But look at IBM’s Watson system for the Jeopardy challenge and the DeepQA framework it uses: about a million lines of code in 130 major software components, running on 90 IBM Power 750 servers providing 2,880 parallel execution threads and 16,384 GB of RAM stuffed with code and data. This is something fundamentally and qualitatively different from our programmable calculator, not anything that could meaningfully be called “a simple process.”

It’s not yet a jump on the scale of roundworm to human, but we’re getting there. And if one still quibbles about “real” understanding and “real” thinking, there is some future generation, some future iteration, of such systems where the quibbling will stop, because the evidence will not only be irrefutable, we’ll be seeing real creativity and the first signs of self-awareness.
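For the curious, here’s roughly the kind of programmable calculator I have in mind, again as a toy Python sketch (the instruction set is invented purely for illustration): 32 slots of program memory, one accumulator, and a single test-and-branch instruction. It can compute and it can loop, and that’s about all:

    def run_calculator(program, acc=0.0, max_slots=32, max_steps=10_000):
        # Instructions are tuples: ("add", x), ("sub", x), ("mul", x),
        # ("div", x), or ("brle", x, target) -- jump to program slot
        # `target` if the accumulator is <= x. Returns the accumulator.
        assert len(program) <= max_slots, "only 32 slots of program memory"
        pc = 0
        for _ in range(max_steps):
            if not 0 <= pc < len(program):
                break                       # ran off the program: halt
            op, *args = program[pc]
            if op == "add":
                acc += args[0]
            elif op == "sub":
                acc -= args[0]
            elif op == "mul":
                acc *= args[0]
            elif op == "div":
                acc /= args[0]
            elif op == "brle":              # the lone test-and-branch
                if acc <= args[0]:
                    pc = args[1]
                    continue
            pc += 1
        return acc

    # Keep adding 1 until the running value exceeds 100: a genuine loop
    # with a conditional test, but nobody would call it thinking.
    program = [
        ("add", 1),        # slot 0: acc = acc + 1
        ("brle", 100, 0),  # slot 1: if acc <= 100, branch back to slot 0
    ]
    print(run_calculator(program))  # prints 101.0

It’s a genuine computer in the Turing sense, and yet the gulf between this and something like DeepQA is exactly the quantitative-to-qualitative jump I’m describing.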