Wendell, in response to your last two posts: thank you for the acknowledgment in #11, and I don’t really disagree with much of what you said in #9, so maybe we’re not at risk of getting into a big OT argument here!
I’m sorry if you thought I was “dismissive” of your statement, but this is what I was reacting to – emphasis mine for extra clarity:
The reason that Watson’s answers sound like nonsense as much as 10% of the time is that Watson is not intelligent in the sense of a normal human being. Watson merely parses the questions it’s given using its limited sets of parsing tools, looks for any sentence in its huge database with similar words, parses that sentence with its tools, and sees if there’s a close enough match in those two parsings.
You can see why I brought up the Chinese Room argument. This is exactly what Searle was trying to claim – that “mere” pattern matching or “mere” symbol processing doesn’t embody true understanding and that such a system is therefore “not intelligent” (your words). And Searle’s claim was regarded as nonsense on many levels right from the start by most of the AI and cognitive science communities, as I pointed out in the referenced link. He obfuscates the difference between a component of a system and the synergy of the system as a whole, and even more fundamentally, he gets into a facile semantic quibble about what “understanding” is supposed to mean. As Steve Pinker points out, we’re often reluctant to use the word unless stereotypical conditions apply (i.e., human actors), but human intelligence is intrinsically computational, too, because it’s carried out by “patterns of interconnectivity that carry out the right information processing”.
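Just to make concrete what that kind of “mere” pattern matching looks like, here’s a deliberately toy sketch in Python – my own illustration, obviously far cruder than anything in Watson’s actual architecture – of a bare word-overlap matcher of the sort that description evokes:

```python
# A toy "pattern matcher": no semantics, just surface word overlap.
# This is the flavor of processing Searle dismisses as "mere" symbol
# manipulation; it is emphatically not how Watson actually works.

def tokenize(text):
    """Lowercase the text and split it into a set of words, stripping punctuation."""
    return {word.strip(".,!?\"'") for word in text.lower().split()}

def best_match(question, database):
    """Return the database sentence with the highest word-overlap (Jaccard) score."""
    q_words = tokenize(question)
    best_sentence, best_score = None, 0.0
    for sentence in database:
        s_words = tokenize(sentence)
        union = q_words | s_words
        score = len(q_words & s_words) / len(union) if union else 0.0  # Jaccard similarity
        if score > best_score:
            best_sentence, best_score = sentence, score
    return best_sentence, best_score

# The matcher finds a related sentence purely by shared words; it has no idea
# what any of those words "mean".
database = [
    "IBM's Watson defeated two human champions on Jeopardy in 2011.",
    "The Chinese Room is a thought experiment proposed by John Searle.",
    "Jeopardy clues are often worded as puns or twists of language.",
]
print(best_match("Who did Watson defeat on Jeopardy?", database))
```

The point of the sketch is only that surface matching by itself looks dumb in exactly the way the Chinese Room invites you to imagine – which is why arguing from that picture to “not intelligent” doesn’t settle anything about the system as a whole.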
Perhaps I misunderstood you. If we put aside your initial argument and take up the question of whether technology at the level of Watson could realistically operate as a real-time debate fact-checker, that’s a different issue from claiming that it’s “not intelligent”, and on that issue you may well be right – for now.
The difficulty is that Watson would lack a sufficiently deep contextual understanding to optimize its fact-checking. One could semi-humorously trivialize it by imagining that the candidate’s first words are “I’m happy to be here tonight,” at which point Watson goes off in a frenzy of fact-checking the veracity of the candidate’s claimed happiness – is it a direct causative effect of the quality of the audience, or merely the happiness induced by being present in the city or the auditorium?
This is the kind of thing that AI skeptics like to scoff at, yet if we can build and populate the right kinds of knowledge representations, deep semantic and contextual understanding begins to manifest as an emergent property of the system. How do we know this? Because humans do it with what often appear to be the same kinds of processes, and because systems like Watson do it, too, within particular skill sets and knowledge domains.
How hard it would be to adapt and train a system like Watson to do this well is an open question. I’d point out, though, that Jeopardy is no cakewalk either: many of the questions (“answers”) are worded as clever puns or other unusual twists of language, and the knowledge base that has to be mined is very diverse – it’s basically “all the stuff about the world that typical smart people might know”. Nevertheless, I think you have some good points in the context of political debates. In the near term, if something like Watson were applied to debate analysis, it might have to work the way one of Watson’s spinoff commercial products will be used in health care – as a clinical decision support system whose role is to assist a human expert by providing highly customized, confidence-scored responses to human queries, mined from a vast knowledge base.
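To sketch what that assist-the-expert mode might look like in code – again my own toy illustration, not anything drawn from IBM’s actual product – the essential idea is that the system returns confidence-scored candidates mined from its knowledge base and explicitly defers to the human when its confidence falls below a threshold:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    passage: str
    confidence: float  # 0.0 - 1.0, as estimated by the system's evidence scorers

def overlap_score(query, passage):
    """Crude stand-in for real evidence scorers: fraction of query words found in the passage."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words) if q_words else 0.0

def assist(query, knowledge_base, threshold=0.6):
    """Rank passages by confidence and defer to the human expert when confidence is low."""
    if not knowledge_base:
        return "No passages to consult."
    ranked = sorted(
        (Candidate(p, overlap_score(query, p)) for p in knowledge_base),
        key=lambda c: c.confidence,
        reverse=True,
    )
    top = ranked[0]
    if top.confidence < threshold:
        return f"Confidence {top.confidence:.2f} is below threshold; deferring to the human expert."
    return f"Best-supported passage (confidence {top.confidence:.2f}): {top.passage}"
```

The human stays in the loop: the system’s job is to surface and score evidence, not to render the verdict – which is also how I’d expect a near-term debate fact-checker to be deployed.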