Cecil had a sort of throwaway line in that report that I must comment on, where he said “However, all this strikes me as the equivalent of teaching a computer to beat people at chess — a neat trick, but not one that challenges fundamental notions about human vs nonhuman abilities.”
Oh, Master, how disappointing to read that from your esteemed font of wisdom!
True, something can be said in its defense. Since there are philosophers and even cognitive scientists who hold that opinion, it cannot be called flatly “wrong”. But most people who study cognition and machine intelligence would strongly disagree with it, so at the very least it is wrong to state it as a point of fact rather than as an opinion.
The central issue is just how high you must set the bar on the continuum of intelligence before it’s acceptable to declare something “human-like”. In the late 1950s and early 1960s, many doubted that a computer would ever play really good chess, and their reasoning was sound. It was easy to see that a computer could be programmed to follow the rules of chess, and it didn’t seem particularly challenging to evaluate the consequences of each possible move from a given board position. But there is much more to playing good chess than that: the best-looking move in the current position may turn out to be a terrible one several moves down the line, and the search tree grows exponentially with each additional move of look-ahead, very quickly becoming computationally intractable.
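To put rough numbers on that explosion, here is a quick back-of-the-envelope sketch in Python. It assumes an average branching factor of about 35 legal moves per position, a commonly cited figure rather than anything from Cecil’s column:

```python
# Rough illustration of why naive exhaustive look-ahead is hopeless.
# A branching factor of 35 is a commonly cited average number of legal
# moves in a chess position; the true value varies by position.
BRANCHING_FACTOR = 35

for depth in range(1, 11):
    positions = BRANCHING_FACTOR ** depth
    print(f"{depth:2d} half-moves ahead: ~{positions:.1e} positions to evaluate")
```

By ten half-moves, only five full moves deep, the count is already around 2.8 × 10^15 positions, far beyond what the machines of that era (or even ours, naively) could examine move by move.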
Their conclusion, therefore, was that since a computer could never “see” more than a few moves ahead, it could never play more than a mediocre game of chess. How people do it remains something of a mystery; grandmasters seem to have a strategic mental picture of the game, but not one they can fully articulate. They can teach an average player to become a much better one, but they cannot teach anyone to reach world-class level. You either have that intangible skill or you don’t.
Yet this is precisely what the best modern chess programs, Deep Blue most famously, have achieved. And they do not, and cannot, do it by brute-force look-ahead alone. Part of how they do it can be broadly described as a large set of heuristics used to trim the search tree and to evaluate positional strength. In a very narrow sense Cecil is right that such heuristics might be described as “tricks” to optimize the results, but that is a misleading way of looking at it. In their totality they amount to a model of a chess master’s accumulated knowledge and indefinable strategic vision of the game. It is the end result that matters: if we agreed in advance that such a result represents intelligence, and indeed some people argued years ago that computers would never achieve it for just that reason, and computers then in fact achieve it, we have to acknowledge that they have attained intelligence at some level.
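For flavor, here is a minimal sketch of the best-known search-trimming technique, alpha-beta pruning, run over a toy game tree whose made-up leaf scores stand in for a positional-evaluation heuristic. It illustrates the general idea only; it is in no way Deep Blue’s actual machinery:

```python
# Toy alpha-beta search over an abstract game tree.
# Internal nodes are lists of child subtrees; leaves are heuristic
# scores, standing in for a positional evaluation function.

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):   # leaf: return its heuristic score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:        # prune: opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:        # prune: we would never choose this line
                break
        return best

# A tiny hand-made tree: the search finds the best achievable score
# while skipping subtrees that provably cannot change the outcome.
tree = [[3, 5], [6, [9, 1]], [2, 0]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6
```

The point of the sketch is the `break`: whole subtrees are skipped once it is provable they cannot affect the final choice. Each such rule is a “trick” in isolation, but in aggregate they are what lets a program invest its effort where a master’s judgment would.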
There is a tendency among some to regard any process whose inner workings we understand, such as the Watson “Jeopardy!” champion, as merely mechanistic and not “true” intelligence, no matter how impressive its results. But this is just wrong, and it reflects what Marvin Minsky described as the phenomenon that “when you explain, you explain away”. Someone from decades past would have had no doubt that Deep Blue was genuinely intelligent, but today some people reject that view simply because they more or less understand how it works. The reality is that a widely held theory in cognitive science holds that even human intelligence derives, at its core, from computational processes, and there is a whole field of computer science called computational intelligence that seeks to advance machine intelligence through techniques sometimes closely analogous to human cognition.