I think a lot of the issues with AI arise when we take one very narrow problem and beat it into the ground. I somewhat agree with:
[QUOTE=John McCarthy]
"Chess is the Drosophila of artificial intelligence. However, computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies."
[/quote]
Now, as with the “brute-forcing the human brain” approach, I'm not saying it isn't useful. I certainly agree with:
[QUOTE=Drew McDermott]
Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.
[/quote]
The point is, I agree with the full intent of the original quote: Drosophila are incredibly useful in biology, and chess can be incredibly useful in AI. The problem is, how many world-champion chess grandmasters are there? It seems like we're beating these specific problems into the ground. I guarantee that in any random sample of people, you'll find more who can watch two cartoons from the same show and give a reasonable prediction of the antics they'll see in a third than you'll find people with absolute mastery of chess, and they manage that without knowing every literary trope that could occur in the episode.
AI isn't playing chess or winning Jeopardy; those are TOOLS. But we get bogged down in building something that can win rather than in the thought processes behind these tasks. Yes, we model adversarial search, database lookup, and uncertain reasoning, but we've missed some of the great underlying principles, such as: why can a person, given adequate preparation, play both Go and Chess competently? I mean, we have a Chess computer that can beat world champions… and a Go program that can be beaten by an 8-year-old. This isn't the computer solving problems; it's people solving problems and then generalizing the solutions for the computer. That isn't wrong, by any means, but I think we need to step back and try to generalize MORE: find broader aspects of cognition, find a way to make the computer generate a heuristic for a given problem.
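To make the point concrete, here's a minimal sketch of why the search itself is already general and the heuristic is the part humans hand-craft per game. The `Game` interface, class names, and the toy subtraction game below are my own illustrative inventions, not any particular chess or Go engine; the negamax routine knows nothing about the domain it searches.

```python
class Game:
    """Minimal interface a game must provide for the search to work.
    Everything domain-specific (chess, Go, whatever) lives behind it."""
    def moves(self, state): raise NotImplementedError
    def apply(self, state, move): raise NotImplementedError
    def is_terminal(self, state): raise NotImplementedError
    def evaluate(self, state): raise NotImplementedError  # heuristic, from the mover's view

def negamax(game, state, depth):
    """Best achievable score for the player to move; fully game-agnostic."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    return max(-negamax(game, game.apply(state, m), depth - 1)
               for m in game.moves(state))

# Toy domain: players alternately remove 1-3 tokens; taking the last one wins.
class Subtraction(Game):
    def moves(self, state): return [m for m in (1, 2, 3) if m <= state]
    def apply(self, state, move): return state - move
    def is_terminal(self, state): return state == 0
    def evaluate(self, state):
        # At 0 tokens the player to move has already lost.
        return -1 if state == 0 else 0

game = Subtraction()
# From 5 tokens, pick the move whose resulting position is worst for the opponent.
best = max(game.moves(5), key=lambda m: -negamax(game, game.apply(5, m), 10))
print(best)  # 1 — leaving 4 tokens, a losing position for the opponent
```

Swapping in a different `Game` subclass changes nothing about the search; what made Deep Blue strong was precisely the human-engineered `evaluate` (plus enormous speed), which is the part that doesn't transfer from Chess to Go.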
There are other things people can do that are amazing, such as watch a couple of Wile E. Coyote cartoons and discern from any further ones "Okay, he pushed a rock off the cliff; the Coyote will likely get hit by it in some fashion." Now, that specific case is hard to solve because modelling the percepts (i.e. seeing and cataloging the images, compartmentalizing the various objects into what they represent, hearing the sounds) is at least as complicated as the reasoning itself, but those kinds of problems, which damn near any mentally fit person can do and yet which present astounding insights into incredibly complex pattern recognition, are the kind that should be looked at critically. Hell, just figure out why a 5-year-old can look at a stick figure and realistically interpret it as a human being despite the two bearing barely any resemblance to each other. These are the kinds of things that are going to really advance AI in a meaningful way, not solving any specific problem like Chess, Go, Checkers, or Extreme Mountaintop Egg Pasteurization.
Easier said than done, I know. We've been banging our heads against the wall on natural language processing for years. It's just that I think it's a better direction than optimizing adversarial searches.