Why were early computer researchers sure that real AI was just around the corner?

If it works, it isn’t AI.

Some of the most motivated pattern-matchers outside of Google are the people writing spam filters. Nobody calls what they do AI, but by the standards of the pre-Winter era it qualifies.

My point? AI isn’t as dead as it seems to be. We did indeed hit a wall in some very important areas (knowledge systems remain stupid, computer vision is almost totally blind, automated driving is worse than George Michael, etc.) but other AI areas (natural language processing, character recognition) are proceeding at a useful rate. The golden rule persists. I wonder when posting to the SDMB will stop being AI.

I wouldn’t go that far. A lot of motivational examples for machine learning courses, undeniably AI, are taken from spam filtering, especially when presenting Bayesian inference.
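
For the curious, the toy version those courses usually build up to is a naive Bayes classifier along these lines (the training messages and word-splitting "tokenizer" here are invented for the example, not taken from any real filter):

    import math
    from collections import Counter

    # Toy naive Bayes spam filter; training data is made up for illustration.
    spam_messages = ["buy cheap pills now", "cheap pills cheap"]
    ham_messages = ["lunch at noon", "see you at lunch"]

    def count_words(messages):
        counts = Counter()
        for message in messages:
            counts.update(message.split())
        return counts

    spam_counts = count_words(spam_messages)
    ham_counts = count_words(ham_messages)
    vocabulary = set(spam_counts) | set(ham_counts)

    def log_likelihood(message, counts):
        total = sum(counts.values())
        # Laplace smoothing so unseen words don't zero out the product.
        return sum(math.log((counts[word] + 1) / (total + len(vocabulary)))
                   for word in message.split())

    def classify(message):
        # Equal priors, so just compare P(words | spam) with P(words | ham).
        spam_score = log_likelihood(message, spam_counts)
        ham_score = log_likelihood(message, ham_counts)
        return "spam" if spam_score > ham_score else "ham"

    print(classify("cheap pills"))      # spam
    print(classify("lunch with you"))   # ham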

In other words, early computer scientists, familiar with the tenets of the cognitive psychology of the day, believed that aspects of human thought such as memory, visualization, language, et cetera were reducible to simple, easily modeled principles which, once mastered, would lead to more complex cognitive phenomena such as conceptualization, proprioception and spatial sense, and ultimately sentience would fall out of the mix. It turns out, however, that even very simple memory formation (not integrated sense-event memories, just basic behavioral action-response modification) is incredibly difficult to understand on the neurochemical level, to the point that it is only in the last decade or so that even this phenomenon has been satisfactorily described, in models that are not at all like the way computer memory works. And the nature of cognition is tightly bound to the functioning of the brain; human thought and sentience are not software running on hardware, or even discrete instruction sets embedded in the brain. They are, instead, an implicit part of neurochemical function, a series of processes built upon one another until there is a suitable level of abstraction from the basic mechanisms to give variation sufficient for independent choice (or at least what appears to be free will).

It may be possible to simulate this as a function-set abstraction on top of hardware (it's certainly possible to make a self-modifying and self-interpreting algorithm), but making it do something useful is far trickier, especially when we have only a very limited understanding of the mechanisms for cognition that evolved over more than 500 million years. Like making a machine fly, it seems very simple at the outset (hell, most birds and many insects do it), but you'll notice that we still don't have usable flying cars despite decades of Popular Science technoporn spreads promising just that.

Stranger

We do have flying cars: we just call them aeroplanes. But they cost too much for the average person to use for a 10-mile commute, especially given the land area that would be needed to take off and land, so the present-day Jetson family does not fly to work. In addition, they don't fly much like birds or insects do: we don't have planes that fly by flapping their wings.

Tell that to the Society for Putting Things on Top of Other Things.

Another big research area is games: AI bots for controlling NPCs, opponents in RTS games, etc.

But in general, once an AI problem is solved, it’s no longer considered AI.

Kinda like magic. Once the physical phenomenon is explained, it’s not magic any more. “It’s not levitation, it’s the Biefeld–Brown effect!”

Game AI is pretty simple stuff – mostly pathfinding and finite state machines. It doesn’t have much connection to academic AI research.
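
To give a feel for it, a guard NPC's "brain" is usually just a hand-rolled state machine, something like this sketch (the states and trigger conditions here are made up for illustration):

    # A guard NPC's behavior as a tiny finite state machine.
    class GuardFSM:
        def __init__(self):
            self.state = "patrol"

        def update(self, sees_player, player_distance):
            if self.state == "patrol":
                if sees_player:
                    self.state = "chase"
            elif self.state == "chase":
                if not sees_player:
                    self.state = "patrol"
                elif player_distance < 2.0:
                    self.state = "attack"
            elif self.state == "attack":
                if player_distance >= 2.0:
                    self.state = "chase"
            return self.state

    guard = GuardFSM()
    print(guard.update(sees_player=True, player_distance=10.0))   # chase
    print(guard.update(sees_player=True, player_distance=1.0))    # attack
    print(guard.update(sees_player=False, player_distance=5.0))   # chase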

This was interesting

‘Second Life’ is frontier for AI research
Artificial intelligence tests use virtual world’s controllable environment

Which illustrates the point exactly. Why does the technique used matter? AI is all about imitating human intelligence.

What I was pointing out is that there’s not much RESEARCH going on in game AI. Academic topics for AI research – neural nets, natural language processing, that sort of thing – don’t get used much in videogames. The tech that’s used for videogame AI is pretty straightforward and well-understood.
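
The pathfinding side is similarly plain: at its core it's just a graph search over the walkable tiles, along these lines (the grid here is invented for the example; shipping engines typically reach for A* with a distance heuristic, but the shape is the same):

    from collections import deque

    # Toy grid pathfinder (0 = walkable tile, 1 = wall); map is made up.
    GRID = [
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
    ]

    def find_path(start, goal):
        # Plain breadth-first search from start to goal.
        queue = deque([start])
        came_from = {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            row, col = cell
            for r, c in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
                if (0 <= r < len(GRID) and 0 <= c < len(GRID[0])
                        and GRID[r][c] == 0 and (r, c) not in came_from):
                    came_from[(r, c)] = cell
                    queue.append((r, c))
        return None  # no route exists

    print(find_path((0, 0), (2, 3)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]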

I agree that a lot of it was probably optimism that was informed by an interest in getting research money. That’s just human nature.

Have you ever known a scientist to announce something like this:

Not familiar with very many string theorists, are you? They’d kill to have something they could hope to see bear fruit within a single lifetime. Heck, even in the gravitational wave field (still on firm theoretical ground, unlike the string models), we do work on instruments two or three generations beyond the one we don’t expect to see launched for another decade.

They’d kill for a basic grasp of falsifiability and a world beyond over-complicated mathematical constructs.

String theorists are now hard at work building a more powerful Witten Mk II model. Originally, they were just going to upgrade the original machine with more memory and a faster processor, but it turns out that you also have to change out the motherboard and integrated math coprocessor, plus when you buy the new model they throw in the Games Pack with Calabi-Yau Minesweeper and Poincaré transcendental chess.

Stranger

Nope, but I’m very familiar with human nature. Do a google search on “string theory” and “press release”

Here’s an example

Derleth writes:

> Chomsky was convincing: Noam Chomsky’s work on formal grammars is
> unmatched and unquestioned to this day. However, he had the arrogance to
> put human languages up at the very top of his formal language hierarchy,
> implying that improving our ability to parse computer languages would lead to a
> workable ability to parse human languages and real AI can’t be far off then. His
> further bloviations on the ‘Universal Grammar’ supposedly encoded in the
> human brain only advanced the notion of simple deterministic solutions to
> human language.

Chomsky is a con artist. His early work on generative grammar only explained tiny portions of human language. Whenever some linguist would show how poorly human language matched up with his theories, he would postulate some new mechanism to explain the discrepancies and leave it to other linguists to try to make that mechanism work. Like AI, his theories on grammatical phenomena were always more about promise than about current explanation. His theories about other aspects of language like universal grammar are even more tenuous.

Wendell Wagner: In other words, Chomsky is a string linguist.

I can think of two factors:

  1. The 1950s were a time of tremendous optimism: we had won WWII, nuclear power plants were going up, and AI was seen as amenable to being solved (by throwing money at it). As with many other things (fusion power, for instance), it took a long time to recognize just how difficult these problems were. Fusion power is always 30 years away, and has been since the 1950s.
  2. The pioneers of cybernetics (does anybody use that word anymore?), guys like Norbert Wiener, R. M. Fano, etc., thought that the human brain could be modelled using relatively simple mechanisms like feedback and simple association. It turns out that we really don’t understand the thought process very well.
    On the other hand, why do we need to reproduce the brain? An airplane doesn’t fly the same way birds do, and it is just as useful. I see the future as machines augmenting the human brain.

Only some people were interested in making computers more like human brains. The main thrust was (and is) making computers think like humans, regardless of the underlying processes.

Licklider had similar thoughts in 1960, when he wrote Man-Computer Symbiosis. The concept of computers as brain augmentation is an old one and it’s something that has come true.