Why were early computer researchers sure that real AI was just around the corner?

I mean here are really smart guys in the 50’s, 60’s, and 70’s working with room sized computers that would be outclassed by the CPU in a modern singing birthday card, thinking that some form of real AI was just 10-20 years away.

Now in 2008 we’re struggling to implement levels of true AI that might match a cat’s intelligence on a good day with computers that are many orders of magnitude more powerful than what they had.

How did they so hugely misjudge the scope of the problem?

I’m sure other’s will be along with a definitive answer, but the first thought that comes into my head is: That’s just the way people are.

Look at the number of problems that smart people thought would be solved by now: Cancer, baldness, fusion power, cheap space flight, world peace, aging, etc.

Even people who should know better always seem to underestimate the complexity of problems. With AI, even the definition of the problem is hazy, so a solution is that much more difficult.

As they say, it’s been 10 years away for the past 50 years.

I think it’s more indicative of the phenomenal pace of information development in the (relatively) recent past. It wasn’t known (and of course still really isn’t) even what the problems were that would require solutions, much less what those solutions were.

There are a whole bunch of reasons - for one thing, there was just a great deal of optimistic futurism going around about everything - we were the masters of electricity, chemistry, the atom - every problem was going to yield to us through the application of technology. We weren’t just going to have thinking machines, we were also going to have flying cars, streets with moving walkways, travel by vacuum tube, atomic ovens, holidays on the moon - we were going to eliminate starvation, disease and poverty - our future world would be so mechanised, we would spend most of our time at leisure.

More or less all these predictions have a couple of things in common:
-Scant in-depth appreciation of the problem to be solved
-Infant grasp and application of the relevant developing technologies

We haven’t grown out of this habit, either - it still happens over subjects like nanotechnology, power generation, genetics and medicine. Not sure it is, or ever was, the fault of the folks actually intimately in contact with the developing technologies, but rather, sensationalist optimism on the part of a larger community of writers.

I was just going to say, Mangetout, where’s my effin’ Flying Car?

I think it’s because it looks easy, and it looks easy because (almost) every newborn human infant learns all these complex things, like running and talking, within just a few years, without even being deliberately trained. If babies can do it, why can’t computers, which calculate so much faster, do it too? And the answer is probably that babies are hardwired to learn these important skills, while so far computers aren’t.

A few things off the top of my head:
[ul]
[li]Computers were really new: The people who were making those bets were well beyond the Giant Adding Machine myth in how they looked at computers. The problem was that there wasn’t yet a solid sense of what the next step should be. All they knew was that computers were making a lot of things much, much easier and that it was obvious they would only get better.[/li]
[li]Eliza was convincing: For those who don’t know, Eliza, one of the earliest ‘conversational AIs’ (chatterbots), did a surprisingly good imitation of a Rogerian psychologist. In reality it was just shuffling strings around (a minimal sketch of that kind of string shuffling follows below this list), but humans could get quite attached to this endless well of sympathy. The Eliza Effect is thinking there’s more depth to something (a book, a program, a politician) than there actually is.[/li]
[li]Chomsky was convincing: Noam Chomsky’s work on formal grammars is unmatched and unquestioned to this day. However, he had the arrogance to put human languages at the very top of his formal language hierarchy, implying that improving our ability to parse computer languages would lead to a workable ability to parse human languages, and that real AI couldn’t be far off. His further bloviations on the ‘Universal Grammar’ supposedly encoded in the human brain only advanced the notion of simple deterministic solutions to human language.[/li]
[li]Google didn’t exist: Google is the current hotbed of AI research, but nobody calls it that because everyone knows AI died in the 1980s. However, Google is practical and relies on working AI, not Government-funded AI. The differences between the two are one reason I look at Government-funded medicine so skeptically.[/li]
[/ul]
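
For anyone curious what “shuffling strings around” amounts to, here is a minimal Eliza-style sketch in Python. The patterns, canned replies, and pronoun table are my own illustrative stand-ins, not Weizenbaum’s actual script, but the mechanism (match a template, reflect the pronouns, drop the captured fragment into a stock response) is the whole trick:

[code]
import random
import re

# Pronoun swaps so "I need my coffee" comes back as "you need your coffee".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, canned responses) pairs, tried in order; all invented for illustration.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    """Swap first- and second-person words so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    """Return the first matching canned response, with captured text reflected."""
    for pattern, responses in RULES:
        match = re.match(pattern, sentence.lower().rstrip(".!?"))
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(group) for group in match.groups()))

print(respond("I need a vacation"))    # e.g. "Why do you need a vacation?"
print(respond("I am feeling stuck"))   # e.g. "How long have you been feeling stuck?"
[/code]

There is no model of the conversation anywhere in there; the apparent sympathy is entirely in the reader’s head, which is exactly the Eliza Effect described above.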

In the 50s, 60s, 70s, **were** serious computer experts predicting “real AI” in 10-20 years? Or was it the journalists and the SF writers?

When I started in electronics in the mid-70s and 80s, the message I mostly remember was that it was not easy and not going to happen soon.

I wonder if a motive for researchers (unacknowledged even to themselves) was a practical one: to secure support.

1950s researcher: We will have (AI|nuclear fusion|flying cars) in (10|20) years’ time
1950s grant-giving body/private investor: Here’s a lot of cash, help yourself.

1950s researcher: We will have (AI|nuclear fusion|flying cars) in 100 years’ time
1950s grant-giving body/private investor: Thank you for your time. Don’t call us, we’ll call you.

[thread=468420]Aargh![/thread]

I think part of the problem, perhaps not all of it, was that many early researchers thought that it was simply a matter of computing power, and they all thought that we would eventually get enough.

You mean, the definitive answer is a misuse of an apostrophe!
AMAZING!

This attitude is what gave rise to Modernism: we’re on the verge of utopia, our big problems are being solved one after the other, there’s nothing we can’t do, the sky’s the limit, and so on. It didn’t last very long, because after the engineering problems were fixed (how to build large bridges and tall buildings, defeating easily-cured diseases, etc.), we ran into the recalcitrant stuff (cancer, war, poverty, hunger), and realized the optimism was misplaced.

That gave rise to the Post-Modernist movement, which is a descriptor much misused today. These days, you’ll hear it as a synonym for anything “meta,” which is itself a vague and much-misplaced term; “ooo, that’s so post-modern,” or whatever. Postmodernism was specifically a response, in the visual arts, to the rosy pie-in-the-sky dreams of a technological paradise in the making, and has at its core the philosophy of “we don’t know as much as we think we do.” In architecture, for example, Modernism created buildings that are sterile reflections of, or deliberate showcases of, the technology that makes the structure possible; postmodernism puts back some of the functionless ornamentation, sometimes with a humorous wink, or sometimes with a shrug (“we don’t know why humans prefer this, but they do”).

As the general population became more disillusioned with the inability of the technology brokers to deliver on their early promises (where are our flying cars? thinking computers? immortality?), the complex ideas of postmodernism caught on in a simplified and mutated form, and expanded beyond their origins.

</art geek>

All of which is a roundabout way of answering the OP: If you want to see where these promises originally came from, do some reading on the theme of Modernism.

Yeah, there were serious researchers who considered AI to be just around the corner. Perhaps the most famous of them was Turing himself, the father of AI, who predicted that within about fifty years a computer would be able to play his imitation game well enough to fool an average interrogator.

Further, Marvin Minsky, in the 1960’s, once asked (seriously) a graduate student to “solve the problem of computer vision” over a summer.

So clearly, the idea that AI would be a solved problem within a few decades did hold some sway amongst serious researchers in the field.

The field was filled with this sort of optimism all the way into the 1980’s, which was one of the reasons why the AI Winter occurred. You can only promise the earth, yet fail to deliver, so many times before funding and commercial interest begin to dry up.

As for why there was so much optimism, well, there were large advances early on in the history of the field. It’s pretty easy to take a very simple domain, like the blocks world for automated planning, find some success there, and infer that extra computational power will allow you to generalise your program to solve arbitrary problems.

Unfortunately, that generally isn’t the case: there are all sorts of combinatorial explosions that happen when you start to make your domain more complicated, and these in turn require more than a brute-force approach; they require clever heuristics and search control strategies.
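
To make that concrete, here’s a toy blocks-world planner in Python. This is my own minimal formulation of the domain, not any particular historical system: a state is a set of stacks, the only action is moving a top block onto another stack or onto the table, and the “planner” is nothing but brute-force breadth-first search. It solves three or four blocks instantly, and bogs down badly as you add blocks, which is the combinatorial explosion in miniature:

[code]
from collections import deque

def canonical(stacks):
    """Drop empty stacks and sort, so equivalent states compare equal."""
    return tuple(sorted(tuple(s) for s in stacks if s))

def successors(state):
    """Yield (move description, next state) for every legal single move."""
    stacks = [list(s) for s in state]
    for i, src in enumerate(stacks):
        block = src[-1]                      # only the top block can move
        for j, dst in enumerate(stacks):     # ...onto the top of another stack
            if i != j:
                new = [s[:] for s in stacks]
                new[i].pop()
                new[j].append(block)
                yield "put %s on %s" % (block, dst[-1]), canonical(new)
        if len(src) > 1:                     # ...or down onto the table
            new = [s[:] for s in stacks]
            new[i].pop()
            new.append([block])
            yield "put %s on the table" % block, canonical(new)

def plan(start, goal):
    """Brute-force breadth-first search over the entire state space."""
    start, goal = canonical(start), canonical(goal)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for move, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))

# Stacks are listed bottom-to-top: start with A on C and B alone on the table,
# and ask for the tower C-B-A.
print(plan([["C", "A"], ["B"]], [["C", "B", "A"]]))
[/code]

Swapping that plain queue for a priority queue ordered by some heuristic (say, the number of blocks not yet where they belong) is the kind of search control strategy meant above; finding heuristics that actually scale to interesting domains is the hard part.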

From the fortune files:



.. in three to eight years we will have a machine with the general
intelligence of an average human being ... The machine will begin
to educate itself with fantastic speed.  In a few months it will be
at genius level and a few months after that its powers will be
incalculable ...
                -- Marvin Minsky, LIFE Magazine, November 20, 1970


Anyone who wants to verify that should be able to.

IIRC, both Alan Turing and Marvin Minsky are on record with saying something to the effect that “computers will equal or surpass human intelligence in [10|20|50] years.” I’m pretty sure that Turing’s statements can be found in the Computing machinery and intelligence essay (which proposed the “Turing Test”); here is a blurb with some Minsky quotes. I don’t recall any specific statements by others (e.g., McCarthy, Newell, Shannon, etc.), but I’d find it surprising if there aren’t any. More to the point is to look at “The Dartmouth Conference”, from which emerged both the term “AI” and specific research directions (mentioned, for instance, in A Brief History of AI).

To the OP’s question: others’ responses are good and cover a lot of ground. I’d venture to say that it was hubris, resulting from a, loosely stated, Aristotelean perspective. That is, the gauge of “intelligence” that most people use is ill-defined, but can be satisfied by defining it as “what a human can do (that other entities cannot)”. (Note the parenthetical, which is responsible for much of AI researchers’ grief; once a machine can do something, goalposts are shifted and it is no longer considered a mark of intelligence.)

“Logic”, which I’m using as shorthand for “higher-level cognition”, is often cited as “what separates humans from other animals.” Aristotle (again, speaking loosely) elevated logic to an ideal; designing a machine that can “do logic” was thought to be designing a machine that is intelligent. Little did they consider the role of (and difficulty involved with implementing) emotion, “common” knowledge, embodiment, and a host of other contributing factors.

Part of the problem is that folks stopped trying, some time in the 1980s. Forget Eliza-- That wasn’t an attempt at intelligence, it was an attempt at looking intelligent. Take a look at some of SHRDLU’s dialogs sometime: It really was fully capable of interacting with its world, and discussing its interactions with humans. Its prime limitation, of course, was that the world it inhabited was so incredibly simplistic.

Part of the problem was that computers are really good at doing things that even intelligent people find difficult. It’s really hard to manually work your way through the differential equations that describe the trajectory of an artillery shell, but a computer can find the answer in the blink of an eye. If a computer can outperform a smart human on a mental task like that, the computer must be even smarter!
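
To put a number on “the blink of an eye”: here’s a crude numerical integration of a shell’s trajectory with air drag, using the simplest possible method. The launch values and drag constant are invented for illustration, but it’s the sort of calculation that once took human computers enormous effort to tabulate by hand, and that even a slow machine finishes instantly:

[code]
import math

g = 9.81       # gravity, m/s^2
k = 5e-5       # invented quadratic-drag constant per unit mass, 1/m
dt = 0.01      # time step, s

# Launch at 450 m/s, 45 degrees (also invented numbers).
x, y = 0.0, 0.0
vx = 450.0 * math.cos(math.radians(45))
vy = 450.0 * math.sin(math.radians(45))

steps = 0
while y >= 0.0:
    speed = math.hypot(vx, vy)
    ax = -k * speed * vx                 # drag opposes horizontal motion
    ay = -g - k * speed * vy             # gravity plus drag vertically
    x, y = x + vx * dt, y + vy * dt      # crude Euler step for position
    vx, vy = vx + ax * dt, vy + ay * dt  # and for velocity
    steps += 1

print("Range with drag: about %.1f km after %d steps" % (x / 1000.0, steps))
[/code]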

What early researchers didn’t appreciate is just how different the architecture of the brain is from the architecture of a Von Neumann computer. Math is hard for us because our brains aren’t really wired for it. So only really, really good thinkers can manage higher level math at all. But math is easy for computers. So a really weak, stupid computer can do very powerful math.

Most of what we consider AI – natural language processing, understanding human intention and motivation, etc. – consists of things that are hard for computers to do and easy for our brains to do. So for early computer researchers, the idea that something as trivial as identifying an object against a patterned background could be a difficult problem was hard to grasp.

We are significantly better than computers at math (although the automated theorem provers are closing this gap). What they beat us at is computation. Trust a computer to find the solution to a particular differential equation, and a human to prove that classes of differential equations have unique solutions.
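
For the curious, that split shows up neatly in the classic existence-and-uniqueness theorem for ODEs. The statement below is just the standard textbook Picard-Lindelöf result (nothing specific to this thread), and it’s exactly the kind of claim about a whole class of equations that a human proves once, while the computer grinds out numbers for one particular f:

[code]
% Standard Picard--Lindelof statement (general textbook fact).
\textbf{Theorem (Picard--Lindel\"of).}
Let $f(t, y)$ be continuous in $t$ and Lipschitz continuous in $y$
on a neighbourhood of $(t_0, y_0)$. Then the initial value problem
\[
  y'(t) = f\bigl(t, y(t)\bigr), \qquad y(t_0) = y_0
\]
has exactly one solution on some interval $[t_0 - \varepsilon,\ t_0 + \varepsilon]$.
[/code]

A numerical solver never engages with that theorem at all; it assumes the hypotheses hold and marches forward step by step, which is the computation half of the split.
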

Since computation is part of mathematics, I think it’s better to say that computers are better at some areas of mathematics, and humans are better at other areas. The areas that computers are better at are those with well-defined algorithms, and lots of calculations. Humans are better at identifying patterns of various kinds.