When you say “accomplished by doing just that”, do you mean that various researchers decided to take different approaches, documented their results, and everyone continued building on each other’s work resulting in good progress over the last X decades?
Because that’s why progress has been made in those two areas and I would classify that as “cleverness and carefully designed experiments/systems”.
I personally wouldn’t count the traditional approach to chess AI as AI. It’s a very task-specific method that doesn’t come anywhere close to touching the general nature of a human’s ability to solve problems, with chess being just one example.
I think a system that has that general problem-solving ability and can sit down at any game, be told the rules, and then improve over time gets closer to what we think of as intelligent.
Which is what my undergrad research thesis is on right now. There’s been a little headway in the field (though it’s not very popular); one researcher managed to build a system that let a computer learn poker. I’m attempting to generalize that to all card games – though the computer won’t be explicitly told the rules. It will have to figure them out through observation and feedback (“good move!”, “that was wrong!”, or by inferring that it’s farther away from its goal – after it figures out what the goal is, of course).
The interesting bit isn’t making a computer that can use probability to learn which predicates to use when; that can already be done – slowly, inefficiently, and with a good degree of error, but it can be done. The trick is formalizing what it’s learning and then generalizing it. This falls into a field of machine learning called Transfer Learning, which is the main area of the research project: given that the system learns from the ground up how to play a simple card game (say, one where the only rule is that you take turns playing a card), how can it recognize, when learning a new game, which previous rules still apply, which rules are similar, and which don’t apply at all?
Basically, to simplify to a reasonable problem: some card games involve gaining cards, and some involve losing cards. It may learn in its first game that gaining cards is best – now we start a new game where you want to get rid of your cards. How can it recognize, through observation of the game, that certain rules are still relevant (e.g. taking turns, shuffling, dealing, not stealing from an opponent’s hand) but not others (you don’t want to gain cards anymore!)?
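To make that concrete, here’s a minimal sketch of one way the rule-transfer bookkeeping could work (the names and numbers here are mine and hypothetical, not my actual system): the agent keeps a confidence score per rule hypothesis, softens those scores when a new game starts, and lets feedback in the new game confirm or overturn each carried-over rule.

```python
class RuleHypothesis:
    """A candidate rule the agent believes governs the current game."""
    def __init__(self, name, confidence=0.5):
        self.name = name
        self.confidence = confidence  # current belief that the rule applies

    def update(self, feedback):
        """Nudge confidence toward the feedback signal."""
        self.confidence += 0.1 * (feedback - self.confidence)


class TransferAgent:
    """Carries rule hypotheses from game to game instead of starting cold."""
    def __init__(self):
        self.rules = {}

    def start_new_game(self):
        # Soften every confidence toward 0.5: old rules are kept as
        # hypotheses, but the agent stays open to them being overturned.
        for rule in self.rules.values():
            rule.confidence = 0.5 + 0.5 * (rule.confidence - 0.5)

    def observe(self, rule_name, feedback):
        """feedback: 1.0 for 'good move!', 0.0 for 'that was wrong!'"""
        rule = self.rules.setdefault(rule_name, RuleHypothesis(rule_name))
        rule.update(feedback)


agent = TransferAgent()
for _ in range(20):                       # game 1: hoarding cards pays off
    agent.observe("gaining cards is good", 1.0)
    agent.observe("take turns", 1.0)

agent.start_new_game()
for _ in range(20):                       # game 2: shedding cards pays off
    agent.observe("gaining cards is good", 0.0)
    agent.observe("take turns", 1.0)

for name, rule in agent.rules.items():
    print(f"{name}: {rule.confidence:.2f}")
```

Run it and “take turns” stays near 1.0 across both games while “gaining cards is good” collapses toward 0 – exactly the selective carry-over described above.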
It turns out that saying “a human can do it” is a bit of an oversimplification. You have a lot of context a computer program doesn’t have. Children are good at learning rules, but infants aren’t. It takes a long time for a kid to internally formalize the specific methods of copycatting that allow them to recognize a good or bad move or figure out rules.
I personally am refraining from making any prediction about it; it’s just too hard to calculate, with too many variables. I’m waiting until we have an intelligent computer, and then I’ll ask it when the singularity will be – it should be able to figure it out real fast.
That’s part of why progress has been made. Another part of it is throwing huge amounts of processing power and data at the problems.
Let’s take the concept of genetic algorithms as an example. You need clever researchers to create a framework in which you can write and assess genetic algorithms. There has been a tremendous amount of study and progress via the processes you mention. But you also need computers that can simulate millions of individuals and progress through the generations.
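For anyone who hasn’t seen one, here’s a minimal sketch of what such a framework boils down to (a toy “one-max” problem; the names are mine, not any particular library’s): a population of candidate solutions, a fitness function, selection, crossover, and mutation, repeated over generations.

```python
import random

def fitness(individual):
    """Toy objective ('one-max'): count the 1-bits."""
    return sum(individual)

def evolve(pop_size=200, genome_len=50, generations=100, mutation_rate=0.01):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Crossover and mutation refill the population.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < mutation_rate)
                             for bit in child])
        population = children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually at or near genome_len
```

The clever part is the framework and the fitness function; the brute-force part is churning through pop_size × generations evaluations, which is where the processing power comes in.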
But what I’m getting at is that the system can now develop novel solutions to problems. Sometimes it’s quite hard to figure out why the evolved solution works; there’s a popular write-up of a great example. You’re simply not going to arrive at the evolved solution by a reasoned understanding of how logic gates work. It is an emergent property of the simulation.
Obviously, simulating a brain is not going to be an easy task. And Kurzweil relies on Moore’s law holding out, which itself requires constant iterative design and experimentation. I’m not trying to discount any of the work required to advance human progress. But there really are problems that can be solved without understanding how to get to the solution. If we can actually simulate a brain, and if computers keep getting faster, we might never have to understand how the human brain works in order to reach a Technological Singularity.
Thanks for that link. I’ve heard that story before, and have occasionally tried to repeat it (in a half-remembered form) in discussions of molecular biology when somebody wants their particular system to make some kind of goddamn sense. I admit that desire is very tempting, especially when we can make useful analogies between biological systems and discrete logic. But those analogies are limited.
Like, say, with the human brain. To get back on topic, I do not believe that we can ever replicate the brain just by throwing more computing power at the problem. The brain is a massively parallel, analogue, stochastic, and chaotic system, and it is definitely not a Turing machine. A thousand times more computing power wouldn’t let us predict the weather much better than we can now, so why would we expect such power to magically become intelligent, or even conscious? Yet the Futurists all seem to expect that increasing transistor densities will lead inexorably to The Singularity.
Is “the Singularity” now shorthand for boot-strapped self-aware AI emergence? Because back when I gave a damn about following such things, it was just the point where technological progress went ballistic and unpredictable.
Yes, those are useful techniques (I use them to evolve neural networks in creatures living in a 2D world), and they are and will continue to be part of the process of trying to create AI. I just don’t think they change the equation; they’re already known techniques in active use.
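As an illustration of what “evolving neural networks” can look like in miniature (a toy of mine, not the poster’s actual 2D-creature system): treat the weight vector as the genome, mutate it, and select on task performance – here, approximating XOR.

```python
import math
import random

def forward(weights, inputs):
    """Tiny fixed-topology net: 2 inputs -> 2 hidden (tanh) -> 1 output."""
    w = iter(weights)
    hidden = [math.tanh(sum(x * next(w) for x in inputs) + next(w))
              for _ in range(2)]
    return math.tanh(sum(h * next(w) for h in hidden) + next(w))

def fitness(weights):
    """Negative squared error on XOR, a task no linear net can do."""
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return -sum((forward(weights, i) - t) ** 2 for i, t in cases)

N_WEIGHTS = 2 * (2 + 1) + (2 + 1)  # two hidden units, then the output unit
population = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
              for _ in range(100)]

for _ in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    # Offspring are mutated copies of survivors; no crossover in this sketch.
    population = [[w + random.gauss(0, 0.2) for w in random.choice(survivors)]
                  for _ in range(100)]
    population[:20] = survivors  # elitism: the best stay unchanged

print(fitness(max(population, key=fitness)))  # approaches 0 as XOR is learned
```

No gradient descent anywhere; selection pressure alone pushes the weights toward a working network.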
Not quite, but it’s a common theme in Kurzweil’s predictions and other more enthusiastic visions of “The Singularity”. More than just “ballistic and unpredictable” progress, “The Singularity” is supposed to be progress beyond any human comprehension, which invokes some sort of AI emergence or transhumanism.
I don’t know why anyone would peg a specific date on when a fuzzy concept like ‘the singularity’ becomes reality. It will probably happen some day. I’ll go out on a limb and say either it happens sooner than we expected, or it takes longer than we thought. But don’t hold me to that.
I always thought that, almost by definition, the singularity would rush up on us increasingly quickly. So if it were only 5 years away, most people wouldn’t appreciate it.
I have not read it. Well, I looked it up and now I’m about halfway (17 pages) in. So far I mostly agree with it – though I lack the background to really have the context to evaluate all the claims he makes.
At this point in the article, I’m not sure if I’ve even gotten to his main argument so I’ll refrain from answering, but so far I reject the premise that we should step away from representation (though I’m not sure he’s arguing it so much as he’s still going over history).
Even in humans, everything is representation. I agree we should step away from the model of “the world and the brain, with a dummy body playing the intermediary”, but just because humans rely on hormonal and organic states in addition to brain and neurological states doesn’t, I think, mean we need to step away from representation. If anything, it shows that our representations are woefully inadequate and that we need to step back and model more than the brain. To reject the idea that intelligence can be achieved with representation is to reject the idea that a computer can be intelligent at all, because everything has to be represented in order for a computer (and, in my view, a human as well) to take any action.
But Pat Winston and the rest of the people at the MIT AI Lab did.
That was my point. Though all the projects AI people were working on 40 years ago succeeded, the bigger project of actually implementing AI has made very little progress. We read Minsky’s Frames paper, which was interesting but didn’t go anywhere. Is there even a theory of AI today? Not that I’ve heard.
Genetic algorithms are just a way of traversing a search space, like more traditional algorithms or similar techniques such as simulated annealing. Once in a while these techniques will turn up something new and interesting hiding in the search space, but for the most part they lose out to purpose-built algorithms big time. You can throw them at all sorts of new problems, but their life span is fairly short. And they have as little to do with true AI as chess does – which is yet another search problem, at least the way it usually gets implemented.
In fact, think of a GA that plays chess. It can be done - and once in a while it might come up with a brilliant game - but I think in general it would suck compared to targeted chess algorithms.
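For contrast, here’s roughly what a targeted game-tree algorithm looks like – on Nim rather than chess, since a toy has to fit in a comment, and this is my sketch rather than any particular engine. The point is that it searches the game tree exactly instead of evolving a player.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(pile):
    """Nim, normal play: take 1-3 stones; taking the last stone wins.
    Returns (score, best_move) from the current player's point of view."""
    if pile == 0:
        return (-1, None)  # no stones left: the previous player just won
    best_score, best_move = -2, None
    for take in (1, 2, 3):
        if take <= pile:
            score = -negamax(pile - take)[0]
            if score > best_score:
                best_score, best_move = score, take
    return (best_score, best_move)

# The targeted solver plays every position perfectly, no evolution needed:
for pile in range(1, 9):
    score, move = negamax(pile)
    print(f"pile={pile}: {'win' if score > 0 else 'loss'}, take {move}")
```

Chess engines are this idea plus decades of pruning and evaluation heuristics; a GA has to stumble across what this kind of search computes directly.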
I think one problem, philosophically, with AI is that once anything is sufficiently analyzed it loses its magic. We treat chess as a search problem, and I think that’s a perfectly valid way to represent it. There’s no reason to say it’s not a valid form of intelligence – a targeted, specific form of intelligence, but intelligence nonetheless. Of course, it’s orders of magnitude less intelligent than an adaptive, general AI, but no matter how intelligent you make something, as soon as it involves a specific technique and algorithm people become a bit disenchanted and more reluctant to call it “true AI.”
That said, chess has been a terrible field for AI, the horse has been killed, beaten, preserved, and used as a pinata.
I don’t think Voyager was saying evolving a solution can’t work. What he was saying is that an algorithm designed for a particular problem will typically beat out an evolved solution, or at least get there quicker. For any method of finding the solution, all possible solutions sit in an N-dimensional space. A good algorithm jumps right to one of the solutions, whereas a GA or other evolutionary method has to traverse some portion of that landscape, testing candidate solutions until a pretty good one has been found.
Nature’s only solution is evolution, so that’s what was used to produce us. But humans can use their cleverness to create shortcuts that zip past the process of evolving solutions, although, as you noted, there can be evolved solutions that beat our designed solutions.
We have two tools, evolution and cleverness/algorithms; nature has just one.
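To put the two tools side by side on the same toy problem (my example, with made-up numbers): finding the minimum of a quadratic, once by the calculus shortcut and once by mutate-and-select, a population-of-one evolutionary search.

```python
import random

def f(x):
    """The landscape: f(x) = (x - 3)**2 + 2, minimized at x = 3."""
    return (x - 3) ** 2 + 2

# Tool 1, cleverness: calculus jumps straight to the answer (f'(x) = 0).
x_clever = 3.0

# Tool 2, evolution: mutate a candidate, keep it only if it improves.
x_evolved, evaluations = random.uniform(-100, 100), 0
while abs(x_evolved - 3) > 1e-3:
    candidate = x_evolved + random.gauss(0, 1)
    evaluations += 1
    if f(candidate) < f(x_evolved):
        x_evolved = candidate

print(f"clever:  x = {x_clever} after 1 step")
print(f"evolved: x = {x_evolved:.4f} after {evaluations} evaluations")
```

Both land on the same answer; cleverness just skips the traversal.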