I am reading the book The Singularity Is Near, which is extremely interesting. The author argues that, due to exponential increases in technology, by the mid-2040s we will experience major revolutions that totally change organic life, driven in large part by AI. This makes me wonder about current software and how it relates to technological innovation, because I really don’t understand how it works. What can we scientifically accomplish with modern computer software?
The book reminded me of something I read in Popular Science a few years ago. An inventor working on Stirling engines put the laws of physics and his goal into a program and had it run through hundreds of thousands of possible designs until it came up with one that was 50% as efficient as the $50,000 model but cost only $100. My brother also told me that computer design programs come up with circuit board layouts far better than anything designed by humans. I have also read that humans catch only about 98% of spam emails when shown the subject heading, while computer programs catch closer to 99.5%. So as it stands, it seems computers can design some things more effectively than humans. How efficient and how pervasive is this kind of soft AI in 2006? Are we already at the point where you can tell a computer the goal, set the parameters, and have it find a way of getting there that is far more effective than anything a human mind can come up with? Or does that only apply to things where we can easily control and understand the parameters, like engineering, and not to more complex things like socialization, pharmacology, and sociology?
You seem to be promoting the successes of AI rather than the failures, which are basically everything else. Good AI is the Holy Grail of computer science, and it hasn’t budged all that much in 20 years. Computers got faster, so we can compensate with brute force and speed, but the results are abysmal compared to what was envisioned 40 years ago.
Some of the stuff you describe isn’t AI at all. A massive looping program that tries out every conceivable design of something with few parameters is just a dumb, brute-force solution to a physics problem. It does help with some designs, but that is hardly an AI solution. An AI solution would learn and come up with an elegant way to solve the problem using techniques that humans wouldn’t think of. That generally does not happen. Your circuit board example is not AI either. That is just using very specific rules to solve a large yet uncomplicated problem at a basic level.
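To make the distinction concrete, a "massive looping program" of the kind being dismissed here is just an exhaustive parameter sweep. A minimal sketch, with an entirely made-up scoring function standing in for a real physics model:

```python
import itertools

def score(bore, stroke, pressure):
    """Toy cost-vs-efficiency score; a made-up stand-in for a real physics model."""
    efficiency = bore * stroke * pressure / (1 + abs(bore - stroke))
    cost = 10 * bore + 8 * stroke + 2 * pressure
    return efficiency / cost

# Enumerate every combination of a few discretized design parameters.
bores = [round(0.1 * i, 1) for i in range(1, 21)]
strokes = [round(0.1 * i, 1) for i in range(1, 21)]
pressures = range(1, 11)

best = max(itertools.product(bores, strokes, pressures),
           key=lambda params: score(*params))
print(best)  # the single highest-scoring combination out of 4,000 tried
```

There is no learning anywhere in this: the program blindly tries every combination and keeps the winner, which is the point being made above.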
Computers are just really fast, dumb tools, and they are way less self-sufficient than your average John Deere tractor. I know because it is my job to keep big-company software functioning every day. Not a day goes by when it does that on its own, and that is why I have great job security.
Count AI as the biggest tech failure of the 20th century and it still may be a good contender for the 21st as well.
Absolutely not. That is how computers like Deep Blue do it, though, and that is why they aren’t AI as originally envisioned. No human has the capability to go into true looping mode and run through millions of different scenarios, piece by piece, every move.
Human chess players see patterns and subtle similarities from what they have seen before. They probably can’t describe most of it, and they sure don’t know how to put it into the types of firm rules a computer would use on its own. That is mainly because it isn’t based on firm rules. It is based on human-type fuzzy logic, and computers are terrible at it.
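The "looping mode" being described is essentially minimax game-tree search. A minimal sketch on a toy hand-made game tree (nested lists are choice points, integers are final scores) shows the mechanics that programs like Deep Blue scale up by raw speed:

```python
def minimax(state, depth, maximizing):
    """Exhaustively search the game tree; return the best achievable score."""
    if depth == 0 or isinstance(state, int):
        return state  # a leaf: the final score of this line of play
    scores = [minimax(child, depth - 1, not maximizing) for child in state]
    return max(scores) if maximizing else min(scores)

# A tiny hand-made tree: at each level the mover picks one branch.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, 2, True))  # prints 3: the maximizer's best guaranteed outcome
```

Note that nothing here resembles pattern recognition; the program simply visits every position, which is exactly why it needs millions of evaluations per move where a human needs a handful.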
Trouble is, humans are very poor instinctive judges of how tough various tasks are. Things that seem simple, like walking across a room, turn out to be incredibly complex; things that seem complex, like computing cube roots, turn out to be pretty simple. While the very best human chess players are still pretty much equal to the very best computer players, other games like checkers are close to solved, meaning that a checkers-playing program can play essentially perfectly and, no matter what you do, you will never beat it.
But of course, the real advantage to using computers is to get them to do the complex-seeming but actually simple stuff, while using humans to do the simple-seeming but actually complex stuff.
I agree with the posters who are not all that hot about AI the way it is right now. When AI started out, the ultimate goal was to learn how human brains work; that has not been achieved. What we got is a bunch of really clever statistical tricks that solve a very small subset of problems better than humans do and fail miserably at the rest. Example: any decent human Go player will easily beat any computer program, because unlike chess or checkers, Go is truly all about pattern recognition rather than recursive searching.
What I am wondering about is probably best described as “using computers to promote creative design” rather than AI, then. What you are describing as a looping program is what I’m interested in. With the Stirling engine, over 100,000 designs were tested virtually and the best one was picked. That is what I’m interested in: using computers to find creative solutions to things like engineering problems by examining billions of possibilities.
The AI field is divided into two approaches: weak, where software is trying to emulate intelligence, and strong, where scientists are trying to create truly intelligent (and the definition of that is as hard to pin down as the techniques themselves) systems. My company’s product is a commercial “weak” AI system that does a pretty good job of emulating natural language conversation. With enough content and a fairly narrow topic, one of our agents can be fairly convincing.
One of the major philosophical roadblocks to the idea of AI is that to be useful, a computer system should be deterministic; that is, for a given set of inputs, the same result should always be produced. Our system allows some variability in expression of the results (called a “result set” in our parlance), but a particular combination of states always causes the same result set to be selected. Human intelligence is not, as far as we can tell, deterministic. Indeed, a great deal of training goes into making sure people in particular disciplines react the same way to the same input, but that rarely works all the time. This suggests that strong AI is at odds with the utility of a computer system, and therefore something unlikely to ever replace traditional-style systems. That’s not to say that “true” AI systems won’t have some other uses, but they’re not going to start replacing the type of systems we have now.
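The determinism point can be sketched in a few lines: result-set selection is a pure function of the input state, and only the surface wording varies. (The state names and structure here are invented for illustration, not the actual product’s design.)

```python
import random

# Hypothetical mapping from a combination of states to a "result set".
RESULT_SETS = {
    ("greeting", "new_user"): ["Welcome!", "Hello, nice to meet you!"],
    ("greeting", "returning"): ["Welcome back!", "Good to see you again!"],
}

def select_result_set(states):
    """Deterministic: the same state combination always selects the same set."""
    return RESULT_SETS[states]

def express(states):
    """Variable only in surface expression, never in which set was selected."""
    return random.choice(select_result_set(states))

# Selection is repeatable even though the wording printed below may differ run to run.
assert select_result_set(("greeting", "new_user")) == select_result_set(("greeting", "new_user"))
print(express(("greeting", "returning")))
```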
I know what you are referring to and I actually gave it some thought myself last week.
My idea was a virtual wind tunnel that designed planes on its own for different purposes. We have reasonably good simulation software for wind tunnels and I thought you could just give it a rough idea about what a plane would look like and let it run through millions of designs to find the best ones.
It may work for some things but there are problems.
A computer may be able to come up with a good aerodynamic design, but how do we know it would be a practical airplane? You would have to tell it that, no matter what, the front has to be shaped so that a large enough windshield can fit, and the interior has to accommodate people in lots of different ways, from bathrooms to head space.
Even on a supercomputer, a simulation like this takes significant time. It may only be able to test, say, 100 designs a day, and there are millions of possible configurations.
To be efficient at these things, there would need to be some evolutionary logic so the computer could converge on the best design instead of just randomly testing things. We don’t have that right now.
With the Stirling engine there was intelligent design.
“The first computer run began with 100 designs for a Stirling engine. The design elements of the engines were chosen entirely at random. The program then selected the two designs with the best cost-to-efficiency ratio. “Then,” says Gross cheerfully, “We let the top two have sex.” The next run of 100 designs included the previous two “parents,” a third design that included a 50-50 blend of traits from both parents, and 93 genetic mutations—new designs with a mix of parental traits and new traits. A standard desktop computer, using Idealab’s software to winnow out 100,000 combinations overnight, could reach conclusions in a few hours. Last year the team settled on a design that is half as efficient as the Philips Stirling engine but can be manufactured for roughly $100.”
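The quoted procedure is a textbook genetic algorithm: random initial population, keep the two fittest, blend them, and generate mutants. A minimal sketch of that loop, where the fitness function, trait count, and mutation rate are arbitrary stand-ins, not Idealab’s actual model:

```python
import random

random.seed(42)  # make the run reproducible

N_TRAITS = 8  # each design is a list of numeric traits

def fitness(design):
    """Stand-in for the cost-to-efficiency ratio; higher is better."""
    return -sum((t - 0.7) ** 2 for t in design)

def mutate(parent_a, parent_b):
    """Child takes each trait from one parent, or occasionally a brand-new value."""
    child = []
    for a, b in zip(parent_a, parent_b):
        r = random.random()
        child.append(a if r < 0.45 else b if r < 0.9 else random.random())
    return child

# 100 designs with traits chosen entirely at random, as in the article.
population = [[random.random() for _ in range(N_TRAITS)] for _ in range(100)]

for generation in range(50):
    top_two = sorted(population, key=fitness, reverse=True)[:2]
    blend = [(a + b) / 2 for a, b in zip(*top_two)]    # the 50-50 blend of traits
    mutants = [mutate(*top_two) for _ in range(97)]    # parental traits + new ones
    population = top_two + [blend] + mutants           # next run of 100 designs

best = max(population, key=fitness)
print(round(fitness(best), 4))  # fitness only improves, since the parents survive
```

Because the top two designs are carried into every generation, the best fitness can never get worse; the mutants are where the lucky improvements come from.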
Not to be disagreeable, as I agree that current AI doesn’t meet the expectations laid out in years past, but it seems to me that some goalposts are being moved. In other words, you seem to be expressing the common view “if we’ve implemented it, it’s no longer AI”. Basic search techniques (along with pruning, etc.) are the foundation of AI. Put even another way, “good old-fashioned AI” is still AI, even if it doesn’t (yet) meet human-like performance or works differently to some degree. Similarly, genetic algorithms, expert systems, theorem provers, etc. are all (extremely) successful AI techniques; I don’t see how that can be denied. I’m always struck by statements of AI failure; it seems to me that the failure lies more with the ill-defined expectations than with the AI itself.
But that’s incidental to Wesley Clark’s interests and is mostly a mini-rant on my part, for which I apologize. A more GQ suitable answer: seems to me that genetic algorithms, constraint satisfaction, and case-based reasoning are areas you want to look into. You also might want to follow some of the links from AAAI’s machine learning page or peruse their site map to focus on a particular topic (perhaps under the “Applications” section?). If it’s made it to AAAI’s website, it’s not really bleeding edge, but at least you’ll be guaranteed that it’s solid and viable research.
I read some snippets of Ray’s book, the part where he does some math regarding the rate of increase in our computational power and relates it to the power of a human brain by counting neurons and connections.
He completely left out the glial cells (more numerous in the brain than neurons, if I remember correctly), which neuroscientists are now finding do indeed play a part in brain function and computation (they previously thought their role was much more localized and unimportant). He also left out the need to account for chemical signaling throughout the brain (for example, neurons send other neurons chemical signals telling them to grow dendrites in their direction) and the complex protein production within neurons (hundreds of proteins are produced in a complex signaling mechanism during a single firing of a neuron). All of this leads me to seriously question the thoroughness of his calculations.
Maybe his ideas are right, but 2040 seems very unrealistic.
Yeah. I think his overall idea is right, but he seems like too much of an optimist. He was born in 1948 and really wants to be around to watch the singularity unfold, so he is probably pushing dates ahead unintentionally. He says that by 2010 people will have access to augmented reality and that functional nanotechnology will exist in the 2020s. Neither sounds very realistic to me; more like 2020 and 2040 for those innovations.
The truth is that nobody knows how humans think about things and solve problems. Given the number of neurons in a human brain and their rate of fire, it is entirely possible that the human brain uses massive brute force techniques to solve problems.
I went to graduate school in behavioral neuroscience, and I now work on large-scale business intelligence systems in pure IT, which isn’t related. I will concede the first sentence and will be the first to promote the idea among the general public. Too many people assume we have a much better grasp of how the brain works than we really do. That science is still very primitive as an overall understanding. People also tend to think that computers are roughly similar to a human brain, and that cannot be farther from the case. They are fundamentally, extremely different and are only as good as their programmers can be at minute detail. It can be very frustrating for me to have to tell a system that costs millions of dollars that it shouldn’t fail because line 256788 contained a comma rather than the semicolon it was expecting, even though everything else in a massive file was correct. I may be able to fix that specific problem with some programming logic, and I often do, but it is akin to a lawn mower that will stop dead if it passes over one blade of grass that falls outside of what it was designed to expect.
I suppose it is possible that the brain is computing all alternatives subconsciously. I have no way of disproving that, and neither does anyone else. From what I know, though, the brain is very analog. I don’t even think that is a good term; it is just different in a way that we don’t understand. Computers simulate it poorly at this point and use very different strategies when we try.
We are still a very long way from a machine which can think as well as a human on the full range of subjects humans think about. So if that’s your definition of AI, then yes, AI has (probably) been a miserable failure. However, while we haven’t gotten that yet, we have gotten computers to do a heck of a lot of useful things. Call it what you like, but computers have enabled us to do many things which would otherwise have been prohibitively difficult or even impossible, and one of the things they’ve enabled us to do is to design progressively better computers, which to some degree reinforces the trend of advancement.
I wouldn’t put it quite like that. Walking across a room, or recognizing a face, is a very easy task… for a human. And taking roots of high-order polynomials is easy… for a computer. But many of the “easy” calculations which computers nowadays routinely do would have been impossible without a computer: either you did without an answer, or, if you absolutely needed an answer, you had a team of mathematicians figure out what approximations could be made, and then handed the approximated problem over to roomfuls of low-paid grad students or women to grind through weeks of calculation by paper and pencil. I think I agree with Lemur866’s conclusion, though, that this collaborative effort between humans and machines can accomplish more than either separately.
Shagnasty, while I wouldn’t disagree that much of what you say is correct, you seem to be forgetting the substantial progress made with neural networks. Unlike algorithmic programming (where you get comma-vs-semicolon type problems), neural networks have been used successfully to solve problems where the input information is not always complete, and they avoid the problem of having to anticipate in the program every path that is possible.
As for computing all possibilities subconsciously: I think that can be ruled out with math; there just isn’t enough time for our brains to do that for complex problems.
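The back-of-envelope version of that math: even a generous upper bound on the brain's raw event rate falls absurdly short of exhaustive search on something like chess. (These are rough, commonly cited orders of magnitude, not precise measurements.)

```python
# Rough, commonly cited orders of magnitude.
neurons = 1e11            # neurons in a human brain
max_firing_rate = 1e3     # generous upper bound on firings per neuron per second
seconds_per_century = 100 * 365 * 24 * 3600

# Absolute ceiling on "events" a brain could produce in a hundred years.
brain_events = neurons * max_firing_rate * seconds_per_century  # roughly 3e23

# Chess: about 35 legal moves per position, about 80 plies per game.
chess_positions = 35 ** 80  # well beyond 1e100 lines of play

print(f"brain ceiling: {brain_events:.1e}, chess tree: {chess_positions:.1e}")
assert chess_positions > brain_events ** 4  # not remotely close, even to the 4th power
```

So whatever the brain is doing when a grandmaster picks a move, it cannot be enumerating the possibilities one by one.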
I think you are not giving enough credit to those areas where there has been much success: speech recognition, face recognition, spotting patterns in financial markets, etc. Neural networks are clearly far simpler than what is going on in the brain, but they are understood by mathematicians (to a degree), and they perform orders of magnitude better than traditional programming on these problems.
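A minimal illustration of learning from examples rather than hand-coded rules: a single perceptron, the simplest neural network, picks up a rule (here the logical OR function) purely from labeled samples. This is a toy, nothing like a production recognition system.

```python
# A single perceptron: learn weights from examples instead of writing if/then rules.
def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1     # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the OR function purely from its four examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train(samples, labels)

predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for x1, x2 in samples])  # matches the labels
```

Nobody told the program what OR means; the rule emerged from repeated small corrections, which is the sense in which networks sidestep comma-vs-semicolon brittleness.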
There’s really no way to know this for sure either. It is a property of exponentially increasing quantities that most of the change takes place at the 11th hour. Computing power seems to be increasing in that fashion, and breakthroughs are a surprise almost by definition. So AI could be just around the corner.