“Impossible to program a computer to operate within a non-distinct parameter” – that’s just nonsense. Think of fuzzy sets (Fuzzy Logic). Think of analogies.
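A fuzzy set is exactly a machine-friendly way to "operate within a non-distinct parameter." A minimal sketch in Python, with arbitrary illustrative breakpoints:

```python
# A fuzzy set: "tall" as a graded membership function rather than a
# crisp yes/no predicate. The breakpoints (160 cm, 190 cm) are
# arbitrary illustrative choices, not standard values.

def tall_membership(height_cm: float) -> float:
    """Degree (0.0-1.0) to which a height counts as 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the breakpoints

print(tall_membership(150))  # 0.0 -- definitely not tall
print(tall_membership(175))  # 0.5 -- somewhat tall
print(tall_membership(195))  # 1.0 -- definitely tall
```

The point is that the program manipulates degrees of membership, not binary categories, which is precisely computing with a non-distinct parameter.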
If you think of the bird's nest building as emergent behavior, in which repetition of simple actions produces an apparently complex result, then not only is it quite possible to conceive of a machine algorithm for replicating the nest (you have to assume a very competent perceptual apparatus), but it also reduces the bird's apparently intelligent behavior to something more script-like and mechanical.
As for the “Chinese Room” argument, it doesn’t really keep AI researchers up at night thinking “Alas, I’ve wasted my life.” There are many, many weaknesses to Searle’s argument, one of which is whether the mechanical procedure he postulates is even remotely possible, or could in any way produce responses that would convince a native speaker. Same thing with most of the other prominent objectors to AI (cf. Penrose).
As for whether brute force problem solving is equivalent to intelligence – well, yes and no. Yes, a Grand Master in chess doesn’t think as far ahead as the computer, but this may just be replacing brute force with pattern-matching against a set of previously played games. Which is more intelligent – having to spend ten years doing nothing but playing chess in order to have a big enough set of previously played games to pattern match against, or to know the rules of the game and be fast enough to generate the right moves on the fly?
The basic problem is that in real life, almost all interesting problems are too large or have too many unknown variables to be solved using brute force. So humans use what has been termed “satisficing” problem solving – they try to get an answer that is “good enough” – one that produces an acceptable result in an acceptable amount of time. There are a lot of interesting issues involved in this kind of problem solving – do you think for a long time and then come up with an answer or do you generate a possible solution and then refine it?
The point to this longish digression is that once you get into satisficing behavior, you have to start thinking on the meta-level, that is: “How good a solution do I need?”, “How long am I willing to think about this issue?”, “If I guess wrong, what are the consequences?”. You can see how this kind of meta-level processing would give rise to something that could conceivably be termed conscious behavior because you have to “think about the thinking that you are doing”.
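Those three meta-level questions map directly onto the parameters of an anytime, "good enough" search. A toy sketch in Python (the problem, thresholds, and budget are made up purely for illustration):

```python
import random
import time

def satisfice(candidate_gen, score, good_enough, budget_s):
    """Anytime search: keep the best answer seen so far, and stop as
    soon as it is 'good enough' or the thinking budget runs out."""
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    while time.monotonic() < deadline:
        c = candidate_gen()
        s = score(c)
        if s > best_score:
            best, best_score = c, s
        if best_score >= good_enough:
            break  # meta-level decision: this answer will do
    return best, best_score

# Toy problem: find an x near 7 (score peaks at x == 7).
result, quality = satisfice(
    candidate_gen=lambda: random.uniform(0, 10),
    score=lambda x: -abs(x - 7),
    good_enough=-0.1,   # "How good a solution do I need?"
    budget_s=0.5,       # "How long am I willing to think about this?"
)
```

Note that `good_enough` and `budget_s` are exactly the two knobs the meta-level has to set; "if I guess wrong, what are the consequences?" governs how you choose them.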
I’m sure that a computer can be built to “solve chess,” as you say, without requiring all the matter in the universe. If it can crunch numbers ten moves out, why not 50 or 100? (I know it increases exponentially.) There is probably a practical limit on how many moves Deep Blue should calculate into the future. It’s all based on probability, and after some number of moves there are probably diminishing returns on accuracy.
IIRC, this was also Deep Blue’s failing. When Kasparov defeated it, didn’t he start throwing in random irrational moves that threw off the computer?
Computers are great at crunching numbers and recognizing patterns. They suck at creative thought.
Well, unless you’re using some metaphor in there. What if I mean that the bleak (colorless) ideas common to the ‘green’ movement (green ideas) are not currently in the public eye, but are set to break out into the public consciousness soon (sleep furiously)? One of the problems with computer understanding of language is that you can take an apparently nonsensical sentence like ‘colorless green ideas sleep furiously’ and actually mean something with it. IMO, we won’t have good machine translation until someone comes up with something that really qualifies as artificial intelligence.
I mean, just figuring out what a word means in context is not exactly easy. While I’m sure anyone here can understand what is being said in the following paragraph, it’s going to throw off anything that tries to use simple contextual tricks since ‘bow’ gets used different ways even in a single sentence.
“Take a bow,” said the stage manager of the Titanic II’s dinner theater, as the actor playing Robin Hood picked up his bow and Maid Marian adjusted the bow in her hair. Meanwhile, Igor Violinski readied his bow for “Flight of the Bumblebee,” as Aunt Ethel tied a neat bow on the last of her Christmas parcels, and faithful Fido at her feet remarked “Bow wow.” Alas, dear reader, none dreamed of the iceberg looming ominously off the starboard bow…
“Theoretically, given a fast enough computer you could ‘solve’ chess, that is compute all of the possible moves…”
And, like a human, that computer wouldn’t need much data at all once the game was solved. It’s possible that when someone opens the board by moving the pawn out two spaces in front of their king, they’ve automatically lost. There only needs to be one memory for this move.
Once a computer calculates chess to the last ‘draw point’ (after having solved the whole game), it can dump the rest of the data.
Let’s say that there are no possible combinations in ten rounds in which someone could ‘win’ – meaning that either the computer wins (if the human opponent makes a certain losing move in those first ten rounds) or the computer forces a draw. All the data can be stored in compressed form, to be ‘unzipped’ on demand, and not take much coding or processing on the computer’s part.
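This “solve it once, keep only the outcomes” idea can actually be seen in a game small enough to solve outright. A sketch in Python, using tic-tac-toe as a stand-in for chess (the memoized table of position values is all you need to keep once the search is done):

```python
from functools import lru_cache

# Solving a tiny game (tic-tac-toe) by exhaustive search with
# memoization. Once solved, each reachable position maps to a single
# outcome (-1 loss, 0 draw, +1 win for the player to move); the
# search tree itself can be thrown away.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Negamax value of the position for `player`, who is to move."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # draw
    other = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i+1:]
            best = max(best, -solve(child, other))
    return best

print(solve("." * 9, "X"))  # 0: perfect play is a draw
```

The cached table ends up holding only a few thousand positions, a tiny fraction of the number of game sequences explored, which is the compression the poster is gesturing at. Whether chess admits anything comparably small is, of course, the open question.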
Hmm… on the general topic: I genuinely feel pain for the poor soul who solves the coding for the internal sense of self and actually applies that knowledge toward the creation of a computer that possesses an internal sense of self. Hit them over the head with a stick and tell them they just solved omniscient AI.
I personally find it incomprehensible that someone would have the mental acuity to build such a machine without realizing the real impact of the solution. Coding for human intelligence is framed on limitation; the reverse process makes for a machine not aware of itself yet capable of answering any question on demand. Umm… you do the math: have sex to create kids, or slave over a ‘computer kid’? I vote for sex and omniscient AI!
Oh… and on that Chinese thing… like what’s up with that?!!
Obviously, a computer put to that test would undergo stages of human development, from dumb-as-a-doorknob birth onward, parented by lab researchers with the aid of applied biological evolutionary algorithms. The artificial form would have as much chance as the rest of us. Once you’ve tapped and isolated that process, the package or container is not particularly relevant to the mechanism of cognitive evolution.
[quote]
Without going into too much detail, a chess game is a specific sequence of moves out of the set of all the possible moves. There are approximately 35^100 legal options for a chess game…Computers don’t mind trundling through wads and wads of mechanical calculations at all… but unfortunately they can’t: it takes way too much time. Assuming you can process…2 million moves per second, just like Deep Blue, the universe would end by the time you finished (i.e. 4.1477e+137 millennia away!)
[/quote]
—I’m sure that a computer can be built to “solve chess” as you say without requiring all the matter in the universe.—
I’m not totally sure about this, but the number of “game states” that would have to be calculated might well exceed even the most optimistic estimate of how all the matter in the universe could be arranged into a machine that would calculate it. However, with the possible advent of quantum computing, it might seem more realistic, though still rather unnecessary, given that such good algorithms can be created.
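For what it’s worth, the quoted figure is easy to sanity-check, taking the quoted 35^100 game lines and Deep Blue’s quoted 2 million moves per second at face value:

```python
# Back-of-the-envelope check of the quoted figure: 35^100 candidate
# game lines examined at 2 million moves per second.

positions = 35 ** 100            # quoted estimate of legal game lines
moves_per_second = 2_000_000
seconds_per_year = 365.25 * 24 * 3600

years = positions / moves_per_second / seconds_per_year
print(f"{years / 1000:.3e} millennia")  # on the order of 1e137 millennia
```

So the order of magnitude in the quote holds up, regardless of the exact constants you assume for move rate or year length.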
True enough. But that’s an extremely complex amount of abstraction.
There are two schools of thought on AI. One says that modeling the hardware of the brain (neurons) is the best approach (bottom-up), the other says modeling the information is the way to go (top-down).
You could, in principle, model every neuron in an animal brain, either in software or hardware or a combination. The problem then becomes how to program, or initialize, each neuron. We don’t yet have the technology to take a reading of more than a few dozen neurons in an actual living brain, so it’s not an easy task to simply dump the contents of a brain into a simulation. The second problem is how to simulate all the sensory inputs to the simulated brain. You can see that this is a very nontrivial thing.
Approaching the problem from the other direction, you have the problem of how to represent information (sensations, ideas, memories, etc.).
There has been some progress in both of these approaches over the last 40 years or so, but the problem is far from solved.
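To make the bottom-up approach concrete, here is a minimal sketch of the standard toy neuron model, a leaky integrate-and-fire unit. All the constants are illustrative, not biologically calibrated:

```python
# A single leaky integrate-and-fire neuron: the membrane potential
# leaks toward rest, integrates input current, and fires (then
# resets) when it crosses a threshold.

def simulate_lif(input_current, dt=1.0, tau=10.0, threshold=1.0):
    """Return spike times for a neuron driven by a list of input
    current samples, one per time step of length dt."""
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)  # leak plus integration
        if v >= threshold:
            spikes.append(step * dt)
            v = 0.0  # reset after firing
    return spikes

# Constant drive: the neuron fires at a regular rate.
spikes = simulate_lif([0.2] * 100)
print(spikes)
```

Even this caricature shows why initialization is the hard part: the interesting behavior lives in the parameters and the wiring between units, which is exactly the data we cannot yet read out of a living brain.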
I figured it was a big number but not that big. Of course the computer can also disregard a significant number of “dead end” or illogical moves but that’s not the point. I think we can safely say that brute force is not the road to AI.
I am trying to get my head around the practical value of AI as something other than a laboratory curiosity. Do we really want manufacturing robots that get happy when they are productive, or smart bombs that contemplate their role and purpose in the universe on their way toward their target?
I think having the universe undergo heat death would preclude finishing. No computer scientist would ever advocate brute force search as the only road to AI, but you can certainly combine the speed of computers with various forms of heuristic pruning to perform many kinds of search tasks.
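The heuristic pruning mentioned above can be sketched with alpha-beta search, which returns the same value as plain minimax while visiting far fewer nodes. The game tree here is randomly generated, purely for illustration:

```python
import random

random.seed(42)

def make_tree(depth, branching):
    """A toy game tree: nested lists with random leaf values."""
    if depth == 0:
        return random.uniform(-1, 1)
    return [make_tree(depth - 1, branching) for _ in range(branching)]

visited = 0  # global node counter, reset before each search

def minimax(node, maximizing):
    global visited
    visited += 1
    if not isinstance(node, list):
        return node
    vals = [minimax(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, maximizing, alpha=-2.0, beta=2.0):
    global visited
    visited += 1
    if not isinstance(node, list):
        return node
    if maximizing:
        v = -2.0
        for c in node:
            v = max(v, alphabeta(c, False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return v
    v = 2.0
    for c in node:
        v = min(v, alphabeta(c, True, alpha, beta))
        beta = min(beta, v)
        if beta <= alpha:
            break
    return v

tree = make_tree(depth=8, branching=4)
visited = 0
full = minimax(tree, True)
n_full = visited
visited = 0
pruned = alphabeta(tree, True)
n_pruned = visited
print(full == pruned, n_full, n_pruned)  # same value, far fewer nodes
```

This is still brute force at heart, just brute force that refuses to examine lines it can prove are irrelevant, which is the general shape of how computer speed gets combined with search heuristics.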
As for the practical value of AI, I’m afraid that the limit here is your imagination rather than lack of usefulness. How much do you think it would be worth to a parent to have a computer/camera combination that could monitor their toddler playing in the yard while they did household tasks? A system that would ignore the cats chasing each other around the room, but could identify intruders? Think a farmer would like little automated bots that would wander through the fields picking weeds and squashing bugs but leaving the crops alone?
A bit nearer in the future, how about a spam filter that worked 100% of the time – deleted all the junk email and never threw out stuff you wanted to look at?
(Note that we don’t have to get into Twilight Zone scenarios about AI entities displacing people from their jobs – there are a lot of tasks that require some degree of intelligence but are just so low-level that no one wants to do them – oh, think sorting bottles at a recycling plant.)