Re: Neural nets - I used to write NN software for optical pattern recognition tasks. It's been a while, but I still have some familiarity with them. The sort I wrote were called N-layer perceptrons, which basically describes how the neurons were hooked up.
There are quite a few different NN architectures suited to different tasks. NNs are generally geared towards pattern recognition. This may mean things like OCR, but it can also include a lot of other things that might not intuitively seem like pattern recognition. They do well at exactly the sort of task where traditional procedural programming is weak.
However, there's no magic in neural nets. There's a popular misconception that if you just build an NN with 57 zillion neurons and turn it on, you get an intelligent computer. But it ain't so.
In almost all the tasks NNs are suited for, the real "art" is in distilling the problem down into something the NN can deal with. For instance, OCR programs put a lot of work into pre-processing to properly segment characters, normalize sizes, and so on - only after all that do you feed the data to the NN. The hard part is generally the stuff you have to do to the data before letting the NN chew on it. And interestingly, that pre-processing is also the primary source of recognition errors in most NN-based systems.
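To make that concrete, here's a rough sketch (in Python, which is not what I wrote mine in) of the kind of pre-processing that happens before the net ever sees a pixel. The thresholding and column-based segmentation here are deliberately crude, and the `trained_net` at the end is hypothetical:

```python
import numpy as np

def preprocess(page, char_size=16):
    """Crude pre-processing pass: binarize the scanned page, segment it
    on blank columns, and normalize each character to a fixed size.
    Only after all this would the data go to the NN."""
    binary = (page < page.mean()).astype(float)   # assume dark ink on a light background
    ink_cols = binary.sum(axis=0) > 0             # which columns contain ink
    chars, start = [], None
    for x, has_ink in enumerate(ink_cols):        # split on runs of blank columns
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            chars.append(binary[:, start:x])
            start = None
    if start is not None:
        chars.append(binary[:, start:])
    out = []
    for c in chars:                               # nearest-neighbour resize to char_size x char_size
        rows = np.arange(char_size) * c.shape[0] // char_size
        cols = np.arange(char_size) * c.shape[1] // char_size
        out.append(c[np.ix_(rows, cols)].ravel())
    return np.array(out)

# only now does the net get involved (trained_net is hypothetical):
# predictions = trained_net.predict(preprocess(scanned_page))
```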
But even though there's no magic here, there are some really fascinating similarities between how NNs work and how human memory works. In NNs, adding new information tends to damage old information unless you go back and re-train the net on the old stuff. Similarly with humans: disused memories tend to fade unless you go back and think about them again. There's a lot that's similar at a very low level.
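You can see that interference effect even in a toy example. Here's a minimal sketch (a single logistic unit standing in for a real net; the two tasks and all the numbers are made up purely for illustration): train it on one pattern-separation task, then train it on a second task with no rehearsal of the first, and its performance on the first task typically collapses.

```python
import numpy as np
rng = np.random.default_rng(0)

def make_task(center0, center1, n=200):
    """Two Gaussian blobs to separate: a toy 'pattern' for the net."""
    X = np.vstack([rng.normal(center0, 0.3, (n, 2)),
                   rng.normal(center1, 0.3, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train(w, b, X, y, epochs=200, lr=0.5):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid output
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0) == y).mean()

Xa, ya = make_task([-2, -2], [-2, 2])   # the "old" memories
Xb, yb = make_task([2, 2], [2, -2])     # the "new" information

w, b = np.zeros(2), 0.0
w, b = train(w, b, Xa, ya)
print("task A after learning A:", accuracy(w, b, Xa, ya))
w, b = train(w, b, Xb, yb)              # no re-training on task A
print("task A after learning B:", accuracy(w, b, Xa, ya))  # typically drops sharply
```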
Just found this thread… Cool. One of my fave subjects.
My thoughts on the question: I don't think it is impossible that we might one day see thinking machines... though I do think it is improbable. I'm pretty sure that computers that think will not be based on the technologies currently at our disposal. I think that artificial neural networks (ANNs) show the most promise, but not in the way they are being developed for applications today (more on that later). I believe that any thinking machine must have the ability to learn, i.e. it can't be "programmed" to think. One of the key learning abilities is reconfigurability of the net. Not merely the tuning of the weights in today's ANNs, but something akin to brain plasticity in living organisms. Think of an ANN where individual neurons can change what layer they are effectively in by changing their connections to other neurons.
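To be clearer about what I mean by reconfigurability, here's a bare data-structure sketch (nothing in it actually learns, and the whole interface is hypothetical): the net is just a directed graph, and a neuron can be rewired to take input from anything, which effectively moves it to a different layer of the computation.

```python
import random

class PlasticNet:
    """Toy sketch of structural plasticity: connections themselves can
    be grown and pruned, not just re-weighted."""
    def __init__(self, n_neurons):
        # incoming[n] maps source-neuron id -> connection weight
        self.incoming = {n: {} for n in range(n_neurons)}

    def grow(self, neuron, source, weight=None):
        # connecting to a different source can place `neuron` at an
        # effectively different layer of the computation graph
        self.incoming[neuron][source] = (
            weight if weight is not None else random.gauss(0.0, 0.1))

    def prune(self, neuron, source):
        self.incoming[neuron].pop(source, None)
```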
You didn’t ask this question, but I’ll answer it anyway: I don’t think that artificial intelligences will ever experience consciousness.
"The Emperor's New Mind" was good, but "Shadows of the Mind" was better. Penrose goes on to reinforce many of his premises and highlights more recent studies that support his theories, including the theory that consciousness is related to quantum mechanics in microtubules. I'm not saying that I fully believe the theory, but I'm also not discrediting it. I'm not sure what credible neurologists believe (as bantmof alluded to), but it seems to be the best theory going for the moment and there's certainly some evidence to support Penrose's theories.
surgoshan wrote:
Of course, there's considerable effort being applied to develop bio-chips (not without some successes), so be careful with your assumption that computers and artificial intelligences will always be built on silicon.
This is only partially true. First, ANNs are grossly incomplete architecturally when compared to biological neural nets (BNNs). Also, most ANNs require back error propagation mechanisms in order to learn, whereas BNNs apparently don't (at least, we haven't discovered any back propagation mechanisms yet). Then there is the problem of complexity. The most sophisticated ANNs we could theoretically build today fall significantly below the raw neuron count of your average housefly, and we don't have a clue how to put that many artificial neurons together in a meaningful way. Human neuronal complexity is more than a million times greater, so we've got a long way to go on the technology learning curve.
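For anyone who hasn't seen what "back error propagation" actually looks like, here's a minimal Python sketch of a two-layer net learning XOR; the key lines are the ones where the error at the output is pushed backwards through the weights. As far as anyone can tell, nothing in a BNN carries an error signal backwards like this. (With these settings it usually converges, but it's only an illustration.)

```python
import numpy as np
rng = np.random.default_rng(1)

# XOR: the classic problem a single-layer perceptron can't learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sig(X @ W1 + b1)                     # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # error at the output layer...
    d_h = (d_out @ W2.T) * h * (1 - h)       # ...propagated *backwards* to the hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2))                           # usually close to [[0],[1],[1],[0]]
```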
pmh wrote:
That all depends on your definitions. I believe the two are quite separate. In other words, I’m quite willing to attribute intelligence to something without requiring that it be sentient.
Perhaps you have it backwards. Perhaps intelligence must come first, and from that sentience can evolve. This makes more sense to me from an evolutionary standpoint.
On the subject of human errors in the decision process: I think there are quite a few root causes, a few of which have already been mentioned. Certainly there may be random inconsistencies in the BNN - parity errors, if you will. These probably manifest themselves mostly as pattern matching errors that trigger inappropriate response mechanisms. Also, there's the phenomenon of selective reasoning. We expect things to happen because we would like them to happen, and we make poor judgements based on these expectations... for instance, an intelligent machine would never play the lottery because it would deduce that winning is unlikely - clearly humans don't suffer from these kinds of logical constraints. BTW, I do not believe that an intelligent machine has to make poor judgements to be intelligent. I don't believe human errors are related to intelligence, per se. However, I do believe that human errors are key to our ability to invent new things - and not just errors in deed, but errors in thought as well. What we often call "out of the box" thinking is, I think, a product of misassociation (weak pattern matching). Again, I don't think this is necessarily requisite for an intelligent machine.
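To put an actual number on the lottery point (the game and payouts here are made up, just to show the kind of deduction the machine would make):

```python
from math import comb

# a hypothetical pick-6-of-49 lottery, $1 per ticket, $5M jackpot
odds = comb(49, 6)                        # 13,983,816 possible tickets
expected_value = 5_000_000 / odds - 1.0   # jackpot * P(win) minus the ticket price
print(f"1 in {odds:,}; expected value per ticket: ${expected_value:.2f}")  # about -$0.64
```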
Finally, bantmof writes:
Agreed, but I also want to point out something else. ANNs share some of the same redundancy qualities as BNNs, i.e. they have a sort of fault tolerance built in that allows them to still perform at surprising levels in spite of damaged neurons or connections.
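That fault tolerance is easy to demonstrate on a toy problem. Here's a sketch (the task and all the numbers are invented, nothing rigorous): train a net with one deliberately wide hidden layer, then "damage" it by zeroing out random hidden units, and the accuracy typically degrades gracefully rather than falling off a cliff.

```python
import numpy as np
rng = np.random.default_rng(2)

# toy task: classify points by which of two Gaussian blobs they came from
X = np.vstack([rng.normal(-1, 0.5, (300, 2)), rng.normal(1, 0.5, (300, 2))])
y = np.concatenate([np.zeros(300), np.ones(300)])

# one wide hidden layer, so the representation is spread over many units
W1 = rng.normal(0, 1, (2, 64))
W2 = rng.normal(0, 0.1, (64, 1))
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(500):                          # plain batch gradient descent
    h = np.tanh(X @ W1)
    p = sig(h @ W2)[:, 0]
    d_out = (p - y)[:, None] / len(y)         # logistic-loss gradient at the output
    W2 -= 2.0 * h.T @ d_out
    W1 -= 2.0 * X.T @ ((d_out @ W2.T) * (1 - h**2))

def acc(kill_fraction):
    """Accuracy after 'destroying' a random fraction of the hidden units."""
    mask = rng.random(64) > kill_fraction
    h = np.tanh(X @ W1) * mask
    return ((sig(h @ W2)[:, 0] > 0.5) == y).mean()

for f in (0.0, 0.25, 0.5):
    print(f"{int(f*100)}% of hidden units destroyed -> accuracy {acc(f):.2f}")
```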
One more thing that's been bugging me lately: a phenomenon I call "recall visualization". I'm sure it's required for consciousness, but I believe it's also required for intelligence. When we remember something, we rebuild a visual representation of it from our memories. This visualization can then be externalized - for instance by drawing a picture from memory, or walking through a dark room without bumping into furniture. Where or how this visualization is rendered in the mind is not clear to me, but I think it's safe to say that while ANNs can pattern match and correctly identify pictures, there's no way to get that picture back out of the ANN. This is another reason why I think we are still lacking something fundamental for the building of intelligent machines.
There's a corollary to "recall visualization", but this post has already gotten too long, so I'll save it for later.
I agree, but if you want to make an AI that "seems" human, it needs to have some capacity for making errors. If you want to build an AI that is simply intelligent then, of course, there is no need for making errors.
I believe that the original question was related more to the former than the latter (although my BI could be mistaken).
Spoons, I agree that random numbers are not necessary for human-like intelligence. I was pointing out how painfully limited a computer is without human input of any kind. Whether humans make entirely random choices is a matter for the determinism vs. free will debate. If we assume determinism, then yes, you're right: humans are just very complex number generators that are predictable given enough information. If humans have free will, I don't think anyone here has enough information to decide whether it is even possible to make a computer have a truly random thought.
A question for everyone:
What test would you use to PROVE a computer can think like a human?
I don't think there's any way to PROVE that a computer is thinking like a human. The Turing Test comes the closest that I know of to a potentially conclusive test.
I personally would accept a simpler (more limited) set of tests, though. Some basic psychological tests ought to work, like showing a series of pictures and asking the computer what happened, asking what's next in a sequence, or describing ink blots.
These would all exercise the computer's perceptual abilities, abstract pattern recognition, analogy making, and creativity.
What if a psychologist interviewed a computer and gave it all kinds of tests... and the computer was determined legally insane? Would that qualify as artificial intelligence?
I’d be reasonably satisfied if a test was set up whereby I interacted with several ‘entities,’ some AI, some human, and after lengthy interactions I still couldn’t tell which were human and which were AI.
I say this, because as we all know, the true goal of AI is to be able to create games where computer controlled entities act so much like humans that we forget that they’re not.
You probably all know there's been a lot of effort put into computer chess. (The rules and goals are clearly definable mathematically, and reasonably easy to encode.)
However, the computer certainly doesn't analyse the same way as a human. Chess players' thought processes are little understood, but probably involve a lot of pattern recognition. The computer just analyses ahead, considering every possible move, and relying on simple material counting for each end position to decide which is the best move.
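In code, the core of that "look at everything and count the wood" approach is only a few lines. This is just a sketch; the `position` object and its methods (`pieces`, `to_move`, `legal_moves`, `apply`, `is_terminal`) are hypothetical stand-ins for a real move generator:

```python
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}   # the usual material values

def material(position, side):
    """Simple material count: my pieces minus the opponent's."""
    score = 0
    for piece, owner in position.pieces():                # hypothetical interface
        value = PIECE_VALUE.get(piece.upper(), 0)
        score += value if owner == side else -value
    return score

def search(position, depth):
    """Negamax: try every legal move to a fixed depth, then fall back
    on the bare material count at the leaves. No 'understanding' anywhere."""
    if depth == 0 or position.is_terminal():
        return material(position, position.to_move()), None
    best_score, best_move = float('-inf'), None
    for move in position.legal_moves():
        score, _ = search(position.apply(move), depth - 1)
        score = -score                                    # good for the opponent is bad for us
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```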
There are also 3, 4 and 5 man databases, which were derived ‘backwards’, i.e. start from all winning positions, retract a move and store all possibilities, retract again - well you get the idea. Summarise into a giant database and the computer plays these endings perfectly - but with ‘zero’ understanding!
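The retraction idea can also be sketched in a few lines. This is a heavily simplified outline of retrograde analysis for a win/loss/draw game; `positions`, `successors`, `predecessors` and `is_loss_for_side_to_move` are hypothetical helpers for some small endgame, and a real tablebase would also record the exact distance to mate:

```python
def build_tablebase(positions):
    """Retrograde analysis sketch. A position is a LOSS if the side to
    move is already mated, a WIN if some move reaches a LOSS, and a
    LOSS if every move reaches a WIN. Anything left unresolved at the
    end is a draw."""
    result = {p: 'LOSS' for p in positions if is_loss_for_side_to_move(p)}
    remaining = {p: len(successors(p)) for p in positions}   # unresolved moves per position
    frontier = list(result)
    while frontier:
        pos = frontier.pop()
        for prev in predecessors(pos):            # retract one move
            if prev in result:
                continue
            if result[pos] == 'LOSS':
                result[prev] = 'WIN'              # one move into a lost position is enough
                frontier.append(prev)
            else:                                 # pos is a WIN for its side to move
                remaining[prev] -= 1
                if remaining[prev] == 0:          # every move hands the opponent a win
                    result[prev] = 'LOSS'
                    frontier.append(prev)
    return result
```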
So I guess my point is that you can practically apply Turing’s test to a game of chess already - but the machine is NOT thinking like a man.
Well, determining if a computer is really “understanding” the problem it’s working on is well nigh impossible. A case in point is Searle’s Chinese Room. For those of you who don’t know, the idea is this:
Searle (a philosopher) is in a room with a big book of instructions. A slip of paper is put through the mail slot, with Chinese characters written on it. Now Searle (who doesn't understand a word of Chinese) takes the paper, follows some instructions in the book, writes some symbols on a new sheet of paper, and sticks the new paper back through the mail slot.
Searle's contention is that the instructions he was following could be detailed enough that it would appear to the person outside the room that there was an intelligent, understanding entity inside, but that in reality there was no intelligence in there, because he was just following instructions and never understood the characters.
My opinion (not original) is that while Searle may not have any understanding, the room as a whole (Searle + the rule book) probably did. From outside the room, it appears in every way that a human intelligence is operating in there, so why shouldn't that be considered actual intelligence? (In other words, it passes the Turing Test, so that's good enough for me.)
So I suppose what I’m getting at is that even though there’s no one part of the system that is thinking like a human, as long as it acts human, we should probably call it intelligent.
(Note that neither the room, nor the chess program in the previous problem can learn, which makes them act in very unintelligent ways. But I don’t think that affects my point)
Hunsecker,
Thanks for the Chinese room example. To me, although Searle does not have 'understanding', the rule book was written by someone who did understand things. But that system certainly can't 'learn' (a good point by you).
With chess, the 5 man database is effectively all the subject knowledge neatly categorised. Once they reach the 32 man database, chess ends as a contest. (or as a worthwhile game even earlier, but you see my point).
However chess-playing programs which analyse can be programmed to record their games and avoid identical losses - so technically they are ‘learning’.
Chess playing computers do not consider every possible move. With something like 10^120 possible lines of play in the game tree, this would be impractical. Chess playing computers like Deep Blue use neural networks and other artificial intelligence heuristics to recognize patterns, prune the game tree, recognize an opponent's strategies, choose their own strategies, etc. While this is probably a far cry from the human approach, I don't think it is fair to characterize it as simplistically as you did.
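To be fair, "pruning the game tree" doesn't require anything exotic: the standard trick is alpha-beta pruning, which returns the same answer as the full search while skipping whole subtrees the opponent would never allow. A sketch, using the same sort of hypothetical `position` interface as the earlier search example, plus a hypothetical `evaluate` function:

```python
def alphabeta(position, depth, alpha=float('-inf'), beta=float('inf')):
    """Negamax search with alpha-beta pruning."""
    if depth == 0 or position.is_terminal():
        return evaluate(position)                 # static evaluation (hypothetical)
    best = float('-inf')
    for move in position.legal_moves():
        score = -alphabeta(position.apply(move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                         # cutoff: the rest of this subtree can't matter
            break
    return best
```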
hunsecker, you wrote:
I think you missed Searle's point. The true test of intelligence is not how a well understood system is exercised, but how the intelligence can be used to arrive at a solution outside the defined boundaries of the current rule set. Think of this example: let's say that the inputs in Searle's room are spoken language rather than symbols. Searle has been trained to recognize the phonemes and translate these into symbols that he can then manipulate to arrive at an answer. In other words, the speech-to-text rules are part of the system. However, Searle still does not speak Chinese. Now let's suppose that a Chinese-speaking individual arrives from Taiwan with a different dialect. Without an understanding of the language, Searle and the system will have a high probability of failure. Now reconsider the exact same scenario, except where the verbal communication is in two dialects of English. The system has a higher probability of success because Searle can bring in other rule sets and understanding to solve the initial translation problem.
The other point that Searle makes is that if the experiment were conducted in English, he would likely be able to make observations, predictions, and gain some knowledge of the mechanics of what he was doing, (i.e. to understand) and be able to extend his rule set autonomously to become more efficient or perhaps more accurate.
I think you're absolutely right that learning and adaptability are key features of intelligence. My point was that when you're just talking about a computer "understanding" a problem domain, from the outside you can't tell whether you're dealing with an expert system (the Chinese rule book) or a more sophisticated system that can manipulate abstract symbols and recognize patterns.
I was trying to use Searle's example as a way of talking about understanding, not intelligence in general. I admit that I haven't actually read his work; I've only read about it second-hand from (IIRC) Hofstadter, Penrose, and a philosophy professor of mine (a strong AI proponent). So if I misrepresented what he was trying to say, I apologize.
You do bring up a good point, that for the translation room to be considered intelligent, it must be able to recognize patterns and make analogies (like the similarities between two dialects), and be able to modify its own rule set. I think that these things are what AI research ought to be focusing on. We have all kinds of expert systems, but no one considers them intelligent, and it seems to me neural nets are very effective for certain tasks, but are awfully limited in their scope.
JoeyBlades,
Yes sorry, I meant to say that computers look at every possible move in each position for a limited number of moves ahead (I think they’re up to about 8 for a middlegame position).
My knowledge of the programming is over 10 years old (when I helped out at a World Micro Chess Championship), so I’m interested in your updated info that modern programs use ‘neural networks and other artificial intelligence heuristics to recognize patterns, prune the game-tree, recognize opponent’s strategies, choose their own strategies’.
I thought there were terrible difficulties in pruning the tree and recognising any sort of strategy, and that most of the progress was in the hardware (having a processor for each square etc).
Just wanted to point out that, while computers' ability to play chess has progressed extremely well, they are fairly poor players of other games, such as checkers and Go - of course, this is simply because chess is the game that programmers have focused on.
My point, though, is that, while chess playing algorithms may have some ability to “learn”, it is extremely limited to specific cases. A person is able to take strategies learned from playing chess and adapt/apply those strategies to other applications–a computer can’t (or if it can, only very primitively). In order for a computer to be considered intelligent, or to have thought, it must be able to take strategies/algorithms from one area and adapt them to widely varying areas; I think we’re a long way from this happening to any significant degree.
It’ll be a while before computers take on Go players, but the world checkers champion is a computer. The branching factor is smaller in checkers than in chess, so it’s a pretty easy problem for computers to deal with.
Go is a much more interesting one, IMHO. Checkers, you can brute-force, as you can to a more limited extent with chess (you need some smarts about how to prune the decision tree, but at heart it’s still a brute-force strategy). But Go is another issue entirely.
You're right, the focus (at least for Deep Blue) has been on raw computing power. I think Deep Blue has 256 PowerPC processors operating in parallel. However, I saw somewhere (I think it was the Deep Blue FAQ) that the programmers of Deep Blue claim they don't use "artificial intelligence", but they do use "fuzzy logic" to assess the material value of potential positions, and they use databases of historical games to try to recognize familiar board positions and nudge the game toward winning solutions based on this captured knowledge base. Call it what you want, but this is technically an expert system.
The makers of Deep Blue have a bias, however. They sell hardware. It is in their best interest to push the computer down this highly parallel, brute-force path because they are aiming at applications that will ultimately benefit from this kind of raw computing power (applications like gene sequencing, financial investments, and airline scheduling). However, I've heard of another chess playing computer that was using a neural network to learn and improve its game. It's not mainstream yet. The one I heard about was being developed by Gerry Tesauro from the IBM Research Group, but I'm sure there are others...
Oh, and by the way. The irony is not lost on me that both Deep Blue and this neural network based chess computer are being produced by IBM… and yet these ‘islands’ in the ‘big blue’ don’t seem to be communicating…
JoeyBlades,
thanks for the info.
I think the Deep Blue team are rather secretive. I know that after Kasparov's second match with Deep Blue, he was suspicious there had been some human intervention. As a strong player (not of Kasparov's standard, obviously!), I was very surprised by one move myself. It was the sort of move a human would play on general (not calculated) grounds.
But, as you say, they’re trying to sell computers…