Artificial Intelligence question.

The learning algorithm used in the Emergent simulator is not backpropagation; it’s Leabra, a combination of error-driven and Hebbian learning. It’s based on the principles of long-term potentiation/depression (LTP/LTD) and is biologically plausible. There are several approaches to cognitive modeling: one tries to discover the general principles that guide learning, and another works to recreate the brain from the bottom up. Computational Cognitive Neuroscience attempts the former, IBM attempts the latter. These two approaches will meet somewhere in the middle.
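To make that concrete, here’s my own toy sketch of the flavor of Leabra’s rule (not the actual Emergent code): a single weight update that mixes a self-limiting Hebbian term with a contrastive, error-driven term computed from an “expectation” (minus) phase and an “outcome” (plus) phase.

[code]
import numpy as np

# Toy sketch of a Leabra-flavored update (NOT the actual Emergent implementation):
# mix a self-limiting Hebbian term with a contrastive error-driven term.
# x*/y* are pre- and post-synaptic activations; the "plus" phase includes the
# outcome/target, the "minus" phase is the network's own expectation.

def mixed_update(w, x_minus, y_minus, x_plus, y_plus, lrate=0.01, k_hebb=0.01):
    hebb = y_plus * (x_plus - w)                  # self-limiting Hebbian term
    err = x_plus * y_plus - x_minus * y_minus     # contrastive (error-driven) term
    return w + lrate * (k_hebb * hebb + (1 - k_hebb) * err)

w = np.zeros(4)
x_minus, y_minus = np.array([1., 0., 1., 0.]), 0.2   # expectation phase
x_plus,  y_plus  = np.array([1., 0., 1., 0.]), 1.0   # outcome phase
w = mixed_update(w, x_minus, y_minus, x_plus, y_plus)
print(w)
[/code]

The point of the mix is that the Hebbian piece keeps representations statistically sensible while the error-driven piece does the task learning; no weights are ever shipped backwards through the network the way backprop requires.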

Characterizing all neural network models as simple linear algebra transformations (SVD/PCA) is disingenuous. Henry Markram of the Blue Brain project has simulated 15 different types of neurons on Blue Gene/L and hopes to push the resolution of his simulation down to the molecular level so that he can better study gene expression and protein synthesis in neural models. The electrical properties of his models to date are almost identical to those measured in real brains, and the simulations are on the scale of a rat brain; there is every reason to believe that he is on the right track.

Another recent advance in biologically plausible simulations of learning in humans is the PVLV system for reinforcement learning, an extremely detailed model of dopaminergic learning in the brain that follows the anatomy as precisely as possible. Contrary to your claim that the more brain-like neural network models become, the worse they perform, leading researchers have found the opposite and are ramping up their efforts. Stanford’s Neurogrid is an indication of this: while simulated neurons face the formidable challenge of a geometric increase in the communication costs of sender- and receiver-based models, neurons emulated in silicon are able to sidestep that challenge. Considering that they are based on the same electrical properties as the brain, I see little reason to suspect a priori that they will encounter insurmountable roadblocks. IBM’s molecular models of neural networks will discover the pieces of the low-level biology that are critical to learning, Computational Cognitive Neuroscience will discover the high-level learning mechanisms, and then we can put them on chips and provide plausible constraints and guidance to evolutionary algorithms, letting that process once again figure out the details. Only this time we’ll be watching. I have yet to see evidence that a) this is not the strategy being taken and b) there are any substantial reasons to believe it will not work.
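For anyone who hasn’t run into these models, the common thread is a dopamine-like reward-prediction-error signal. PVLV itself splits that prediction into separate “primary value” and “learned value” pathways to follow the anatomy; the toy below only shows the generic temporal-difference flavor underneath it, with made-up state and parameter names, so don’t take it as the PVLV architecture.

[code]
import numpy as np

# Toy sketch of the reward-prediction-error idea that models like PVLV build on.
# This is generic temporal-difference learning, NOT the PVLV architecture itself
# (PVLV splits prediction into separate primary-value and learned-value systems).

n_states, gamma, lrate = 5, 0.9, 0.1
V = np.zeros(n_states)                      # learned value of each state

def td_step(s, s_next, reward):
    """Return the dopamine-like prediction error and update V in place."""
    delta = reward + gamma * V[s_next] - V[s]
    V[s] += lrate * delta
    return delta

# A cue at state 0 reliably leads to reward at state 4.
for _ in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s + 1 == n_states - 1 else 0.0
        td_step(s, s + 1, r)

print(V)   # value propagates back toward the cue, as dopamine recordings suggest
[/code]

As I understand it, the error term delta, not the value itself, is what dopamine recordings resemble: it bursts for unexpected rewards and migrates back to the predictive cue as learning proceeds.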

I was hoping that I’d get to add to the discussion.

That explains part of why your objection isn’t quite right. Not all ANNs use backprop. For instance, your machine learning lecturer has a section on Hopfield networks, which implement a recurrent “energy minimization” scheme rather than backprop. Hopfield is the obvious name to look up, and IIRC Rumelhart and the PDP group did a lot of related work if you want more information.
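A bare-bones sketch of the idea, from memory (my own illustration, not from anyone’s lecture notes): store patterns with a Hebbian outer-product rule, then recall by flipping units so the energy E = -0.5 * s'Ws never increases. No backprop anywhere.

[code]
import numpy as np

# Minimal Hopfield network sketch: store patterns with a Hebbian outer-product
# rule, then recall by asynchronously flipping units, which lowers the energy
# E = -0.5 * s.T @ W @ s until the state settles into a stored pattern.

np.random.seed(1)

def train(patterns):
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W / n

def recall(W, state, steps=100):
    s = state.copy()
    for _ in range(steps):
        i = np.random.randint(len(s))        # asynchronous update
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train(patterns)
noisy = patterns[0].copy(); noisy[:2] *= -1  # corrupt two bits
print(recall(W, noisy))                      # usually recovers patterns[0]
[/code]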

In addition, wasn’t there some work – probably by Grossberg at Boston University – that demonstrated that human memory was accurately described by adaptive resonance theory? I don’t recall just how solid the connection (hah!) was, hence the phrase “accurately described” rather than something stronger.
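From what I remember of it, the core loop of ART (at least ART-1, the binary version) is simple enough to sketch. This is a from-memory simplification with made-up parameter values, not Carpenter and Grossberg’s full formulation: pick the best-matching category, accept it only if it passes a vigilance test, otherwise reset and try the next, or commit a new one.

[code]
import numpy as np

# Rough from-memory sketch of ART-1-style clustering of binary inputs:
# choose the best-matching category, accept it only if it passes the vigilance
# test, otherwise try the next (or create a new category). Not the full
# Carpenter-Grossberg formulation, just the resonance/reset idea.

def art1_step(categories, x, rho=0.7, beta=0.5):
    order = sorted(range(len(categories)),
                   key=lambda j: -np.sum(x & categories[j]) / (beta + categories[j].sum()))
    for j in order:
        match = np.sum(x & categories[j]) / x.sum()
        if match >= rho:                      # resonance: commit and learn
            categories[j] = x & categories[j]
            return j
    categories.append(x.copy())               # no resonance: new category
    return len(categories) - 1

cats = []
for x in np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], dtype=int):
    print(art1_step(cats, x))
[/code]

The vigilance parameter rho is the stability/plasticity knob that made ART attractive as a memory model: high vigilance gives many narrow categories, low vigilance a few broad ones.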

To be clear, I’m only saying you’re wrong because of the absoluteness of your unqualified statement (“nothing like real brains”). One research area that I’ve occasionally come across is the attempt to add neuron spiking, hormone modeling, and other biological features to ANNs. AFAIK, none have yet paid dividends, at least partially due to your point about the serial nature of processing. Of course, even if (when?) they do, at some level they’ll still be “nothing” like “real” brains; that’s an argument that I’ll try to avoid as a waste of time.
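To give a flavor of what “adding neuron spiking” means in practice, here’s a bare-bones leaky integrate-and-fire neuron (my own illustration with arbitrary constants, not taken from any of that research):

[code]
import numpy as np

# Bare-bones leaky integrate-and-fire neuron: membrane potential leaks toward
# rest, integrates input current, and emits a spike (then resets) when it
# crosses threshold. This is the kind of dynamics "spiking ANN" work adds on
# top of the usual static activation functions.

def simulate_lif(current, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    v, spikes = v_rest, []
    for t, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant input current for 200 time steps produces a regular spike train.
print(simulate_lif(np.full(200, 20.0)))
[/code]

The dynamics themselves are that simple; the hard part, as far as I can tell, is getting the spike timing to carry information that standard ANNs can’t.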

ISTM that the link is supposed to show that people are working on improving computational performance via emulating brain structure. Of course, it doesn’t do all that much to undercut (what I perceive as the intent of) your statement about ANNs, which concerns silicon intelligence.

On a related note, I am rather curious about the progress made in using FPGAs in ANNs (cited in that link). I was underwhelmed by FPGAs in the cursory exploration I made into them (different research area, as a technique for fault-tolerance). See, ISTM that one of the major problems – if not THE major problem – with ANNs is that they generally have a fully pre-determined structure. Anyone who has worked with them knows the “black art” that is determining an appropriate number of neurons, layers, connections, etc. Although I’ve seen a couple techniques for manipulating the structure (like “neural gas” or pruning connections), nothing comes close to approaching even a small bit of the plasticity and dynamic nature of a brain.
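To be concrete about the kind of structure manipulation I mean, magnitude-based connection pruning is about the simplest version (my own toy illustration, nothing to do with the FPGA work in the link):

[code]
import numpy as np

# Toy illustration of one structure-manipulation trick mentioned above:
# magnitude-based connection pruning. Weights whose absolute value falls below
# a percentile threshold are zeroed (and would be frozen/removed in practice).

def prune_smallest(weights, fraction=0.5):
    threshold = np.percentile(np.abs(weights), fraction * 100)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned, mask = prune_smallest(W, fraction=0.5)
print(mask.sum(), "of", mask.size, "connections kept")
[/code]

Which only underlines the point: zeroing the smallest weights after the fact is a far cry from a brain growing and retracting connections while it runs.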

And all this talk of AI, ANNs, and computability ignores arguments like those put forth by Fodor in The Mind Doesn’t Work That Way. (Not that I recommend the book, as I found he leaves large swaths of ground ill-explained, at least partially due to his terseness; it is only 100 pages after all. A very dense 100 pages that took me a long time to get through; and I’m still unsure that I actually grok it.) It’s been a couple of years, but I think his argument was that even if the brain uses “modules” (as the “computational modularity of mind” theory posits), the theory fails either on computational intractability or on the problem of global vs. local context. Either way, from my (probably muddled) understanding, the issue is that Turing computation is purely symbol processing – in either case, the theory fails to maintain the associations related to a given symbol. (This came up in a thread on Pinker a while back…I’ll see if I can dig it up.)

Cool stuff, though. All of it.

A link to the Pinker thread that I referenced before (from April 2006): The Blank Slate. I don’t recall if the discussion would be all that relevant to this thread or not, as it was specifically centered around the nature vs. nurture debate.

alterego – you wouldn’t happen to work with (or know) Mozer, would you? I applied to Boulder for grad school (years ago now) based on his NN work. Got rejected, though, so it wasn’t to be. :frowning:

At any rate (and as a total hijack, my apologies), it seems that you’re pretty conversant with some state of the art research concerning NNs. Glad to see some more concrete references corresponding to my vague allusions. As someone familiar with that area, might you be able to suggest some need-to-read references? Especially concerning adaptive NNs, if you know of any…that’s a particular interest of mine.

You touch a nerve, there. I do OK on IQ tests, for what that’s worth, but I’m often stumped when my wife asks “what did I just tell you?”

I’m not as well read on the subject as all you folks are, but my sense of things is that we’ll have AI when a computer can regularly screw up, see the error, and decide it likes the wrong answer better than the one it was looking for.

As interesting as it is, I’ll ignore the fact that the implementation of 20q is a NN, as it brings to mind an amusing anecdote:

My wife and I were driving from IN to NJ to visit my parents. She decided we’d play 20 questions to pass the time, which I wasn’t all that enthralled with, but hey – 12+ hours of driving will lead to all sorts of diversions. After one round of her asking questions (she got it in about 15, IIRC), it’s my turn to ask, and it went like this:
[ul]
[li]Me: Animal, vegetable, or mineral?[/li]
[li]Wife: Vegetable.[/li]
[li](A really long pause as I think…)[/li]
[li]Me: Is it a Christmas tree?[/li]
[li]Wife: :eek: How did you do that?[/li]
[/ul]
Got it in one (she considers the “animal, vegetable, mineral” question a freebie). Needless to say, we haven’t played since. :smiley:

Just for fun, I played 20Q thinking of a Christmas tree. Most of the questions were pretty normal, but it did ask me “Can you control it?” and “Can you love it?”.

There is a continuity, no doubt about that. But when we’re talking about intelligence in the sense of consciousness, there is a difference. The difference might be from a richer interconnect structure, or some mutation that allows the brain to think about thoughts, or something else. But just piling on more neurons is unlikely to do it - just like adding more processing power won’t make a computer intelligent.

Piling more bags of flour onto a pile does not make a cake. Neurons are obviously necessary for intelligence, but not sufficient by themselves - unless you think there is some threshold of neuron count where intelligence (and I mean consciousness) begins.

I think I said above that my opinion is that the first AI will be from running a human brain simulator, based on some person - so your links are interesting, and I’ll check them out if I have time.

It depends on what level of primitives you’re talking about. If you consider the neuron to be a functional unit, like an adder, then there is limited parallelism because there are a limited number of functional units that can be kept busy (but we’re not using all of our brains all the time either). If a neuron is a gate or flip-flop, then a computer is just as parallel. In fact, one of the big problems today is figuring out how to make these things a little less active in order to save power and reduce heat.

Are people perhaps confusing intelligence with learning? Most non-conscious animals learn, and neural networks certainly do. But when does more learning potential flip over into intelligence? (And the I in AI means our kind of intelligence, not the admittedly high level my dog has.)

When did you last look at FPGAs? They’re getting more powerful and faster all the time with more types of functional units. I haven’t used them much (I did do some testing work a long time ago) but unless you are dealing in really high volumes, they are the way to go. They are much faster than a software solution, almost as flexible, and a hell of a lot cheaper than ASICs, thanks to high mask costs among other things.

I feel like you have conflated intelligence and consciousness, especially if you are referring to P-consciousness (phenomenal consciousness, such as what it is like to experience redness). Whether science can explain P-consciousness is a completely different issue in my mind – one that I am willing to talk about, but also one that I have not touched on in this thread. I don’t consider your argument about adding more neurons to be serious, although I did respond to it in another post. As I said there, the IBM approach of adding more neurons and the CCN approach of using as few neurons as possible will likely meet somewhere in the middle. Nobody believes that cooking up the largest possible bowl of neuron soup will lead to something that we would recognize as “smart like us.”

WRT the second quote, Google Scholar for sub-threshold activity. Neurons do lots more besides fire. They are always doing something.

Please give me an example of a non-conscious animal.

It was at least three years ago, and I feel the need to reiterate that, in addition to now being out of date, it was a very cursory look. While all the good things you say about them are right, my issue (as it relates to the plasticity and adaptability of the brain) was with the fault tolerance. According to my understanding, there needed to be a circuit schematic stored on the chip in case something went faulty. At best, if I remember correctly, it was a duplicate FPGA implementation (as found in “Reconfigurable Architecture for Autonomous Self-Repair”, Mitra et al., IEEE Design and Test of Computers, 2004). An amazing bit of hardware, that, but it automatically results in more than double the hardware (being a duplication; maybe compression would lessen that). And it relies on what they refer to as the precompiled-reconfiguration technique (which is just what it says, requiring even more hardware to store at least part of the circuit).

Furthermore, what got me about FPGAs is/was the same complaint that I have with ANNs and brain modeling – what I saw was very much non-dynamic, at least once it was designed and put into operation. As impressive as the work is – and don’t get me wrong, it’s very impressive – it’s also very much unlike the brain.

If you recall, I’m a software, not hardware, guy, so I’m perfectly willing to accept that either I was (and remain) ignorant or that a lot of progress has been made. If that’s the case, I’ll ask you the same thing I asked alterego – are there any need-to-read references you might suggest?

Ah, but you’ve left out an important detail: how many questions did it take to get the answer?

What I thought was very cool about the 20q site was that whoever runs it uses a neural net as the backend. If you’ve seen the hand-held, battery operated version (used to be available at Sharper Image), they took the NN from the website, did some analysis on it to prune it down to the most asked items (IIRC, about 20K answers are stored), then put that on a chip.
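I have no idea what the real 20q net looks like inside, so don’t take this as its actual mechanism, but the flavor of the problem can be sketched without any NN at all: keep a table of candidates and attributes and repeatedly ask the question that splits the remaining candidates most evenly. Everything below (objects, attributes) is made up.

[code]
import math

# Made-up toy sketch of the 20-questions problem (NOT the real 20q.net neural
# net): given a table of candidate objects and yes/no attributes, ask the
# question whose answer splits the remaining candidates most evenly.

OBJECTS = {                       # hypothetical miniature knowledge base
    "christmas tree": {"alive": 1, "edible": 0, "indoors": 1},
    "carrot":         {"alive": 1, "edible": 1, "indoors": 0},
    "rock":           {"alive": 0, "edible": 0, "indoors": 0},
}

def best_question(candidates):
    def split_entropy(q):
        yes = sum(OBJECTS[c][q] for c in candidates)
        p = yes / len(candidates)
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return max(OBJECTS[next(iter(candidates))], key=split_entropy)

print(best_question(set(OBJECTS)))   # asks about the most informative attribute
[/code]

The neural-net version presumably learns those object/attribute associations from players’ answers instead of having them hand-entered, which is what makes it interesting.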

Nifty stuff; I hope I didn’t ruin anyone’s marvel by (sort of) exposing its operation.

Yeah, you’re right, I’d misremembered what he had said. Here’s his reply:

:smack:

The memory can be simulated in the set of rules for transcription given to the guy in the room. Such a set of instructions can be written in a way which formally simulates the reading and writing of information to a memory.

It just has to include instructions like “If the first question asked of you was…” or “If the second-to-last question asked of you was…” Of course, in an actual Chinese Room these history-referencing rules would be incredibly more complicated, but nobody said the CR was supposed to be realistic.
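As a toy illustration of that point (obviously nothing like a workable rule book), a completely stateless lookup keyed on the whole history of questions behaves exactly as if it remembered:

[code]
# Toy illustration: a completely stateless rule table can still behave as if it
# remembers, because its rules are keyed on the entire history of questions so
# far. (Obviously nothing like a workable Chinese Room rule book.)

RULES = {
    ("what is your name?",): "Call me Room.",
    ("what is your name?", "what did i just ask you?"): "You asked my name.",
}

def chinese_room(history):
    # The "guy in the room" just matches the full history against his rules.
    return RULES.get(tuple(q.lower() for q in history), "I do not follow the question.")

print(chinese_room(["What is your name?"]))
print(chinese_room(["What is your name?", "What did I just ask you?"]))
[/code]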

Searle’s point was simply that the fact that the guy in the Chinese Room can spit out intelligently coherent responses does not in itself mean he has any understanding of what he is doing. Searle argued that in this case it’s clear the guy doesn’t understand what he’s doing, and so something other than the ability to spit out coherent and intelligent statements is required for understanding. Human brains have that something else. The guy in the CR does not. Searle is saying that if we want to do science about understanding, we’re going to need to look for that something else.

But of course that’s not what Searle asserted.

Again, Searle does not say otherwise.

-FrL-

I have always assumed the I in AI was a general term that applied to all of the types of intelligence found in animals (and anything that we can mathematically create). Also, I have always assumed that what separates humans from other animals is the extent of our abilities in certain areas. It’s also possible there are mental processes in animals that are far more advanced than our own.

Is there an accepted definition of intelligence w.r.t. AI?

If there is, I’d sure like to know what it is…

The best one I’ve heard is “intelligence is whatever computers can’t do yet.” (In case it’s not clear, I’m only semi-serious.)

It was under 20, but not by much.

Are you sure? It sometimes seems like a large part of the anti-AI crowd uses that as their operational definition.

This argument is interesting because it’s equally seen to support both sides of the issue. (That is, it can be taken to support or refute the notion that humans will eventually create Strong AI.) I was interested in it for a while, too, but it really doesn’t seem to add anything to the debate.