Artificial Intelligence question.

Yeah, and that’s the thing. Now, while I was delightedly self-satisfied at the time (and to this day, much to my wife’s chagrin, make jokes about it), as an AI guy the incident has been vaguely humbling (upon reflection) and has acted as a shibboleth. I mean, honestly…practically no direct input, the entire inference based on context and prior experience of my understanding (intuition?) of my wife’s thinking.

Yeah, it’s a handy definition to have around in a battle with the anti-AI crowd. But then, akin to above, I recognize just how much there is that computers can’t do. And I hate the hubris expressed a long time ago by Minsky, McCarthy, et al (or even Kurzweil these days), and force myself to swallow a bucket of salt when I use it. Hence, half-serious.

**This is fascinating!!** I hadn’t heard about it. Tell me, Peter Morris, when you say “reproduce and mutate”, what does that entail in this instance? And, can you point me to somewhere I can read more about these experiments that’s available to the public? (at least 50% of the time, the stuff I really want to know is contained in trade journals I can’t access, particularly with respect to psychology and science.)

This website will teach you the workings of genetic algorithms through interactive Java applets. It’s probably a better place to start than published literature if you just want to understand them. You might also enjoy The Blind Watchmaker applet.
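If it helps to make “reproduce and mutate” concrete before digging into those applets, here’s a bare-bones sketch of the loop a simple genetic algorithm runs. The fitness function (count the 1-bits) and all the parameters are made up for illustration; it isn’t taken from any of the experiments mentioned above.

```python
import random

# Toy genetic algorithm: evolve bit strings toward all-ones.
# The fitness function and parameters here are illustrative only.

GENOME_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.02
GENERATIONS = 50

def fitness(genome):
    # Count of 1-bits: the all-ones string is the optimum.
    return sum(genome)

def reproduce(parent_a, parent_b):
    # Single-point crossover: child takes a prefix from one parent
    # and the rest from the other.
    point = random.randrange(1, GENOME_LEN)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Keep the fitter half as parents, then refill the population
    # with mutated offspring of random parent pairs.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(reproduce(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```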

Or these films.

So are flip-flops. Even if the input is not changing, there is some internal activity. DRAMs have a lot. The question is: are the internal oscillations involved in the processing of information or control, or just a keep-alive mechanism?

I guess I’m misunderstanding what you mean by intelligence or consciousness. I have no doubt that we will be able to mimic planaria very soon.

My point has been that our lack of success in what I call AI has come from a lack of understanding, not a lack of processing power. If processing power is what you need, we could put the simulation on the net. It wouldn’t be real time, but I bet you would be able to get something going, if you let it run long enough and set up the interconnection network properly. I’ve seen speculations for decades, some up close. But I haven’t seen anything that looks like a true step to modeling intelligence - in the sense that we are looking for intelligent aliens, not alien slime molds.

I’m very familiar with that work. I go to Ed McCluskey’s seminars whenever I have a chance. It is indeed good stuff, but you shouldn’t take it to mean that real FPGAs suffer from the kind of problems Subhasish is solving. I see field failure data, and I don’t see FPGAs having high failure rates - I’d guess that their MTTF is in the billions of hours. The cool thing about FPGAs is that, since they are reconfigurable, during production test you can reconfigure them to be easily testable, so they have a big advantage over ASICs.

First of all, FPGAs are indeed nothing like the brain. So their use would be to migrate compute-intensive parts of the software into hardware. Belle, the Bell Labs chess-playing computer, had a lot of stuff migrated into dedicated hardware.

As for documentation, the two major companies are Xilinx and Altera - just add .com to get to their websites. FPGAs these days are programmed by writing code in a hardware description language (Verilog or VHDL) and mapping it to the FPGA. It is a lot like software these days. A publisher sent me a review copy of a book on VHDL and FPGAs. It isn’t at all close to the things I review, so if you’re interested send me a private message and I’ll send it to you.

I’ve seen a few papers on FPGAs being dynamically reprogrammed during a process. I don’t quite see how that capability would be useful for you, since it is easier to change the interconnects and weights in software, but it is one capability they have that normal hardware doesn’t.
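To show what I mean by it being easier in software: the “interconnect” of a software neural net is just a couple of arrays you can overwrite at any time, with no reconfiguration cycle. A toy sketch (the network, weights, and numbers are arbitrary, not tied to any real framework or design):

```python
import numpy as np

# Toy illustration of why rewiring in software is cheap: the "interconnect"
# is just a connectivity mask and a weight matrix, both plain arrays.
rng = np.random.default_rng(0)

n_neurons = 8
weights = rng.normal(size=(n_neurons, n_neurons))
connected = rng.random((n_neurons, n_neurons)) < 0.3   # sparse wiring

def step(activity):
    # One update step: only wired connections contribute.
    return np.tanh((weights * connected) @ activity)

activity = rng.normal(size=n_neurons)
activity = step(activity)

# "Reprogramming" the network is just overwriting entries -- no
# hardware reconfiguration step needed.
weights[2, 5] = 0.9          # change a weight
connected[0, 7] = True       # add a connection
connected[3, 1] = False      # remove one
activity = step(activity)
```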

They may not be a good match for what you want to do, but if you want to spend the time to learn about them, and have a computation bottleneck, they might come in handy.

When I was taking AI from Pat Winston, I certainly didn’t get the impression that they would be satisfied with systems at hamster level. Yes, they did do vision work, but that was related to their more advanced work. The problems they set 30 years ago, as I said, were about solving equations, finding directions, dealing with blocks, not about finding grain and mating. Maybe they’ve set their sights a bit lower since.

Given that nobody has come close to matching a hamster’s intelligence yet, I think it’s a pretty reasonable goal to start there.

I make the assumption that hamsters and other animals have these abilities (otherwise I don’t think they would be able to survive in this competitive world):

  1. ability to abstract information
  2. ability to incorporate itself into a mental simulation of its environment
  3. ability to plan (no idea how far into the future, 1 second, 3 minutes, 1 week?)
  4. ability to solve problems creatively

To me, these are all key parts of intelligence, and that is why I typically include animals when I think of intelligence.

My AI professor, not that I agreed with him on everything (his goal was to reformulate AI in terms of machine learning), said that he would recognize something as intelligent if it would do the dishes for him. Round them up, clean them off, dry them and put them in the cupboard and all that.

I owned hamsters for years, and I saw no sign of 1 and 4. 2 and 3 would have been very hard for me to detect; maybe there are some experiments showing this. I once closed one in a drawer in my kitchen - he never scratched at it or anything. (Happily I found him a few days later, and he was still okay.)

On the other hand my border collie can do at least 1, 3 and 4. I very much expect he can do 2 also.

Terry Winograd’s blocks world work from when I was in college could accomplish most of that virtually. We’re a long way away from the mechanics, but that isn’t AI.
But if your professor thinks this, I can see where you are coming from. The question is whether improvements in machine learning will lead to intelligence, or if it climbs a hill with a false maximum. It will be useful, like much other AI work, but I personally don’t think it will lead to intelligence. An intelligent computer will no doubt utilize machine learning, however.

I always felt that Searle’s Chinese Room argument was fundamentally flawed, because it seemed to me intuitively that the “book” or “cards” or whatever that are providing the answers to the questions would have to be so complex in order to convincingly simulate a conversation with a conscious being that there would be no way to argue that they couldn’t actually be conscious … like how it seems quite impossible that an individual neuron could be conscious, but a big network of 100 billion of them where each one is connected to thousands of others is so tremendously complex that it no longer seems implausible for it to be conscious.

And sure enough, in the book Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI, the full text of which is available at that URL, Kurzweil addresses Searle’s arguments quite effectively, at least in my opinion. See Searle’s chapter and Kurzweil’s response. (I was really only interested in the Searle chapters and Kurzweil’s response, because most of the other authors are Intelligent Design folk, which is understandable since the book is published by the Discovery Institute. I wish I’d known Kurzweil had it available for free on his web site, so I wouldn’t have had to send money to the Discovery Institute to buy the book…)

Apologies for the length of this response, but I feel that it required a certain level of detail. Before getting to that, I realized I made an error in a previous post when I cited Rumelhart in relation to Hopfield nets. I was actually thinking of Stuart Kauffman and his light-bulb networks. The only reason I realized it was that Kauffman just happened to be mentioned on page 3 of the ABCNews article I read this morning: Orderly Universe: Evidence of God? (Paulos’ position is “no”, for those who don’t want to read it.)

I kinda figured you had more than passing experience with FPGAs, so in general, I’m simply going to defer to your statements and judgment. However, I’d like to be more explicit about what I said – in one sentence, my conception of FPGAs prior to actually looking into them was unrealistic. Put another way, I had developed an impression of magic (i.e., sufficiently advanced technology) that was beyond the actual advances made and was thus underwhelmed.

For instance, the MTTF you bring up (as impressive as it is) is much more an engineering feature that doesn’t address the theoretical science (of FPGAs in AI, implementing ANNs in particular), which is where my interests lie. And I don’t mean to start such a debate, as the dividing line is murky (at best), nor am I denigrating engineering. Far from it, in fact; as time goes by, I’m ever more impressed with engineering successes. But, as Patterson states (in his ROC paradigm), given an arbitrarily long timeframe, all hardware will fail. Accepting that truism, the theoretical question becomes: how are such failures handled?

As an engineer, I think you’d agree that duplicating a full complement of hardware is not the preferable path to follow (even though it may be the only feasible path for the desired task and thus unavoidable). Not only does additional hardware cost real money in material, but it also adds complexity, etc., that further augment cost (even if it’s only indirectly). Concerning failure recovery, I was just fairly surprised that the solutions I came across were of two types: (1) duplicate hardware or (2) pre-designed, static layouts (that require on-chip storage, meaning more hardware). I don’t know, perhaps this is rendered (practically) irrelevant by Moore’s law, but that wouldn’t change the theoretical point. Reiterating for clarity, my being underwhelmed by FPGAs is likely more my own overblown expectations than anything else.

But all that still doesn’t address my thoughts about using FPGAs and ANNs to model the brain. More specifically, the brain seems to be (almost) infinitely malleable and plastic, part of which is due to structural changes over time. For instance, in some stages of brain development, neurons grow or disappear. In some, it’s the number of connections (i.e., synapses) or dendritic branching that change. There’s also the “rewiring” around damage. As I said, my reservation concerns structural changes, going beyond simply updating connection weights. And I note that I’m not claiming impossibility here, as I actually believe that some day we will develop the algorithms and techniques both for hardware and software; rather, I’m simply pointing out that there is (at least) this one large gap that I see as a huge, pertinent difference between brains and ANNs/FPGAs.

Now that’s something that I’ve missed that would address the above. Can you throw out a particular name, paper, project, or specific lab that I might look for? (I should say that my interest here is purely familiarity and food-for-thought, nothing like opening/pursuing a line of research or an actual implementation.)

(Emphasis added)

The above seems to rely on a premise that consciousness is simply a matter of complexity.

-FrL-

Not really – clearly we have no idea what “consciousness” is or what it is about the brain that creates it. It’s just that with something relatively simple like a guy following simple rules to look something up in a book or moving cards around (or a single neuron, for that matter) it seems intuitively obvious to say “No way can this be conscious or exhibit behavior that seems conscious,” and the Chinese Room concept relies on that intuition holding all the way up the chain to “Since none of the individual pieces is conscious, the whole can’t be conscious either.”

But when you get to a massively complex collection of simple things, that exhibits the ability to converse like a human being and convincingly argue that it is conscious, it’s not so easy to rely on intuition and say “That can’t possibly be conscious” anymore.

Why do I need malleable hardware when I can simply change my pointers?

A closer look at NeuroGrid:

Mine bolded. Does this not qualify as plasticity?

IIRC, IBM has made chips that can detect the kind of computation being performed and physically reorganize themselves to make the process more efficient. There is also amorphous computing.

These are separate questions. Since all software can be implemented directly in hardware, “malleable hardware” is a quantitative, not qualitative, difference (cf pseudo-parallelism or multi-tape/head Turing machines). Hence, my belief that the problem with dynamic structure (as it relates to both FPGAs and ANNs) is the same.

“Changing pointers” doesn’t fully address structural plasticity, but indicates that a structure is already extant (but keep reading before objecting). So, while you rightly point out that NeuroGrid achieves a level of plasticity unusual in most ANNs, it’s still qualitatively different than how the brain operates. As your bolded part says, only the synaptic connectivity changes.

Now, clearly, it is possible to develop a system that also creates/destroys software synapses or neurons at run-time. With appropriate algorithms, it’s just as clear to me that similar things will be possible using FPGAs. So tell me, do such algorithms currently exist? At a more foundational level, do we even have a well-defined (and accepted) theory of how such systems would give rise to “intelligence” (however that’s defined)? If yes, then I simply plead ignorance and ask for citations. I’m not familiar with the work you mention from IBM nor amorphous computing, but they seem both interesting and worthwhile, and I thank you for pointing them out. I’ll have to look into them (can you give me a keyword or somesuch to search the IBM site? It’s huge, and it’d be nice to narrow it down).
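For what it’s worth, here’s roughly what I have in mind by run-time structural change in software: neurons and synapses as plain data structures, so growth and pruning are just inserts and deletes. It’s only an illustration of the bookkeeping, not a model of the brain, NeuroGrid, or any published algorithm:

```python
import random

# Sketch of run-time structural plasticity in software: neurons and
# synapses live in plain dictionaries, so growth and pruning are just
# inserts and deletes. Purely illustrative.

neurons = {i: 0.0 for i in range(5)}          # id -> activation
synapses = {}                                  # (pre, post) -> weight
for pre in neurons:
    for post in neurons:
        if pre != post and random.random() < 0.4:
            synapses[(pre, post)] = random.uniform(-1, 1)

def grow_neuron():
    # Structural change: a new unit appears and wires itself in sparsely.
    new_id = max(neurons) + 1
    neurons[new_id] = 0.0
    for other in list(neurons):
        if other != new_id and random.random() < 0.3:
            synapses[(new_id, other)] = random.uniform(-1, 1)

def prune_neuron(nid):
    # Structural change: a unit and all of its synapses disappear.
    neurons.pop(nid, None)
    for edge in [e for e in synapses if nid in e]:
        del synapses[edge]

grow_neuron()
prune_neuron(0)
print(len(neurons), "neurons,", len(synapses), "synapses")
```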

Again, to reiterate and make sure I’m clear, I’m not saying that ANNs or FPGAs can’t or won’t lead to great success. In fact, I think that they can and will (I subscribe to a pretty staunch strong AI position). But I also don’t believe in making unjustified, unqualified claims (e.g., In ten years, computers will surpass human intelligence). So, what I am saying is that, based on my understanding, the state of the art isn’t there yet. And is most likely still a long way off.

Did you see something that said something like plug an FPGA in and your problems are solved? If so, I don’t blame you for being disappointed. FPGAs are fundamentally very simple - a memory beneath, with the contents of the memory determining which of the hardware resources above get connected. You reload the memory, you get different connections. Their advantage is that they are relatively inexpensive, since all FPGAs of a class look the same, giving economies of scale, and they are easy to reprogram. I’m basically a software person also, so since I stopped feeding punch cards into a mainframe I’m used to incrementally debugging.
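To make that picture concrete, here’s a cartoon of a single FPGA logic cell in software: a 2-input lookup table whose function is entirely determined by four configuration bits. Reload the bits and the same cell is a different gate. Real cells have flip-flops, carry chains, and routing on top of this, so treat it strictly as a sketch:

```python
# Cartoon of an FPGA logic cell: a 2-input lookup table (LUT) whose
# behavior is entirely determined by 4 configuration bits. Reload the
# bits and the same hardware becomes a different gate. Real FPGA cells
# are richer than this, but the principle is the same.

class Lut2:
    def __init__(self, config_bits):
        # config_bits[i] is the output for input pattern i = (a << 1) | b.
        self.config = list(config_bits)

    def reprogram(self, config_bits):
        # "Reloading the memory" = changing the function.
        self.config = list(config_bits)

    def __call__(self, a, b):
        return self.config[(a << 1) | b]

cell = Lut2([0, 0, 0, 1])          # AND
print(cell(1, 1), cell(1, 0))      # 1 0
cell.reprogram([0, 1, 1, 1])       # now OR
print(cell(1, 0), cell(0, 0))      # 1 0
cell.reprogram([0, 1, 1, 0])       # now XOR
print(cell(1, 1))                  # 0
```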

Test and fault tolerance take more hardware - no getting away from that. While it is true that in the long run all hardware fails, the question is whether anyone is going to care when that happens. Your answer is different if you’re designing for a deep space probe or for the sound chip on a greeting card. I trust you are familiar with the bathtub curve. There are a certain number of early-life fails, then a long period of relatively little failure, then a period of increasing fails. We try to accelerate the early-life fails by temperature or voltage stress, so weak parts fail before you ever see them. The far end of the curve is way beyond the point where you are going to care. Your grant will run out, you’ll be retired, and the mechanical parts of the system like the fan and power supply will fail long before the FPGAs do. If you have a system with thousands of FPGAs then you will have to worry, and do something like be able to swap them out. But a simpler way is just checkpointing and restarting from the point of failure. I ran a simulation for my MS thesis on a PDP 11/20 which was a bit flaky. My simulator allowed checkpointing, so I would basically let it run some thousands of cycles, checkpoint, then run some more thousands, etc.
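The checkpoint-and-restart pattern is about as simple as it sounds: periodically dump the state, and on startup resume from the last dump instead of cycle zero. A minimal sketch (the filename, interval, and stand-in “simulation step” are placeholders I made up):

```python
import os
import pickle

# Minimal checkpoint/restart pattern for a long-running simulation:
# periodically dump the state, and on startup resume from the last
# dump instead of cycle zero. Filenames and cycle counts are arbitrary.

CHECKPOINT = "sim_state.pkl"
CHECKPOINT_EVERY = 1000
TOTAL_CYCLES = 10_000

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"cycle": 0, "state": 0}

def save_checkpoint(snapshot):
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(snapshot, f)

snapshot = load_checkpoint()
for cycle in range(snapshot["cycle"], TOTAL_CYCLES):
    snapshot["state"] += 1          # stand-in for one simulation step
    snapshot["cycle"] = cycle + 1
    if snapshot["cycle"] % CHECKPOINT_EVERY == 0:
        save_checkpoint(snapshot)   # a crash now costs at most 1000 cycles

print("finished at cycle", snapshot["cycle"])
```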

My point is that you should be able to do quite nicely with off-the-shelf FPGAs with very little special hardware or fault tolerance built in.

The hardware resources of the FPGA are a bunch of identical hardware blocks (newer ones have RAM and even processors on the chip). I suppose these might be able to model neurons, but if they can it is just by accident. So, I can say they are pretty reliable, but not necessarily useful.

As for the reference, it is something I saw. I think it was at the Custom Integrated Circuits Conference, but I don’t see a paper that rings a bell. It was from IBM, I believe. CICC had sessions on neural networks in hardware - I was involved in it from '89 - '92, so you can see why I don’t expect too much. The conference is still going, and they probably still have FPGA sessions, and neural network ones for all I know.

The brain is extremely efficient, so sure, we should strive to create computers that are more like that for practical reasons only. I am unaware of a convincing argument that the brain is the only possible arrangement of matter in the universe that can achieve intelligence (or phenomenal consciousness, if that’s what floats your boat).

An interesting article in Seed Magazine today discussing Henry Markram and the Blue Brain project: Out of the Blue: Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?

Your link is to some MIT stuff. The IBM reference might be the same as the one I saw.

The bottom line, though, is you want malleable hardware because it is faster. There is nothing you can do in hardware that you can’t do in software - and vice versa.