Apologies for the length of this response, but I feel it requires a certain level of detail. Before getting to that, I realized I made an error in a previous post when I cited Rumelhart in relation to Hopfield nets. I was actually thinking of Stuart Kauffman and his light-bulb networks. The only reason I caught it was that Kauffman happened to be mentioned on page 3 of the ABC News article I read this morning: Orderly Universe: Evidence of God? (Paulos’ position is “no”, for those who don’t want to read it.)
I kinda figured you had more than passing experience with FPGAs, so in general I’m simply going to defer to your statements and judgment. However, I’d like to be more explicit about what I said. In one sentence: my conception of FPGAs, prior to actually looking into them, was unrealistic. Put another way, I had developed an impression of magic (i.e., Clarke’s “sufficiently advanced technology”) that went beyond the actual advances made, and was thus underwhelmed.
For instance, the MTTF you bring up (impressive as it is) is much more an engineering achievement than an answer to the theoretical science (of FPGAs in AI, and of implementing ANNs in particular), which is where my interests lie. I don’t mean to start a science-versus-engineering debate, as the dividing line is murky (at best), nor am I denigrating engineering. Far from it, in fact; as time goes by, I’m ever more impressed with engineering successes. But, as Patterson states (in his Recovery-Oriented Computing, or ROC, paradigm), given an arbitrarily long timeframe, all hardware will fail. Accepting that truism, the theoretical question becomes: how are such failures handled?
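Just to put that truism in numbers (this is the textbook constant-failure-rate model, nothing FPGA-specific): with failure rate $\lambda$,

$$P(\text{failed by } t) = 1 - e^{-\lambda t} \longrightarrow 1 \quad \text{as } t \to \infty, \qquad \text{MTTF} = \int_0^\infty e^{-\lambda t}\,dt = \frac{1}{\lambda}.$$

So even a spectacular MTTF only stretches the timescale; the limit is still certain failure, which is why recovery, not prevention, strikes me as the interesting question.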
As an engineer, I think you’d agree that duplicating a full complement of hardware is not the preferable path to follow (even though it may be the only feasible path for the desired task, and thus unavoidable). Not only does additional hardware cost real money in material, it also adds complexity and the like, which further increases cost (even if only indirectly). Concerning failure recovery, I was fairly surprised that the solutions I came across fell into just two types: (1) duplicate hardware, or (2) pre-designed, static layouts (which require on-chip storage, meaning more hardware). I don’t know; perhaps this is rendered (practically) irrelevant by Moore’s law, but that wouldn’t change the theoretical point. Reiterating for clarity: my being underwhelmed by FPGAs is likely more a matter of my own overblown expectations than anything else.
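Backing up to type (1) for a moment, here is a toy Python sketch of that idea in its classic form, triple modular redundancy (TMR). All the names here are invented for illustration; real FPGA TMR is done at the netlist/routing level, not in software, but the sketch shows both why it works and why it costs roughly 3x the hardware.

```python
# Toy illustration of type (1): duplicate hardware, specifically triple
# modular redundancy (TMR) -- three copies of the same logic plus a voter.
from collections import Counter

def module(x):
    """Stand-in for one healthy copy of the replicated logic."""
    return x * 2  # arbitrary function; the logic itself doesn't matter

def faulty_module(x):
    """Same logic, but modeling a stuck-at-zero fault."""
    return 0

def majority_vote(outputs):
    """Return whichever value the majority of replicas produced."""
    value, _count = Counter(outputs).most_common(1)[0]
    return value

# The two healthy replicas outvote the failed one, masking the fault --
# at the price of ~3x the material (the "real money" point above).
replicas = [module, module, faulty_module]
print(majority_vote([m(21) for m in replicas]))  # -> 42
```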
But all that still doesn’t address my thoughts about using FPGAs and ANNs to model the brain. More specifically, the brain seems to be (almost) infinitely malleable and plastic, part of which is due to structural changes over time. For instance, in some stages of brain development, neurons grow or disappear. In others, it’s the number of connections (i.e., synapses) or the degree of dendritic branching that changes. There’s also the “rewiring” around damage. As I said, my reservation concerns structural changes that go beyond simply updating connection weights. And I note that I’m not claiming impossibility here, as I actually believe that some day we will develop the algorithms and techniques, both for hardware and software; rather, I’m simply pointing out that there is (at least) this one large gap that I see as a huge, pertinent difference between brains and ANNs/FPGAs.
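For whatever it’s worth, here is a minimal sketch of the distinction I mean, with a “network” reduced to a dict of synapses. Everything here is invented for illustration (it corresponds to no real ANN framework or FPGA flow): standard training touches only the weights, while the brain-like operations change the structure itself.

```python
# A minimal sketch: weight updates vs. structural plasticity.
# synapses maps (pre_neuron, post_neuron) -> weight
synapses = {("A", "B"): 0.5, ("B", "C"): -0.3}

def update_weight(pre, post, delta):
    """What standard ANN training does: topology fixed, weights change."""
    synapses[(pre, post)] += delta

def grow_neuron(name, inputs, outputs):
    """What brains do (and typical ANN/FPGA setups don't): add a unit."""
    for pre in inputs:
        synapses[(pre, name)] = 0.0   # new synapses, initially silent
    for post in outputs:
        synapses[(name, post)] = 0.0

def prune_synapse(pre, post):
    """Structural removal, e.g. developmental pruning or damage."""
    synapses.pop((pre, post), None)

update_weight("A", "B", 0.1)     # routine learning
grow_neuron("D", ["A"], ["C"])   # structural plasticity
prune_synapse("B", "C")          # "rewiring" around a lesion
print(synapses)
```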
Now that’s something I’ve missed, and it would address the above. Can you throw out a particular name, paper, project, or specific lab that I might look for? (I should say that my interest here is purely familiarity and food for thought, nothing like opening/pursuing a line of research or an actual implementation.)