The learning algorithm used in the Emergent simulator is not backpropagation; it's Leabra, a combination of error-driven and Hebbian learning. It is based on the principles of long-term potentiation/depression and is biologically plausible. There are several approaches to cognitive modeling: one tries to discover the general principles that guide learning, another works to recreate the brain from the bottom up. Computational Cognitive Neuroscience attempts the former, IBM the latter. These two approaches will meet somewhere in the middle.
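To make that distinction concrete, here is a minimal sketch of how a Leabra-style rule mixes an error-driven term with a Hebbian term. This is illustrative Python/NumPy, not the actual Emergent implementation; the function name and the `k_hebb` mixing parameter are my own, and the real algorithm adds inhibitory competition and other machinery omitted here.

```python
import numpy as np

def leabra_like_update(w, x, y_minus, y_plus, k_hebb=0.01, lr=0.1):
    """Simplified sketch of a Leabra-style mixed learning rule.

    Error-driven term: Contrastive Hebbian Learning (CHL), which compares
    sender/receiver co-activity in the expectation ("minus") phase against
    the outcome ("plus") phase.
    Hebbian term: a CPCA-style rule that moves weights toward the input
    pattern in proportion to the receiver's activity.
    """
    # CHL error signal: plus-phase co-activity minus minus-phase co-activity
    chl = np.outer(y_plus, x) - np.outer(y_minus, x)
    # CPCA-style Hebbian term: pull weights toward x where the receiver fires
    hebb = np.outer(y_plus, x) - y_plus[:, None] * w
    return w + lr * ((1 - k_hebb) * chl + k_hebb * hebb)
```

The small `k_hebb` default reflects Leabra's design choice that error-driven learning does most of the work, with the Hebbian term acting as a model-learning bias.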
Characterizing all neural network models as simple linear algebra transformations (SVD/PCA) is disingenuous. Henry Markram of the Blue Brain project has simulated 15 different types of neurons on Blue Gene/L, and he hopes to advance the resolution of his simulation to the molecular level so that he can better study gene expression and protein synthesis in neural models. The electrical properties of his models to date are almost identical to those found in real brains, and his simulations are on the scale of a rat brain; there is every reason to believe he is on the right track.
Another recent advance in biologically plausible simulation of human learning is the PVLV system for reinforcement learning, an extremely detailed model of dopaminergic learning in the brain that follows the anatomy as precisely as possible. Contrary to your claim that the more brain-like neural network models become, the worse they perform, leading researchers have found the opposite and are ramping up their efforts. Stanford's NeuroGrid is one indication of this: while simulated neurons face the formidable challenge of a geometric increase in the communication costs of sender- and receiver-based models, neurons emulated directly in silicon can sidestep that challenge. Since they are based on the same electrical properties as the brain, I see little reason to suspect a priori that they will encounter insurmountable roadblocks.

IBM's molecular models of neural networks will discover the pieces of low-level biology that are critical to learning, Computational Cognitive Neuroscience will discover the high-level learning mechanisms, and then we can put them on chips and provide plausible constraints and guidance to evolutionary algorithms, letting that process once again figure out the details. Only this time we'll be watching. I have yet to see evidence that (a) this is not the strategy being taken, or (b) there are any substantial reasons to believe it will not work.
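The dopamine signal that PVLV models is, at its core, a reward-prediction error. Here is a minimal delta-rule sketch of that idea; PVLV itself decomposes the signal into separate Primary Value and Learned Value pathways mapped onto specific brain areas, all of which this toy version omits, and the function name and learning rate are my own.

```python
def value_update(v, reward, lr=0.1):
    """One trial of delta-rule value learning. `delta` plays the role of
    the phasic dopamine signal: positive when reward exceeds the current
    prediction, negative when an expected reward is omitted."""
    delta = reward - v  # reward-prediction error
    return v + lr * delta, delta

# A cue reliably followed by reward: the prediction error shrinks as the
# value estimate converges on the delivered reward.
v = 0.0
for trial in range(100):
    v, delta = value_update(v, reward=1.0)
```

The point of the anatomical detail in PVLV is precisely to explain where and how the brain computes something like `delta`, not to replace this simple error signal with something unrecognizable.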