Is there a credible argument that computers cannot replicate the human brain?

The authors of that article underestimate the complexity of processors.
The article notes that a brain has 125 trillion synapses. Sounds like a lot, but even a small microprocessor has over a billion transistors, so 125 trillion synapses works out to 125,000 processors - well under a day’s worth of production for Intel, I’d suspect. (Qualcomm makes over a million chips per day, but I don’t know their complexity.) 3 years of production, and Bob’s your uncle. I think the standard Intel part has more than a billion transistors, but the order of magnitude is clear.
Yeah, more than half of those transistors are in memories, but it seems that the switches in the synapse support memory also, so it is a fair comparison.
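To put rough numbers on that, here’s the back-of-the-envelope arithmetic in a few lines of Python (the per-chip and per-synapse figures are just the loose numbers quoted above, nothing authoritative):

    # Order-of-magnitude check on the synapse-vs-transistor comparison.
    synapses = 125e12              # the article's synapse count
    transistors_per_chip = 1e9     # "even a small microprocessor"

    # If one synapse maps to roughly one transistor:
    print(synapses / transistors_per_chip)          # 125,000 chips

    # If each synapse really needs the article's 1,000 switches:
    print(synapses * 1_000 / transistors_per_chip)  # 125,000,000 chips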

I’d also expect that the design of the synapse is non-optimal, because evolution does not do logic minimization. That they identify 1,000 switches does not mean 1,000 switches are required.
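For anyone who hasn’t run into logic minimization, here’s a toy sketch using SymPy’s SOPform (which does Quine-McCluskey-style two-level minimization, if memory serves): a function specified as four minterms collapses to a single literal. The point is that the number of switches you can count in a description is an upper bound, not a lower bound, on what an optimized implementation needs.

    from sympy import symbols
    from sympy.logic import SOPform

    a, b, c = symbols('a b c')
    # Truth-table rows [a, b, c] where the function is 1
    minterms = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
    print(SOPform([a, b, c], minterms))   # prints: c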

Bigger than one computer - yes. Bigger than all computers on Earth - no.

Here is a link to information on multi-valued logic from the IEEE Technical Committee on Multiple-Valued Logic.
I don’t know how much is being done these days; in the early '80s it was fairly active. But we’ve shrunk voltages a lot since then, and that makes MVL less and less useful.
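For anyone curious what MVL looks like at the gate level, one common formulation (just a sketch, not how real MVL hardware is specified) generalizes AND/OR/NOT to min/max/complement over more than two levels:

    # Ternary (3-valued) logic sketch: AND -> min, OR -> max, NOT -> complement.
    LEVELS = 3            # signal values are 0, 1, 2

    def t_and(x, y): return min(x, y)
    def t_or(x, y):  return max(x, y)
    def t_not(x):    return (LEVELS - 1) - x

    print(t_and(2, 1))    # 1
    print(t_or(0, 2))     # 2
    print(t_not(1))       # 1

The catch, as noted, is that telling three or more voltage levels apart gets harder and harder as supply voltages shrink.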

If people look up 3D design they’ll probably get papers on stacking chips, which is being done now. There was a major effort on this around 1990, but it crashed and literally burned due to heat issues. How to test these guys is a hot topic these days.
Chips are 3D already in the sense of metal layers, though not transistors. I don’t even want to think about FIBing (focused-ion-beam editing) a true 3D chip.

There are interesting avenues to explore here, when comparing silicon to neurons. You could get on a bicycle (not to beat on a hackneyed meme) after twenty years of not touching one and rather quickly recover the ability to ride. The process appears to be somewhat similar to a computer loading a disused application from storage, though the human relearning process seems to be less instantaneous. Almost as though some of the actual skill set runs literally peripherally and must be loaded first, then copied out to where it is needed. By contrast, when electronically stored data degrades, it reaches the point where it is completely unrecoverable. Until we can create dynamic holographic storage, this will continue to be the case.

The other interesting difference between computer storage and biological memory is the metadata: in humans, the information and its structure appear to be unified; the metadata is inextricable from the actual data. We function by making associations between memories, and the connections are embedded within the memories themselves.

How many layers are there in the brain system? We have become accustomed to observing discrete layers and hierarchical structures, but the flow of information through the brain is not an obvious regular pattern that we might easily reduce. Hierarchies shift with context; structures of logic adapt to the moment. In a traditional OS we have fairly clearly defined boundaries of control and function; I suspect the similar kinds of boundaries in biological thought are more fluid and much less discernible.

Making a machine think like a human may not be the ideal end, but thorough study of the biological information storage and management system will be of great value – it will be at least as important as the question of raw processing capability.

Although a synapse is most likely non-optimal, from a functional standpoint it would probably still take many transistors to match the value-added functionality/computation it provides.

So far your smartphone is demonstrating it doesn’t really pass the Turing test.

Yet.

It appears that humans have some sort of caching system beyond the well-known short-term and long-term memory. And computer memory hierarchies include disks, which don’t degrade over time (at least not any faster than humans do). When solid-state disks are universally used, this will be closer to human memory in terms of not degrading. Plus, I’ve seen some studies that seem to indicate that memories get refreshed as well, and a memory can get distorted if the refresh process is interfered with, as with false memories.
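Here’s a cartoon of the refresh-and-distortion idea in code, strictly as an analogy to cache/DRAM-style refresh and not any kind of brain model (the decay threshold and noise are made up):

    import random

    class RefreshedStore:
        # toy store where recalling a trace refreshes it, and can also distort it
        def __init__(self):
            self.items = {}                  # key -> (value, age)

        def remember(self, key, value):
            self.items[key] = (value, 0)

        def tick(self):
            # everything ages; very old, unrefreshed items are simply lost
            self.items = {k: (v, age + 1)
                          for k, (v, age) in self.items.items()
                          if age < 10}

        def recall(self, key, noise=0.0):
            value, _ = self.items[key]
            if random.random() < noise:      # interference during re-storage
                value = value + " (distorted)"
            self.items[key] = (value, 0)     # the act of recall refreshes it
            return value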

I don’t know - the schema we design for a database has lots of metadata included. It is definitely a handy thing to do. The implementation is totally different, of course.
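For instance, with SQLite (the catalog query below is real; the toy table is obviously just for illustration), the schema is metadata you can query right alongside the data:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, what TEXT, felt TEXT)")
    con.execute("INSERT INTO memories (what, felt) VALUES ('first bike ride', 'terrifying')")

    print(con.execute("PRAGMA table_info(memories)").fetchall())      # the metadata
    print(con.execute("SELECT what, felt FROM memories").fetchall())  # the data

But the metadata sits in a separate catalog rather than being woven through the data itself, which is where the implementations really part ways.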

Just saying that the brain scientists who counted the number of switches don’t know the Quine-McCluskey algorithm. Yeah, you will need multiple transistors, but maybe not 1,000.

True. Conversely there are things like old phone numbers, boyfriend/girlfriend names, now-dead uncle’s occupations, etc., that if left unrefreshed in your memory for 20 years are unrecoverable even by intense effort. And it’s not just fact-like info that fades. Faces, map-like info, relationship- or connection-type info. All of it can fade to nothingness if unrefreshed.

However human memory is actually implemented, it’s got some weird and interesting higher-level properties.

When they finally understand everything that’s going on, and you include all of the computation and communication pre- and post-synaptic, and between the synapses and the astrocytes that surround them (and all of the other things they will discover that are computing and communicating around that zone/area/structure), I think it will take substantially more than 1,000 transistors to represent what’s going on.

A simple three-terminal transistor is probably just not the optimal building block if you want to emulate synapses and neurons. There is nothing to say that semiconductor designs cannot create elements with many inputs - there are already multi-gate FETs, and it isn’t exactly hard to see how a very complex field-effect device could be fashioned that gets much closer to the desired functionality in a very small device. Concentrating on the number of transistors simply isn’t the issue. Current research is leveraging existing technologies wherever it can, as that is the best way to make fast progress. Other groups are working on new processes and low-level technologies (memristors, for instance), but those will take time to be developed to the point where they are cost- and effort-competitive with existing semiconductor technologies. Fundamentally, however, there is no core reason why something arbitrarily close to wetware functionality cannot be created. Which gets us back to the OP.
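As a sketch of what a many-input element buys you functionally, here’s the textbook weighted-sum-and-threshold abstraction (the usual cartoon of a synaptic junction, not a claim about any particular device):

    # One many-input element: weighted sum of inputs against a threshold.
    # A single such device stands in for a pile of two-input gates; the real
    # question is what the device physics can give you directly, not how many
    # three-terminal transistors you'd need to fake it.
    def threshold_element(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # e.g. a 5-input majority "gate" in one element
    print(threshold_element([1, 0, 1, 1, 0], [1] * 5, threshold=3))   # prints 1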

True, but we may never be able to “program” such devices in a “we know exactly what each step does” sense. We may have to end up copying natural neural systems and using evolutionary algorithms to produce “it works but no one knows why” configurations.
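A minimal sketch of that approach, with a toy hill-climber standing in for the evolutionary algorithm and a made-up fitness function standing in for "behaves like the target system":

    import random

    def fitness(params):
        # stand-in objective; in practice this would be "how closely does the
        # copied circuit's behavior match the natural one?"
        return -sum((p - 0.7) ** 2 for p in params)

    best = [random.random() for _ in range(20)]
    for _ in range(5000):
        mutant = [p + random.gauss(0, 0.05) for p in best]
        if fitness(mutant) > fitness(best):
            best = mutant

    print(fitness(best))   # it works, but the individual parameters explain nothing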

I have a deep suspicion you are right. The nature of artificial neural nets is already thus, and they are simpler than almost any wetware brain there is.

Maybe the only entity that will understand is a vastly more capable artificial brain that we are not worthy to specify the design of.

Neural Nets aren’t really magic; at a mathematical level they’re fairly well understood. Even with deep learning, the general answer to what an ANN is doing is basically “getting stuck in a local minimum”.

I’m not saying they’re not useful; in fact, they’re extremely good, and they produce amazing results. But the “whys” of how it works aren’t very mystical; fundamentally it’s just function optimization. When Neural Nets work poorly, we generally understand why: a lot of it is feature selection (deciding what your inputs are). Deep Learning helps with feature selection, but deciding the exact size of your deep hidden layer is still tricky.
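To make the function-optimization point concrete, here’s plain gradient descent on a one-dimensional function with two minima (nothing NN-specific, just the underlying idea): start on the wrong side of the hump and you settle into the shallower minimum.

    # f(x) = x^4 - 3x^2 + x has two minima, one shallower than the other.
    def f(x):    return x**4 - 3*x**2 + x
    def grad(x): return 4*x**3 - 6*x + 1

    for x0 in (2.0, -2.0):
        x = x0
        for _ in range(1000):
            x -= 0.01 * grad(x)              # plain gradient descent
        print(f"start {x0:+.1f} -> x = {x:+.3f}, f(x) = {f(x):+.3f}")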

There was a NIPS article recently about a Neural Net that played Atari games. What it did was essentially use a Neural Net to build a supervised reactive classifier for each game. What that means is essentially that it built something that said “given this input, always do this.” Unsurprisingly, this tended to work almost exclusively on twitch games – Space Invaders and the like. It was fundamentally incompetent at games that required any sort of planning. NNs just don’t really have memory or planning; an extra memory layer or something of the sort is really needed to make it work.

One project I’m working on is Speedup-learning for Monte-Carlo Tree Search. The idea is that you dynamically learn which parts of the state space you should search for any given state. For some states, this may be a purely reactive action, like the NN work I talked about, but if you give it a strategy game, it would realize the optimal policy isn’t a purely reactive one, but rather that it needs to sit there and heuristically consider the consequences of its actions.

This still isn’t quite memory, but it’s a pretty damn good middle ground between purely reactive reinforcement-learning or NN policies and boring, human-optimized, heuristic-driven chess solvers.
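For flavor, here is the generic “simulate consequences instead of just reacting” baseline in a few lines. This is not the speedup-learning system described above, and legal_actions/simulate are hypothetical hooks that a real game implementation would have to supply:

    def reactive_policy(state, table):
        # "given this input, always do this"
        return table[state]

    def monte_carlo_policy(state, legal_actions, simulate, n_rollouts=100):
        # try each legal action, run random continuations, keep the best average payoff
        def value(action):
            return sum(simulate(state, action) for _ in range(n_rollouts)) / n_rollouts
        return max(legal_actions(state), key=value)

Real MCTS adds a search tree and UCB-style selection on top of this; the learned part is deciding where in the state space those rollouts are actually worth spending.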

I suspect the multi-level and voting pieces could be done with some custom analog circuitry. In any case, this project would be so big, with so many components fabbed, that it would pay to design from scratch and not use an existing library.

BTW neural nets are one of the standard heuristics used for data mining. They haven’t been the best solution for any of the stuff we’ve done, but they do have some real applications. The heuristic, of course, not real neural nets.

This has been a fascinating discussion to read.
My take is that there is no credible argument right now, but using the word “credible” in a question invites opinion-based responses, as I see it.
I think we’ll only know if we try. That has so often been the answer in history: the only way to know is to try. One of the bigger issues that hasn’t been discussed as much is truly knowing what you’re trying to replicate. We need to know how the brain operates in order to define what we would be replicating. If it’s simply a matter of mathematics, it can be done; then it’s simply a matter of engineering, and most likely economics (will the end justify the large amount of effort that would need to go into it?).
We’d need to know how the brain works at both the microscopic and macroscopic levels to do so.
I think we also need to address the effect of external stimuli as well as the pure processing power of the brain itself. There would have to be a lot of external stimuli put in, quite apart from the processing-power issue.
Now, this is probably a different topic, but what would be the ethics involved in this? If you replicate the brain correctly, it would need the aforementioned stimuli in order not to go insane. Imagine a quadriplegic, deaf, dumb, and blind person. (Sounds kind of like “Johnny Got His Gun.”) Anyway, I’d like to hear folks’ take on that. If this should be a separate thread, I apologize.

We should be able to replicate the inputs to a brain long before we replicate the brain itself. Look up ‘brain-computer interface’ and ‘neuroprosthetics’.