How similar are neuronal and transistor networks?

Both neurons and transistors act as switches and amplifiers of signals, correct? Does either do anything else?

Do neurons form logic gates like transistors do? How similar is neuroplasticity to FPGAs?

How come FPGAs cannot be reprogrammed thousands or millions of times?

FPGAs tend to be unable to attain a power/performance ratio similar to an ASIC’s for that ASIC’s given task. Since an FPGA can be reprogrammed, how come it can’t form into the same microarchitecture as the ASIC and attain a similar power/performance ratio?

The main limiting factor on computer performance increases is the power/performance ratio. What is it for brains?

A typical RAM-based FPGA can be reprogrammed that many times. A flash- or EEPROM-based programmable logic device can be reprogrammed as many times as the endurance of the flash or EEPROM allows. FPGAs are typically (but not always) RAM-based, and get their configuration from some off-chip piece of non-volatile memory at startup. Other programmable logic (e.g. a “CPLD”, a device that’s similar in spirit but tends to be smaller, with relatively more combinational logic and fewer registers, intended more for glue logic than “real” digital designs) is typically configured from on-chip EEPROM or flash. That’s sometimes more convenient, but the additional IC process steps to support that non-volatile storage on the same die as the logic add cost, and may not be available at all for the latest and fastest processes.
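
To make the distinction concrete, here’s a toy Python sketch of the two configuration styles (the endurance figure is invented for illustration, not a vendor spec):

```python
# Toy model of the two configuration styles described above.
# The endurance figure is invented for illustration, not a vendor spec.

class RamBasedFpga:
    """Volatile SRAM configuration: lost at power-down, so it is
    reloaded from off-chip non-volatile memory at every startup."""
    def __init__(self, external_nvm_bitstream):
        self.nvm = external_nvm_bitstream  # off-chip flash/EEPROM
        self.config = None                 # on-chip SRAM cells

    def power_up(self):
        self.config = self.nvm             # SRAM: effectively unlimited rewrites

    def reconfigure(self, bitstream):
        self.config = bitstream            # no wear-out mechanism


class FlashBasedPld:
    """On-chip non-volatile configuration: survives power-down, but
    every reprogramming cycle wears the flash/EEPROM cells."""
    ENDURANCE = 10_000                     # illustrative write-cycle limit

    def __init__(self, bitstream):
        self.config = bitstream
        self.writes = 1

    def reconfigure(self, bitstream):
        if self.writes >= self.ENDURANCE:
            raise RuntimeError("flash endurance exceeded")
        self.config = bitstream
        self.writes += 1
```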

An FPGA can be configured to implement the same logic as your ASIC design. Indeed, that’s often how big ASICs are developed and tested before you spend the potentially >$1M to make a real prototype. In the ASIC, the interconnect between logic gates is metal wires. In the FPGA, the interconnect is metal wires routed to configurable multiplexers, routed through more metal wires (and perhaps more stages of multiplexers) to the actual logic. That’s more complicated and physically bigger (greater area), so the effective parasitic capacitances are bigger, so the FPGA is slower and higher-power.
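
As a back-of-the-envelope illustration (all delay numbers invented, not measured from real silicon), the extra routing stages show up directly as added path delay:

```python
# Back-of-the-envelope comparison of one signal path; all numbers
# are invented for illustration, not measured from real silicon.

GATE_DELAY_NS = 0.05         # delay of one logic gate
MUX_DELAY_NS = 0.10          # delay of one routing-multiplexer stage
WIRE_DELAY_NS_PER_MM = 0.08  # RC delay per mm of metal wire

def path_delay(gates, mux_stages, wire_mm):
    return (gates * GATE_DELAY_NS
            + mux_stages * MUX_DELAY_NS
            + wire_mm * WIRE_DELAY_NS_PER_MM)

# ASIC: gates connected by short, direct metal wires.
asic = path_delay(gates=5, mux_stages=0, wire_mm=0.5)

# FPGA: the same 5 gates, but the signal also traverses routing muxes
# and longer wires between configurable blocks.
fpga = path_delay(gates=5, mux_stages=4, wire_mm=2.0)

print(f"ASIC path: {asic:.2f} ns, FPGA path: {fpga:.2f} ns")
# Dynamic power scales with switched capacitance (P ~ C * V^2 * f),
# so the longer, mux-laden FPGA path also burns more power per toggle.
```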

The FPGA’s designer tries to structure the FPGA’s logic in a way that corresponds well to their best guess as to how you’re going to use it. Depending on the function that you’re realizing, performance might be essentially identical to that of an ASIC on a similar process (e.g., a multiplier that’s realized from a fixed-function hardware multiplier on the FPGA), or much worse (e.g., random logic).

I can’t help you with the neuroscience.

I’d have to say: they’re not very similar at all.
Digital logic is, by definition, two-state. Neurons use multi-state signaling, and both the amplitude and the repetition rate of the signal are important. Neuron behavior can be modeled by digital electronics, but the underlying wiring is very different.
Neuronal interconnection is also much more complicated than in a typical digital circuit.
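
To illustrate the difference, here’s a minimal leaky integrate-and-fire sketch in Python (a standard textbook abstraction with made-up constants): the input strength is encoded in the output spike rate, not in a single binary level.

```python
# Minimal leaky integrate-and-fire neuron (textbook abstraction;
# constants are illustrative). Stronger input -> higher spike rate,
# i.e. information lives in timing/rate, not in a single 0/1 level.

def spike_count(input_current, steps=1000, dt=0.1,
                leak=0.05, threshold=1.0):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (input_current - leak * v)  # integrate with leak
        if v >= threshold:                    # fire, then reset
            spikes += 1
            v = 0.0
    return spikes

for current in (0.1, 0.3, 0.6):
    print(current, spike_count(current))  # spike rate grows with input
```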

There are several limits on overall computational performance besides the switching speed and power consumption of an individual transistor.

The number of parallel processing elements, and the degree to which code can harness them, is also a major factor. Ultimately this is limited by Amdahl’s Law, even given an infinite number of CPU cores with zero hardware synchronization overhead. In short, the potential parallel speedup is quickly “poisoned” by a small percentage of serial code: Amdahl's law - Wikipedia
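
As a quick numeric illustration of the formula, speedup = 1 / ((1 - p) + p/n), where p is the parallelizable fraction and n is the core count:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel, n = core count.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.95, 0.99):
    print(f"p={p:.2f}: "
          f"16 cores -> {amdahl_speedup(p, 16):5.2f}x, "
          f"infinite cores -> {1.0 / (1.0 - p):6.2f}x max")
# Even with 99% parallel code, the speedup can never exceed 100x,
# no matter how many cores you add: the serial 1% dominates.
```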

A specific common example of this is H.264 video encoding, which cannot be significantly accelerated with GPU methods because the core algorithm is inherently sequential.

Another factor is how much instruction-level parallelism can be placed in code and the limits on extracting this in a superscalar CPU core: Instruction-level parallelism - Wikipedia
Superscalar processor - Wikipedia
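
A tiny sketch of the idea, using Python only as notation (a superscalar core sees the same dependency structure in machine instructions):

```python
# Python used purely as notation for instruction-level parallelism.

# Independent operations: a wide superscalar core could issue all
# four in the same cycle, because none depends on another's result.
a = 1 + 2
b = 3 + 4
c = 5 + 6
d = 7 + 8

# Dependency chain: each line needs the previous result, so no amount
# of issue width helps; the chain executes one step at a time.
x = 1 + 2
x = x + 3
x = x + 4
x = x + 5
```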

Although the human brain is often compared to a computer (in fact early computers were called “electronic brains”), this is misleading and implies a similarity between transistors and neurons. It also implies that human memory is stored in a discrete subsystem like computer memory.

Decades ago, the brain was often compared to a great telephone switchboard. This arose before computers were widespread. In that era a switchboard was the most complex commonly-understood device which had some vague similarity to then-envisioned brain function.

In fact the brain is nothing like a switchboard and maybe it is less like a computer than we think. The problem is nobody really knows how memories are stored or how biological function at a neuronal or higher level gives rise to thought.

For people who are geniuses or have perfect memory, nobody knows how their brains differ to produce these higher performance levels. Therefore it is impossible to say what the limiting factors are for brain function.

Despite tremendous amounts of research, the underlying biological basis for disorders like clinical depression is not known. It is almost certainly not rooted in a neurotransmitter issue at the synaptic level. Some studies suggest it may be rooted in a deficiency in neuroplasticity: Stress, Depression, and Neuroplasticity: A Convergence of Mechanisms | Neuropsychopharmacology

Whoever figures that out will probably win a Nobel Prize. The fact that nobody has shows how limited our knowledge of brain function is, so we really don’t know what limits brain performance.

A single neuron is significantly more complicated than a single transistor. It’s more similar to, but still more complicated than, a logic gate. Like most logic gates, a neuron has one more-or-less binary output. But unlike logic gates, which typically have two binary inputs, a neuron can have many inputs, and they’re generally analog.
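
A minimal sketch of that input/output structure, using the classic weighted-sum-and-threshold abstraction (weights and threshold are arbitrary, and this is not a biophysical model):

```python
# Classic weighted-sum-and-threshold abstraction of a neuron:
# many graded (analog) inputs, one roughly binary output.
# Weights and threshold are arbitrary illustrations.

def neuron(inputs, weights, threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # fire / don't fire

# Unlike a 2-input logic gate, the input count is arbitrary and the
# inputs are continuous values (think synaptic strength x firing rate).
print(neuron([0.2, 0.9, 0.4, 0.7], [0.5, 1.2, -0.3, 0.8]))  # -> 1
```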

Rephrasing to make sure I get it: by knowing beforehand what an ASIC will do, the designer can build only the transistors, intra-logic-gate wiring, and inter-logic-gate wiring that are required, in the optimal proportions, without building more transistors and wiring than necessary. This results in a greater density of used transistors per area of die, which means the signal has to do less “commuting” as it’s being processed. That commuting means more latency, power consumption, and heat. Correct?
Have FPGAs been tried for adaptive heterogeneous microarchitecture? I.e., having an FPGA which keeps flipping from one arrangement to another based on what is most efficient for what it’s being asked to do.

Reading your source and a few others, I see that glutamate is both the main excitatory neurotransmitter and involved in neuroplasticity.

What can cause increases in glutamate levels? Are there activities which have been associated with an increase in neuroplasticity or glutamate levels?

More or less, but it’s not just that some of the gates aren’t used (and waste die area, thus increasing trace length, etc.). Many of the gates are used specifically to build the multiplexers that make the wiring configurable, which means that not only are they consuming die area, but they’re actually toggling, and not just sitting there unpowered. So the wiring delays are bigger, because the traces are longer, but the logic delays are also bigger, because you need the extra multiplexers that make the chip configurable. The gates in those multiplexers also burn power.
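
To see where those extra toggling gates come from, here’s a toy gate-level view of one configurable connection (assuming a simple 2:1 routing mux; real FPGA routing muxes are much wider and optimized at the transistor level):

```python
# Toy gate-level view of one configurable connection. Real FPGA
# routing muxes are much wider and transistor-level optimized.

def nand(a, b):
    return 1 - (a & b)

def routing_mux(src0, src1, select_bit):
    """2:1 mux built from four NAND gates: the 'wire' an ASIC would
    get for free costs the FPGA four gates that sit on the signal
    path, toggle along with it, and burn dynamic power."""
    n_sel = nand(select_bit, select_bit)  # inverter
    return nand(nand(src0, n_sel), nand(src1, select_bit))

# select_bit comes from a configuration SRAM cell; changing it is
# what "rewires" the FPGA.
print(routing_mux(0, 1, select_bit=1))  # routes src1 -> output = 1
```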

It’s pretty common to reconfigure an FPGA as an infrequent “gear shift”, like changing codecs in some video processing pipeline. That usually means the whole system stops working while the FPGA reconfigures, for milliseconds to seconds. Some FPGAs have features to make that transition less awkward.

The FPGA synthesis tools can sometimes take logic that would usually be part of the configuration structures and use it as part of your design. For example, a Xilinx SRL16 primitive takes the 16:1 mux and 16-stage shift register that would normally implement a 4-input LUT (lookup table; 2^4 = 16), whose shift-register contents would normally be set only during FPGA configuration, and instead makes that logic available to your design as a run-time shift register. If you can fit your logic into such structures, that makes very efficient use of the FPGA.
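
A rough Python sketch of that dual use (my own simplification of the structure described above, not Xilinx’s actual netlist):

```python
# Simplified model of the structure described above, not Xilinx's
# actual netlist: 16 storage bits read out through a 16:1 mux.

class Lut4:
    def __init__(self, config_bits):
        assert len(config_bits) == 16       # 2^4 entries
        self.bits = list(config_bits)

    # Normal LUT mode: the 4 inputs select one of the 16 stored bits.
    def lookup(self, a, b, c, d):
        return self.bits[a | (b << 1) | (c << 2) | (d << 3)]

    # SRL16 mode: the same 16 bits, used as a shift register that
    # your design can clock at run time instead of only at config.
    def shift_in(self, bit):
        self.bits = [bit] + self.bits[:-1]

# Configured as a 4-input AND: only entry 15 (all inputs high) is 1.
and4 = Lut4([0] * 15 + [1])
print(and4.lookup(1, 1, 1, 1))  # -> 1
```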

Finally, some FPGAs support partial dynamic reconfiguration, where logic that you design can drive the signals that configure other parts of the FPGA. That’s the most flexible option, but also very hard to design for.

A single neuron is actually closer to a very complex circuit than a single transistor.

Local spiking and preprocessing in the dendrites (at a minimum) make a single neuron computationally equivalent to a neural network all by itself (“neural network” as in the mathematical kind, not the biological one we are comparing it to).
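
A sketch of that idea, loosely following the “two-layer” view of dendritic computation (weights and thresholds invented for illustration):

```python
# Loosely follows the "two-layer" view of dendritic computation:
# each dendritic branch applies its own nonlinearity before the
# soma sums the results. Weights/thresholds are invented.

def branch(inputs, weights, threshold=0.5):
    """A dendritic subunit: local weighted sum + local nonlinearity."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, s - threshold)          # local 'spike' strength

def soma(branch_outputs, soma_threshold=0.4):
    return 1 if sum(branch_outputs) >= soma_threshold else 0

# Two branches preprocess their own synapses, then the soma decides,
# which is structurally a small two-layer network in one cell.
b1 = branch([0.9, 0.2], [0.8, 0.5])
b2 = branch([0.1, 0.7], [0.3, 0.9])
print(soma([b1, b2]))  # -> 1
```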