Do we really have much idea about how neurons/brains work?

I know we understand the chemical reactions involved with how neurons fire. I also know we have some understanding of large-scale organization and functioning of the brain (what parts of the brain process which things, the idea of differing groups of neurons reaching a critical mass of “consensus” before a decision is made, etc.). But how much do we really know about how a neuron’s firing relates to a thought, an action, a stored memory?

For example, we know that, in electronic computers, we have long-term data stored in, say, magnetic media. Little bits of magnetically aligned materials, indicating an ON or OFF state. And short-term data and current operations are run through transistors, in which electricity (a) keeps the data there by refreshing it continually and (b) decides whether to change it based on other, differing levels of electricity.

How does this work with neurons? How many different states can a neuron have? Have we ever identified the human brain’s equivalent of an AND or OR gate? How are long-term memories stored–is it a “cold” chemical storage, a la the hard disk, or is it a “hot” storage that requires continual refreshment and attention (to the data, not just to the living neuron) to make sure it doesn’t evaporate?

Your question is an excellent one but, unfortunately, that type of questioning is why I went to grad school in behavioral neuroscience and then quit very unhappily when I found out how slowly progress on these types of things is going. It just isn’t there in a big-picture sense and usually not even in a tiny-picture sense. Thousands of researchers are making progress on their individual goals, but it is nowhere near the scope of your questions.

In addition, abandon the whole idea of analogies between computers and any living creature’s nervous system. That is one of the biggest mistakes people have made over time. They have no known similarity and shouldn’t be compared at all.

They’re chipping away at it - still a long, long way to go.

Neuroscience is the big unknown. That’s why it may be the next frontier.

I’ve heard that the President of MIT has a neuroscience background and that they have built a new facility to study it.

Neuroscience could be to the next generation what computer science has been to the current generation.

Neuroscience is not well understood but a lot of very intelligent people are starting to take an interest in it. There have been a lot of advances but who knows where it will lead. That’s why basic science has to be respected and funded.

In 2006 my advisor published a paper in Science detailing how the brain (specifically prefrontal cortex/basal ganglia interactions) may implement digital logic circuits.
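To give a flavor of what “digital logic in neurons” can mean, here is a toy threshold-unit sketch in Python. This is my own illustration, not anything from the paper itself: a single unit that fires when its weighted input crosses a threshold can act as an AND or an OR gate, depending on where the threshold is set.

```python
# Toy illustration (not from the paper): a single threshold "neuron"
# behaves like an AND or OR gate depending on its weights and threshold.

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```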

Weird, I was about to pop into this thread and say, “sure, we know how the brain works, one of my college professors explained it to me back in 2006!” and I was surprised to discover that the professor I was referring to was the lead author on the paper linked by alterego.

Anyway, I would say that a shocking amount of neurological behavior has been effectively modeled by scientists such as Dr. O’Reilly, and this is done with a minimum of hand-waving and with little speculation beyond biochemically well-characterized ion channels, neurotransmitter receptors, etc. If you check out his page and literature, you’ll find plentiful examples of simulated neural networks orders of magnitude less capable than actual biological ones (in terms of the number of nodes or “neurons,” the number of relationships modeled between those nodes, etc.) that are still able to carry out some shockingly complex behaviors, like object recognition independent of rotation or location within the field of view.

A moderate number of biochemical players (membrane receptors, signal transduction pathways, etc.), combined with an enormous number of neurons related to one another with tremendous flexibility, can easily yield a system with more potential states than there are particles in the universe. Taking that course from O’Reilly and my other limited reading in the field has convinced me to abandon my “ghost in the machine,” or homunculus-centric, views of the human brain; now I really can believe it’s all in the neurons.
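To make the “more states than particles” claim concrete, here is a crude back-of-the-envelope of my own. It treats each of roughly 10^11 neurons as a simple on/off unit, which is a gross oversimplification of real neurons, but it gets the combinatorial point across.

```python
# Crude back-of-the-envelope: treat each of ~1e11 neurons as a binary unit
# (a huge oversimplification) and count the possible joint on/off states.
import math

neurons = 10**11                          # rough neuron count in a human brain
log10_states = neurons * math.log10(2)    # log10 of 2**neurons
print(f"~10^{log10_states:.0f} possible on/off patterns")
print("versus roughly 10^80 particles in the observable universe")
```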

My apologies for the scarcely directed ramblings, but the course left me with a deeper appreciation of the human brain rather than any technical competence in the field of computational simulation of neural networks.

Crazy!

And I’m pretty sure he only teaches it once a year max…=)

The operative word here is “may”. The experts are still largely at the stage of making (educated) guesses about how particular little aspects of the brain’s circuitry work (and what they are for).

We know an awful lot about how the brain works, but there is also clearly an awful lot we don’t know, and we are very far from fitting all the vast number of details we do know into a comprehensible big picture. Brains are very, very complicated.

[Or, in other words, what Shagnasty said. Although I think he (?) goes a little too far in dismissing brain/computer analogies. There are useful analogies there, at several levels, although it is true that in the past, and perhaps sometimes still now, they have been pushed much too far.]

If you thought, before taking the course, that we needed a “ghost in the machine” to explain the mind because the brain could not possibly be computationally powerful enough to do so, and you now think that, because the brain’s computational power is so vast, fully understanding the brain’s internal workings is bound to provide a full understanding of the mind, then you did not understand the issues before and you do not understand them now.

This is very true. The use of analogy in trying to understand how cognition and brain function works is even worse than using that method to understand quantum mechanics.

We have a very simple understanding of how things work biochemically at the level of individual neurons, and occasionally some egghead in a bow tie wins a Nobel Prize for making a discovery in that vein. At the same time, such revelations highlight how little we actually understand about the process of cognition in neurological or biochemical terms. It is (if you’ll excuse the ironic attempt at analogy) like watching a single relay flipping in the middle of a gigantic telephone network, and trying to figure out what the conversation through that relay is about.

Eric Kandel’s *In Search of Memory* is a fascinating look at both this field and his own unassuming entry into it and contributions to it. He explains what is really known about memory and cognition at the neuron level (not much, and he was instrumental in developing much of it), his initial disappointment at how little he was actually able to understand (mirroring Shagnasty’s view), and how that eventually grew into a fascination with the niggling details of memory formation.

Stranger

We know a lot about the way the eyes are moved: which neurons encode the target of an eye movement, how they activate this and that neuron, how this ends up in an activation of the eye muscles, etc.

The reason we know so much about this system is that it is a relatively simple motor system, with only 3 degrees of freedom. Much simpler than the arm, for instance!

Is that a biology class, or a computer science class, or what?

ETA: Never mind, I missed the “Psyc” at the end. So it’s a psychology class?

Yep, it’s a psychology course. The class is [deleted on request]

The general idea is: start with biologically plausible leaky integrate-and-fire neurons (which control how neurons become active) and a biologically plausible learning rule (which controls how neurons change the strength of the connections, or weights, between each other). Then investigate the large-scale functional connectivity of the brain. Analyze fMRI and EEG data, generally searching for cases where the brain doesn’t function correctly: for example, patient HM with medial temporal lobe damage, or Stroop task subjects who experience cognitive interference. After making hypotheses about what brain areas are involved, how those brain areas are connected to one another, and what representations the neurons in those brain areas have (in other words, what computations those neurons are performing), you can create a model. After weeks, months or years of hard work your model may begin to exhibit some of the same phenomenology found in the brain. If you are successful at this - in other words, you really are modeling the phenomenon - you should be able to make novel predictions about the brain that can then be investigated by other researchers.
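For what it’s worth, here is a minimal Python sketch of the leaky integrate-and-fire idea. This is my own toy version with arbitrary parameter values, not the implementation used in our simulator.

```python
# Minimal leaky integrate-and-fire sketch (toy version; parameter values
# are arbitrary illustrative choices, not anyone's published settings).

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate input over time; emit a spike when the membrane
    potential crosses threshold, then reset."""
    v = v_rest
    spikes = []
    for t, I in enumerate(input_current):
        # Leak toward the resting potential plus drive from the input current
        dv = (-(v - v_rest) + I) * (dt / tau)
        v += dv
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset      # reset after a spike
    return spikes

# A constant drive above threshold produces a regular spike train
print(simulate_lif([1.5] * 200))
```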

Examples where this approach has been successful include the hippocampus and the prefrontal cortex. For example, the hippocampus model very cleanly demonstrates how purely associative learning (as opposed to error-driven learning, the delta rule) can accomplish both pattern separation (keeping different episodes different) and pattern completion (recalling an episode based on partial information).
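As a rough illustration of pattern completion, here is a generic Hopfield-style associative memory in Python. This is not the hippocampus model described above, just a small demonstration that a purely Hebbian weight matrix can recover a stored pattern from a corrupted cue.

```python
# Generic Hopfield-style associative memory (not the hippocampus model
# described above), illustrating pattern completion from a partial cue.
import numpy as np

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])

# Hebbian (purely associative) weights: co-active units get linked
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Corrupt the first stored pattern by flipping one element
cue = patterns[0].copy()
cue[0] = -cue[0]

state = cue.copy()
for _ in range(5):                # a few synchronous update sweeps
    state = np.sign(W @ state)

print("cue:     ", cue)
print("recalled:", state.astype(int))
print("matches stored pattern:", np.array_equal(state, patterns[0]))
```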

We actively develop our neural simulator, emergent. If you’re interested in learning more you can install emergent and then go through the textbook simulations. All you have to do is install emergent, download a project that you are interested in, and follow the instructions that are shown in the simulator after you open the project (they are self-documenting).

For more direct answers, we do know what many sections of the brain more or less do, because we can watch when and how they get activated. Beyond that, we don’t know too much. Exactly how thinking works is not something we really understand.

We also know roughly how individual neurons work. Mostly. They fire in sequence, but how that sequence is determined is not really well understood. Neurons technically only go on and off, but I am given to understand they activate in patterns, and what controls this we don’t know. That is, they pulse or flutter and somehow tell specific neurons they connect to when to fire.

We don’t know exactly how a lot of things work, but that doesn’t mean these things are a mystery. A lot of the models explain quite a bit.

Imagine sending a 1950s car back in time to the early 1800s. They wouldn’t know exactly how everything worked, but they would be able to figure out what the parts do and how it all works together. They couldn’t mass-produce a 1950s car or even reproduce one, but they could still have a very, very good understanding of it. Our understanding of the brain is similar.

:rolleyes:

The “ghost in the machine” bit was tongue in cheek, but to some extent my understanding of brain regions like the dlPFC from my earlier undergraduate training in psychology treated “areas of higher reasoning” as vast bits of uncharted territory, functionally interchangeable with “a chunk of neural tissue that magically gives rise to long-term planning and consciousness.” No, I did not actually believe that there was a minuscule individual sitting inside my cerebral cortex interpreting and responding appropriately to ascending inputs.

Also, I don’t recall ever stating that the brain’s computational power was vast; I don’t think it is. I said that it was capable of using combinatorial diversity to represent a vast number of discrete states. The knock-on effect of this is that it requires vast computational power to effectively simulate the behavior of even moderately sized organic neural networks.

All I’m saying is that computational methods of simulating neural networks have so far provided excellent tools for investigating this problem. Researchers in the field have made great progress in providing plausible explanations of complex neural behavior that lend themselves to testable, disprovable hypotheses about how the brain “really works.”

The doe-eyed line that “the human brain is so complex that the human mind will simply never be capable of understanding its inner workings” is an easy one I was exposed to a lot in psychology and still often see in discussions such as this one, but that story belies the excellent progress that has been made in the field. Several of the posters in this thread have provided an excellent jumping-off point for further exploring the field.